Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Geoserver to load data on the map
In this notebook we'll take a look at using Geoserver to render raster data to the map. Geoserver is an open source server for sharing geospatial data. It includes a tiling server, which the GeoJS map uses to render data efficiently for visualization. Geonotebook comes with a Vagrant virtual machine for hosting a local instance of Geoserver. This instance can be used for testing Geonotebook. To use it, simply install Vagrant using your system package manager; then, in a checked-out copy of the source code, go to the devops/geoserver/ folder and run vagrant up
Step1: Make sure you have the geoserver VM running
The following cell will check whether or not you have a running instance of the geoserver virtual machine available. It should show text to the effect of
Step2: Display geoserver status
This should ensure the client can successfully connect to your VM. If you do not see the Geoserver 'Status' page, then something is wrong and the rest of the notebook may not function correctly.
Step3: Get the data from S3
Next, get some sample data from S3. This GeoTIFF represents NBAR data for September 2010, covering a section of Washington State's Glacier National Park. It is approximately 200 MB and may take some time to download from Amazon's S3.
The tiff itself has been slightly transformed from its original HDF dataset. In particular, it only has 4 bands (R, G, B & NDVI) and includes some GeoTIFF tags with band statistics.
Step4: Adding an RGB layer to the map
Here we add our first data layer to the map. To do this we use a RasterData object imported from the geonotebook.wrappers package. By default RasterData objects read tiffs using the rasterio library. RasterData objects are designed to provide a consistent API to raster data across a number of different readers and systems. We will use the add_layer function to add the RasterData object to the map.
Step5: To add the layer, we call M.add_layer, passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 correspond to Red, Green and Blue). When adding layers you can only add a layer with either 3 bands (R, G, B) or one band (we'll see a one-band example in a moment).
Step6: This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.
Step7: The dataset may appear alarmingly dark. This is because the data itself is not well formatted. We can see this by looking at band min and max values
Step8: R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers including setting an interval for scaling our data, and setting a gamma to brighten the image.
First we'll demonstrate removing the layer
Step9: Then we can re-add the layer with a color interval of 0 to 1.
Step10: We can also brighten this up by changing the gamma.
Note We don't have to remove the layer before updating its options. Calling M.add_layer(...) with the same rd object will simply replace any existing layer with the same name. By default the layer's name is inferred from the filename.
Step11: Finally, let's add a little opacity to the layer so we can see some of the underlying base map features.
Step12: Adding a single band Layer
Adding a single band layer uses the same M.add_layer(...) interface. Keep in mind that several of the styling options are slightly different. By default, single band rasters are rendered with a built-in mapping of colors to band values.
Step13: You may find this colormap a little aggressive, in which case you can replace it with any of the built-in matplotlib colormaps
Step14: You can also include custom colormaps, as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data, -1 will appear blue, 0 will appear beige and 1 will appear green.
Step15: What can I do with this data?
We will address the use of annotations for analysis and data comparison in a separate notebook. For now, let's focus on a small agricultural area north of I-90
Step16: Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon).
Please annotate a small region of the fields.
We can access this data from the annotation's data attribute. We'll cover exactly what is going on here in another notebook.
Step17: As a sanity check, we can prove the data is from the region we've selected by plotting it with matplotlib's imshow function
Step18: NDVI Segmentation analysis
Once we have this data we can run arbitrary analyses on it. In the next cell we use a sobel filter and a watershed transformation to generate a binary mask of the data. We then use an implementation of marching cubes to vectorize the data, effectively segmenting green areas (e.g. fields) from surrounding areas.
This next cell requires both scipy and scikit-image. Check your operating system documentation for how best to install these packages. | Python Code:
%matplotlib inline
from matplotlib import pylab as plt
Explanation: Using Geoserver to load data on the map
In this notebook we'll take a look at using Geoserver to render raster data to the map. Geoserver is an open source server for sharing geospatial data. It includes a tiling server, which the GeoJS map uses to render data efficiently for visualization. Geonotebook comes with a Vagrant virtual machine for hosting a local instance of Geoserver. This instance can be used for testing Geonotebook. To use it, simply install Vagrant using your system package manager; then, in a checked-out copy of the source code, go to the devops/geoserver/ folder and run vagrant up
End of explanation
!cd ../devops/geoserver && vagrant status
Explanation: Make sure you have the geoserver VM running
The following cell will check whether or not you have a running instance of the geoserver virtual machine available. It should show text to the effect of:
```
Current machine states:
geoserver running (virtualbox)
The VM is running. To stop this VM, you can run vagrant halt to
shut it down forcefully, or you can run vagrant suspend to simply
suspend the virtual machine. In either case, to restart it again,
simply run vagrant up.
```
If it does not show the geoserver machine in a state of running, you can start the machine by going to ../devops/geoserver/ and running vagrant up
End of explanation
from IPython.core.display import display, HTML
from geonotebook.config import Config
geoserver = Config().vis_server
display(HTML(geoserver.c.get("/about/status").text))
Explanation: Display geoserver status
This should ensure the client can successfully connect to your VM. If you do not see the Geoserver 'Status' page, then something is wrong and the rest of the notebook may not function correctly.
End of explanation
!curl -o /tmp/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff http://golden-tile-geotiffs.s3.amazonaws.com/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff
Explanation: Get the data from S3
Next, get some sample data from S3. This GeoTIFF represents NBAR data for September 2010, covering a section of Washington State's Glacier National Park. It is approximately 200 MB and may take some time to download from Amazon's S3.
The tiff itself has been slightly transformed from its original HDF dataset. In particular, it only has 4 bands (R, G, B & NDVI) and includes some GeoTIFF tags with band statistics.
End of explanation
# Set the center of the map to the location the data
M.set_center(-120.32, 47.84, 7)
from geonotebook.wrappers import RasterData
rd = RasterData('data/L57.Globe.month09.2010.hh09vv04.h6v1.doy247to273.NBAR.v3.0.tiff')
rd
Explanation: Adding an RGB layer to the map
Here we add our first data layer to the map. To do this we use a RasterData object imported from the geonotebook.wrappers package. By default RasterData objects read tiffs using the rasterio library. RasterData objects are designed to provide a consistent API to raster data across a number of different readers and systems. We will use the add_layer function to add the RasterData object to the map.
End of explanation
M.add_layer(rd[1, 2, 3], opacity=1.0)
next(M.layers.annotation.points[0].data)
from geonotebook.vis.ktile.utils import get_layer_vrt
print(get_layer_vrt(M.layers[0]))
Explanation: To add the layer, we call M.add_layer, passing in a subset of the raster data set's bands. In this case we index into rd with the list [1, 2, 3]. This actually returns a new RasterData object with only three bands available (in this case bands 1, 2 and 3 correspond to Red, Green and Blue). When adding layers you can only add a layer with either 3 bands (R, G, B) or one band (we'll see a one-band example in a moment).
End of explanation
M.layers
Explanation: This should have added an RGB dataset to the map for visualization. You can also see what layers are available via the M.layers attribute.
End of explanation
print("Color Min Max")
print("Red: {}, {}".format(rd[1].min, rd[1].max))
print("Green: {}, {}".format(rd[2].min, rd[2].max))
print("Blue: {}, {}".format(rd[3].min, rd[3].max))
Explanation: The dataset may appear alarmingly dark. This is because the data itself is not well formatted. We can see this by looking at band min and max values:
End of explanation
M.remove_layer(M.layers[0])
Explanation: R,G,B values should be between 0 and 1. We can remedy this by changing some of the styling options that are available on the layers including setting an interval for scaling our data, and setting a gamma to brighten the image.
First we'll demonstrate removing the layer:
End of explanation
M.add_layer(rd[1, 2, 3], interval=(0,1))
Explanation: Then we can re-add the layer with a color interval of 0 to 1.
End of explanation
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5)
Explanation: We can also brighten this up by changing the gamma.
Note We don't have to remove the layer before updating its options. Calling M.add_layer(...) with the same rd object will simply replace any existing layer with the same name. By default the layer's name is inferred from the filename.
End of explanation
M.add_layer(rd[1, 2, 3], interval=(0,1), gamma=0.5, opacity=0.75)
# Remove the layer before moving on to the next section
M.remove_layer(M.layers[0])
Explanation: Finally, let's add a little opacity to the layer so we can see some of the underlying base map features.
End of explanation
M.add_layer(rd[4])
Explanation: Adding a single band Layer
Adding a single band layer uses the same M.add_layer(...) interface. Keep in mind that several of the styling options are slightly different. By default, single band rasters are rendered with a built-in mapping of colors to band values.
End of explanation
cmap = plt.get_cmap('winter', 10)
M.add_layer(rd[4], colormap=cmap, opacity=0.8)
Explanation: You may find this colormap a little aggressive, in which case you can replace it with any of the built-in matplotlib colormaps:
End of explanation
from matplotlib.colors import LinearSegmentedColormap
# Divergent Blue to Beige to Green colormap
cmap = LinearSegmentedColormap.from_list(
    'ndvi', ['blue', 'beige', 'green'], 20)
# Add layer with custom colormap
M.add_layer(rd[4], colormap=cmap, opacity=0.8, min=-1.0, max=1.0)
Explanation: You can also include custom colormaps, as in this example. Here we create a linear segmented colormap that transitions from Blue to Beige to Green. When mapped to our NDVI band data, -1 will appear blue, 0 will appear beige and 1 will appear green.
End of explanation
M.set_center(-119.25618502500376, 47.349300631765104, 11)
Explanation: What can I do with this data?
We will address the use of annotations for analysis and data comparison in a separate notebook. For now, let's focus on a small agricultural area north of I-90:
End of explanation
layer, data = next(M.layers.annotation.rectangles[0].data)
data
Explanation: Go ahead and start a rectangular annotation (Second button to the right of the 'CellToolbar' button - with the square icon).
Please annotate a small region of the fields.
We can access this data from the annotation's data attribute. We'll cover exactly what is going on here in another notebook.
End of explanation
import numpy as np
fig, ax = plt.subplots(figsize=(16, 16))
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
Explanation: As a sanity check, we can prove the data is from the region we've selected by plotting it with matplotlib's imshow function:
Note The scale of the matplotlib image may seem slightly different than the rectangle you've selected on the map. This is because the map is displaying in Web Mercator projection (EPSG:3857) while imshow is simply displaying the raw data, selected out of the geotiff (you can think of it as being in a 'row', 'column' projection).
End of explanation
# Adapted from the scikit-image segmentation tutorial
# See: http://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
import numpy as np
from skimage import measure
from skimage.filters import sobel
from skimage.morphology import watershed
from scipy import ndimage as ndi
THRESHOLD = 20
WATER_MIN = 0.2
WATER_MAX = 0.6
fig, ax = plt.subplots(figsize=(16, 16))
edges = sobel(data)
markers = np.zeros_like(data)
markers[data > WATER_MIN] = 2
markers[data > WATER_MAX] = 1
mask = (watershed(edges, markers) - 1).astype(bool)
seg = np.zeros_like(mask, dtype=int)
seg[~mask] = 1
# Fill holes
seg = ndi.binary_fill_holes(seg)
# Ignore entities smaller than THRESHOLD
label_objects, _ = ndi.label(seg)
sizes = np.bincount(label_objects.ravel())
mask_sizes = sizes > THRESHOLD
mask_sizes[0] = 0
clean_segs = mask_sizes[label_objects]
# Find contours of the segmented data
contours = measure.find_contours(clean_segs, 0)
ax.imshow(data, interpolation='none', cmap=cmap, clim=(-1.0, 1.0))
ax.axis('tight')
for n, contour in enumerate(contours):
    ax.plot(contour[:, 1], contour[:, 0], linewidth=4)
Explanation: NDVI Segmentation analysis
Once we have this data we can run arbitrary analyses on it. In the next cell we use a sobel filter and a watershed transformation to generate a binary mask of the data. We then use an implementation of marching cubes to vectorize the data, effectively segmenting green areas (e.g. fields) from surrounding areas.
This next cell requires both scipy and scikit-image. Check your operating system documentation for how best to install these packages.
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set up working directory
Step1: README
This part of the pipeline searches for the SSU rRNA gene fragments, classifies them, and extracts reads aligned to a specific region. It is also the heavy-lifting part of the whole pipeline (more CPUs will help).
This part works with one seqfile at a time. You just need to change the "Seqfile" and maybe other parameters in the two cells below.
To run commands, click "Cell" then "Run All". After it finishes, you will see "*** pipeline runs successfully
Step2: Other parameters to set
Step3: Pass hits to mothur aligner
Step4: Get aligned seqs that have > 50% matched to references
Step5: Search is done here (the computationally intensive part). Hooray!
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter
Step6: Classify SSU rRNA gene seqs using SILVA
Step7: Classify SSU rRNA gene seqs with Greengene for copy correction later
Step8: This part of the pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis).
Following are files useful for community analysis | Python Code:
cd ~/Desktop/SSUsearch/
mkdir -p ./workdir
#check seqfile files to process in data directory (make sure you still remember the data directory)
!ls ./data/test/data
Explanation: Set up working directory
End of explanation
Seqfile='./data/test/data/1c.fa'
Explanation: README
This part of the pipeline searches for the SSU rRNA gene fragments, classifies them, and extracts reads aligned to a specific region. It is also the heavy-lifting part of the whole pipeline (more CPUs will help).
This part works with one seqfile at a time. You just need to change the "Seqfile" and maybe other parameters in the two cells below.
To run commands, click "Cell" then "Run All". After it finishes, you will see "*** pipeline runs successfully :)" at the bottom of this page.
If your computer has many processors, there are two ways to make use of the resource:
Set "Cpu" to a higher number.
Make more copies of this notebook (click "File" then "Make a copy" in the menu bar), so you can run the step on multiple files at the same time.
(Again we assume the "Seqfile" is quality trimmed.)
Here we will process one file at a time; set the "Seqfile" variable to the name of the seqfile to be processed.
First part of the seqfile basename (separated by ".") will be the label of this sample, so name it properly.
e.g. for "/usr/local/notebooks/data/test/data/1c.fa", "1c" will be the label of this sample.
End of explanation
Cpu='1' # maximum number of threads for search and alignment
Hmm='./data/SSUsearch_db/Hmm.ssu.hmm' # hmm model for ssu
Gene='ssu'
Script_dir='./scripts'
Gene_model_org='./data/SSUsearch_db/Gene_model_org.16s_ecoli_J01695.fasta'
Ali_template='./data/SSUsearch_db/Ali_template.silva_ssu.fasta'
Start='577' #pick regions for de novo clustering
End='727'
Len_cutoff='100' # min length for reads picked for the region
Gene_tax='./data/SSUsearch_db/Gene_tax.silva_taxa_family.tax' # silva 108 ref
Gene_db='./data/SSUsearch_db/Gene_db.silva_108_rep_set.fasta'
Gene_tax_cc='./data/SSUsearch_db/Gene_tax_cc.greengene_97_otus.tax' # greengene 2012.10 ref for copy correction
Gene_db_cc='./data/SSUsearch_db/Gene_db_cc.greengene_97_otus.fasta'
# first part of file basename will the label of this sample
import os
Filename=os.path.basename(Seqfile)
Tag=Filename.split('.')[0]
import os
New_path = '{}:{}'.format('~/Desktop/SSUsearch/external_tools/bin/', os.environ['PATH'])
Hmm=os.path.abspath(Hmm)
Seqfile=os.path.abspath(Seqfile)
Script_dir=os.path.abspath(Script_dir)
Gene_model_org=os.path.abspath(Gene_model_org)
Ali_template=os.path.abspath(Ali_template)
Gene_tax=os.path.abspath(Gene_tax)
Gene_db=os.path.abspath(Gene_db)
Gene_tax_cc=os.path.abspath(Gene_tax_cc)
Gene_db_cc=os.path.abspath(Gene_db_cc)
os.environ.update(
{'PATH':New_path,
'Cpu':Cpu,
'Hmm':os.path.abspath(Hmm),
'Gene':Gene,
'Seqfile':os.path.abspath(Seqfile),
'Filename':Filename,
'Tag':Tag,
'Script_dir':os.path.abspath(Script_dir),
'Gene_model_org':os.path.abspath(Gene_model_org),
'Ali_template':os.path.abspath(Ali_template),
'Start':Start,
'End':End,
'Len_cutoff':Len_cutoff,
'Gene_tax':os.path.abspath(Gene_tax),
'Gene_db':os.path.abspath(Gene_db),
'Gene_tax_cc':os.path.abspath(Gene_tax_cc),
'Gene_db_cc':os.path.abspath(Gene_db_cc)})
!echo "*** make sure: parameters are right"
!echo "Seqfile: $Seqfile\nCpu: $Cpu\nFilename: $Filename\nTag: $Tag"
cd workdir
mkdir -p $Tag.ssu.out
### start hmmsearch
%%bash
echo "*** hmmsearch starting"
time hmmsearch --incE 10 --incdomE 10 --cpu $Cpu \
--domtblout $Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
-o /dev/null -A $Tag.ssu.out/$Tag.qc.$Gene.sto \
$Hmm $Seqfile
echo "*** hmmsearch finished"
!python $Script_dir/get-seq-from-hmmout.py \
$Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
$Tag.ssu.out/$Tag.qc.$Gene.sto \
$Tag.ssu.out/$Tag.qc.$Gene
Explanation: Other parameters to set
End of explanation
%%bash
echo "*** Starting mothur align"
cat $Gene_model_org $Tag.ssu.out/$Tag.qc.$Gene > $Tag.ssu.out/$Tag.qc.$Gene.RFadded
# mothur does not allow tab between its flags, thus no indents here
time mothur "#align.seqs(candidate=$Tag.ssu.out/$Tag.qc.$Gene.RFadded, template=$Ali_template, threshold=0.5, flip=t, processors=$Cpu)"
rm -f mothur.*.logfile
Explanation: Pass hits to mothur aligner
End of explanation
!python $Script_dir/mothur-align-report-parser-cutoff.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.report \
$Tag.ssu.out/$Tag.qc.$Gene.align \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter \
0.5
!python $Script_dir/remove-gap.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa
Explanation: Get aligned seqs that have > 50% matched to references
End of explanation
!python $Script_dir/region-cut.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Start $End $Len_cutoff
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter."$Start"to"$End".cut.lenscreen $Tag.ssu.out/$Tag.forclust
Explanation: Search is done here (the computationally intensive part). Hooray!
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter:
aligned SSU rRNA gene fragments
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter.fa:
unaligned SSU rRNA gene fragments
Extract the reads mapped to the 150 bp region in V4 (positions 577-727 in the E. coli SSU rRNA gene) for unsupervised clustering
End of explanation
%%bash
rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.silva_taxa_family*.taxonomy
mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db, taxonomy=$Gene_tax, cutoff=50, processors=$Cpu)"
mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.silva_taxa_family*.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy.count
!rm -f mothur.*.logfile
Explanation: Classify SSU rRNA gene seqs using SILVA
End of explanation
%%bash
rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.greengene_97_otus*.taxonomy
mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db_cc, taxonomy=$Gene_tax_cc, cutoff=50, processors=$Cpu)"
mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.greengene_97_otus*.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy.count
!rm -f mothur.*.logfile
# check the output directory
!ls $Tag.ssu.out
Explanation: Classify SSU rRNA gene seqs with Greengene for copy correction later
End of explanation
!echo "*** pipeline runs successfully :)"
Explanation: This part of the pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis).
Following are files useful for community analysis:
1c.577to727: aligned fasta file of seqs mapped to target region for de novo clustering
1c.qc.ssu.align.filter: aligned fasta file of all SSU rRNA gene fragments
1c.qc.ssu.align.filter.wang.gg.taxonomy: Greengene taxonomy (for copy correction)
1c.qc.ssu.align.filter.wang.silva.taxonomy: SILVA taxonomy
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Carbonic Acid/Bicarbonate/Carbonate Equilibrium $\require{mhchem}$
Carbonic Acid ($\ce{H2CO3}$), Bicarbonate ($\ce{HCO3-}$) and Carbonate ($\ce{CO3^{2-}}$) form in water through the following equilibrium reactions
Step1: Solution Definition
We define a simple solution that contains 1 mmol of Sodium Bicarbonate ($\ce{NaHCO3}$)
Step2: List Definition
We initialize four arrays, one for the pH and one for each of the different carbonate species.
Step3: Calculation Loop
We now iteratively change the pH to the desired value, using the change_ph function to dose either hydrochloric acid ($\ce{HCl}$) or lye ($\ce{NaOH}$). Using the total function we can find the total amount of carbon dioxide, bicarbonate and carbonate.
Step4: Display Results
Using matplotlib we can display the results | Python Code:
%pylab inline
from phreeqpython import PhreeqPython
# create new PhreeqPython instance
pp = PhreeqPython()
Explanation: The Carbonic Acid/Bicarbonate/Carbonate Equilibrium $\require{mhchem}$
Carbonic Acid ($\ce{H2CO3}$), Bicarbonate ($\ce{HCO3-}$) and Carbonate ($\ce{CO3^{2-}}$) form in water through the following equilibrium reactions:
$$ \ce{CO2 + H2O <=> H2CO3} $$
$$ \ce{H2CO3 <=> HCO3- + H+} $$
$$ \ce{HCO3- <=> CO3^{2-} + H+} $$
The distribution of carbonic acid, bicarbonate and carbonate is dependent on the pH of the water, and is easily simulated using PhreeqPython.
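Before running the PhreeqPython simulation below, that pH dependence can also be sketched analytically from the two dissociation constants. A minimal sketch — the pKa values and the `carbonate_fractions` helper are illustrative assumptions (standard textbook values at 25 °C), not taken from the PhreeqPython database:

```python
# Textbook pKa values for the carbonate system at 25 degrees C (assumed here,
# not read from the PhreeqPython/PHREEQC database)
PKA1, PKA2 = 6.35, 10.33

def carbonate_fractions(pH):
    """Equilibrium fractions of CO2(aq)/H2CO3, HCO3-, and CO3^2- at a given pH."""
    h = 10.0 ** -pH
    k1, k2 = 10.0 ** -PKA1, 10.0 ** -PKA2
    denom = h * h + k1 * h + k1 * k2
    return h * h / denom, k1 * h / denom, k1 * k2 / denom

co2_f, hco3_f, co3_f = carbonate_fractions(8.3)
# Around pH 8.3, bicarbonate dominates the speciation
assert hco3_f > 0.9
```

The three fractions always sum to 1, and sweeping pH from 0 to 14 reproduces the same qualitative curves the PhreeqPython loop below computes with full activity corrections.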
Importing Modules
We start by importing the phreeqpython package and creating a new PhreeqPython instance
End of explanation
solution = pp.add_solution_simple({'NaHCO3':1.0})
print("This solution has a pH of: {0:.2f} and a conductivity of: {1:.2f} uS/cm".format(solution.pH,solution.sc))
Explanation: Solution Definition
We define a simple solution that contains 1 mmol of Sodium Bicarbonate ($\ce{NaHCO3}$)
End of explanation
phs = []
co2 = []
hco3 = []
co3 = []
Explanation: List Definition
We initialize four arrays, one for the pH and one for each of the different carbonate species.
End of explanation
for pH in arange(0,14.1,0.1):
    # change the solution pH
    solution.change_ph(pH)
    # get and store the ph, CO2, HCO3 and CO3
    phs.append(pH)
    co2.append(solution.total('CO2')*1000)
    co3.append(solution.total('CO3')*1000)
    hco3.append(solution.total('HCO3')*1000)
Explanation: Calculation Loop
We now iteratively change the pH to the desired value, using the change_ph function to dose either hydrochloric acid ($\ce{HCl}$) or lye ($\ce{NaOH}$). Using the total function we can find the total amount of carbon dioxide, bicarbonate and carbonate.
End of explanation
fig = plt.figure(figsize=[14,6])
plt.plot(phs,co2,label='CO2')
plt.plot(phs,hco3,label='HCO3-')
plt.plot(phs,co3,label='CO3-2')
plt.xlabel("pH")
plt.ylabel("Concentration (mmol)")
plt.title("Carbonic Acid, Bicarbonate, Carbonate distribution")
lgnd = plt.legend()
Explanation: Display Results
Using matplotlib we can display the results:
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 19
Step1: Now, let's set an initial population $n_0$, pick a couple of different per capita rates of change $r_c$, and run them to see what happens.
Step2: The one critical part of the whole thing
Step3: Now, let's create a bunch of time points and evaluate the ODE for different values of rc!
Step4: Logistic growth is a slightly different approach. It takes into account the fact that populations usually can't just keep growing without bound. In fact, their growth rate is directly related to their current size.
The model looks something like this
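A standard way to write the logistic model (assumed here — the carrying capacity $K$ caps the population; the lecture's exact notation may differ) is $\frac{dn}{dt} = r_c \, n \, (1 - n/K)$. A minimal sketch of solving it with odeint, using illustrative parameter values:

```python
import numpy as np
from scipy.integrate import odeint

rc, K = 0.5, 100.0  # illustrative growth rate and carrying capacity

def logistic_growth(n, t):
    # growth slows to zero as n approaches the carrying capacity K
    return rc * n * (1.0 - n / K)

t = np.linspace(0, 40, 401)
n = odeint(logistic_growth, 5.0, t).flatten()

assert n[0] == 5.0            # starts at the initial population
assert abs(n[-1] - K) < 0.1   # levels off at the carrying capacity
```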
Step5: Now we need to write the function that implements the differential equation.
Step6: Now we simulate it! The only difference is, this time, we feed the function name logistic_growth to the odeint() solver
Step7: Models of Competition
The population growth models we looked at are great, but they're unrealistic for many reasons, not the least of which is
Step8: Next, we need to code up one step of the differential equation, in the form of a Python function
Step9: How does it look?
Step10: Epidemiological Models
There is an entire class of compartment models dedicated to capturing the characteristics of epidemiological systems, the most popular of which is easily the SIR model.
SIR models, or Susceptible-Infected-Recovered models, represent three distinct populations and how people move from one of these populations to another in response to infectious diseases.
<img src="Lecture19/sir-overview.png" width="50%" />
Let's create a diagram of the process, just as before, showing the relevant variables, parameters, constraints, and interactions between variables.
To start, we need to list out our background knowledge of the problem, encoded as assumptions
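The SIR equations themselves are not reproduced in this excerpt, so here is a minimal sketch of one standard formulation. The parameter values (beta, gamma) and initial conditions are illustrative assumptions, not values from the lecture:

```python
import numpy as np
from scipy.integrate import odeint

beta, gamma = 0.3, 0.1  # illustrative infection and recovery rates

def sir(y, t):
    s, i, r = y
    ds = -beta * s * i              # susceptibles become infected
    di = beta * s * i - gamma * i   # infections grow, then recover away
    dr = gamma * i                  # infected recover
    return [ds, di, dr]

y0 = [0.99, 0.01, 0.0]              # population fractions: S, I, R
t = np.linspace(0, 200, 2001)
sol = odeint(sir, y0, t)

# A closed model: the three compartments always sum to the whole population
assert np.allclose(sol.sum(axis=1), 1.0)
```

Because material only flows S → I → R, the susceptible fraction can only fall and the recovered fraction can only rise — exactly the closed compartment structure shown in the diagram above.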
Step11: Now we need to code up the differential equations in terms of Python functions.
Step12: Finally, we'll solve the equation. | Python Code:
# Preliminary imports
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.integrate as sig # Here's the critical module!
import seaborn as sns
Explanation: Lecture 19: Computational Modeling
CBIO (CSCI) 4835/6835: Introduction to Computational Biology
Overview and Objectives
So far, we've discussed Hidden Markov Models as a way to encapsulate and represent something with a "hidden state" component. There are countless other computational and statistical models, a few of which we'll touch on here. By the end of this lecture, you should be able to:
Understand compartment models and how to design them
Relate ordinary differential equations (ODEs) to compartment models
Implement basic compartment models for population growth, disease spread, and competition
Part 1: Compartment Models
A compartment model is one of the simplest mechanistic representations of real-world phenomena.
All compartment models look something like this:
<img src="https://upload.wikimedia.org/wikipedia/commons/0/0e/Singlecell.PNG" />
The node(s) represent specific compartments in the model
The edge(s) represent flow of material from one compartment to another, as well as dependencies between compartments
Because things are constantly moving around in compartment models, they are sometimes also referred to as dynamic models
There are lots of variations on this theme, including:
Closed models: Total amount of material within the model remains constant, simply shifting from one compartment to another
Open models: Total material can flow in and out of the model's compartments. This is referred to as the model having a source (an external contributor of additional material) or a sink (a compartment where material effectively disappears from the model when it enters)
Cyclic models: Material can flow back and forth between mutually connected compartments
Or combinations of the above!
Compartment models can be discrete or continuous.
Discrete models consider the passage of time in discrete steps, e.g. integers.
<img src="https://upload.wikimedia.org/wikipedia/commons/0/0e/Singlecell.PNG" />
In this example, the input of the compartment $u(t)$ is dependent on time, where time is a discrete quantity.
Continuous models, on the other hand, shrink the change in time between events ($\delta t$) to 0.
<img src="Lecture19/continuous.png" />
We'll see some examples where this formulation may make more sense. Unfortunately, this is often much more difficult to derive for certain systems.
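To make the discrete/continuous distinction concrete, here is a sketch (with illustrative values) of a discrete growth update $n(t + \delta t) = n(t) + r\,n(t)\,\delta t$; as the step $\delta t$ shrinks toward 0, it converges to the continuous solution $n_0 e^{rt}$:

```python
import math

def discrete_growth(n0, r, t_end, dt):
    """Step a discrete growth model forward with a fixed time step dt."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += r * n * dt
    return n

n0, r, t_end = 10.0, 0.1, 5.0
continuous = n0 * math.exp(r * t_end)     # the dt -> 0 limit
coarse = discrete_growth(n0, r, t_end, 1.0)
fine = discrete_growth(n0, r, t_end, 0.001)

# Finer time steps land closer to the continuous model
assert abs(fine - continuous) < abs(coarse - continuous)
```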
Compartment models can also be deterministic or stochastic.
Deterministic models give you the exact same outputs for any given input. This is what we'll see with models that use differential equations: for given initial values to the system, we always get the same final values.
Stochastic models introduce randomness into systems, simulating probabilities instead of explicit differential equations. These provide much more realistic looks into real-world systems, but are often much more difficult to analyze (e.g. for steady states), since a given input will not always (or ever!) give the same output.
An offshoot of stochastic models is the agent-based model, in which individual "agents" are allowed to act independently according to certain probabilities. This is a very powerful, but very compute-intensive, model.
Part 2: Common Dynamic Models
Enough vocabulary; let's look at a couple common dynamic models.
Population Growth
We'd like to model the growth of some population (humans, animals, bacteria, etc). There are two generally-accepted ways of doing this:
Exponential growth assumes that the population grows, well, exponentially. There are implicit assumptions here as well, most importantly that resources also grow with population to sustain its growth.
Logistic growth assumes a little more explicitly that the amount of some critical resource (e.g. food) is fixed, effectively providing an upper bound on the ultimate size of the population.
Let's take a look!
Exponential growth sounds a little misleading, since the equation doesn't, on initial inspection, look exponential.
Let's say your population can grow through birth, and shrink through death. At any given time $t$, the population is offset by the number added (birth) and removed (death).
With this information, can we build an equation for population as a function of time?
$n(t + 1) = n(t) + b - d$
Or perhaps, put another way, the change in population at any given time?
$\frac{dn}{dt} = bn(t) - dn(t)$
$b$ is the birth rate
$d$ is the death rate
$n(t)$ is the population at time $t$
You may notice both terms in the above equation have a common element that can be factored out.
$\frac{dn}{dt} = n(t) (b - d)$
The $(b - d)$ term even has a special name: the per capita rate of change. It essentially governs whether the population is increasing or decreasing at any given time, depending on whether the birth or death term dominates. It is typically represented as $r_c = b - d$, so we can rewrite the equation as simply:
$\frac{dn}{dt} = r_c n(t)$
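For reference (a step not spelled out in the lecture), this last ODE can be solved in closed form by separation of variables, which is exactly why the model is called exponential:

```latex
\frac{dn}{n} = r_c \, dt
\;\;\Longrightarrow\;\;
\ln \frac{n(t)}{n_0} = r_c t
\;\;\Longrightarrow\;\;
n(t) = n_0 \, e^{r_c t}
```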
Now that we've gone through the derivation of the differential equations, how about some nice pretty pictures?
<img src="Lecture19/growth-exp.png" width="40%" />
Compartment models lend themselves to these sorts of diagrams, which make setting up equations (and, eventually, transition matrices) a lot simpler.
So we have these equations; how do we run them and obtain some results?
Turns out, Python (specifically, SciPy) has a module for solving ordinary differential equations (ODEs).
End of explanation
# Imports assumed by the code cells in this lecture (`sig` aliases scipy.integrate, which provides odeint).
import numpy as np
import scipy.integrate as sig
import matplotlib.pyplot as plt
n0 = 10
rc1 = 0.01
rc2 = 0.1
rc3 = -0.2
Explanation: Now, let's set an initial population $n_0$, pick a couple of different per capita rates of change $r_c$, and run them to see what happens.
End of explanation
# Differential equation functions take two arguments: the variable that's changing, and time.
def diffeq(n, t):
return n * rc
Explanation: The one critical part of the whole thing: you have to define the differential equations as Python functions, so the SciPy module knows what to solve. Let's do that here:
End of explanation
t = np.linspace(0, 15, 1000) # time
rc = rc1
n1, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
rc = rc2
n2, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
rc = rc3
n3, oded = sig.odeint(diffeq, n0, t, full_output = True)
print(oded['message'])
plt.xlabel('time')
plt.ylabel('population')
plt.title('Exponential Growth')
plt.plot(t, n1, label = '$r_c = 0.01$')
plt.plot(t, n2, label = '$r_c = 0.1$')
plt.plot(t, n3, label = '$r_c = -0.2$')
plt.legend(loc = 0)
Explanation: Now, let's create a bunch of time points and evaluate the ODE for different values of rc!
End of explanation
# Same as before
n0 = 10
rc1 = 0.01
rc2 = 0.1
rc3 = -0.2
K = 100 # The new term introduced by this method--known as "Carrying Capacity"
Explanation: Logistic growth is a slightly different approach. It takes into account the fact that populations usually can't just keep growing without bound. In fact, their growth rate is directly related to their current size.
The model looks something like this:
<img src="Lecture19/growth-log.png" width="40%" />
You still see some of the usual suspects--population $n(t)$ as a function of time, and birth and death rates, but notice the latter two are also now functions of the current population instead of simply constants.
To come up with a bounded model of population growth, we need to add a couple of things to our original equation.
Think of it this way: when the population is small, we want it to behave more or less like it did before--exponential growth. But when the population is large, we want it to slow down or even stop growing.
$\frac{dn}{dt} = r_c n(t) (1 - \frac{n(t)}{K})$
Let's look at this more closely:
- We still see the same exponential growth equation as before in the first part
- There's a second part, though: $(1 - \frac{n(t)}{K})$
- Consider the equation when $n(t)$ is small: the $\frac{n(t)}{K}$ number is close to 0, which means $1 -$ that number is pretty much 1, so the equation reduces to $r_c n(t)$, exactly what we had before!
- When $n(t)$ is large--say, very close to whatever $K$ is--the fraction $\frac{n(t)}{K}$ is very close to 1, and $1 - 1 = 0$, which sets the entire equation to 0. In other words, growth stops completely!
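The two regimes described in the list above are easy to check numerically (illustrative values for $r_c$ and $K$; a sketch, not the lecture's code):

```python
rc, K = 0.1, 100.0

def logistic_rate(n):
    # Right-hand side of dn/dt = rc * n * (1 - n / K).
    return rc * n * (1 - n / K)

print(logistic_rate(1.0))    # ~ rc * n = 0.1: behaves like plain exponential growth
print(logistic_rate(100.0))  # exactly 0: growth stops at the carrying capacity
```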
So that's cool. Let's plot it out with Python! Remember to first set up the variables and rates:
End of explanation
def logistic_growth(n, t):
exp_term = n * rc # same as before
limit_term = 1 - (n / K) # the limiting term
return exp_term * limit_term
Explanation: Now we need to write the function that implements the differential equation.
End of explanation
t = np.linspace(0, 100, 2000) # time
rc = rc1
n1, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
rc = rc2
n2, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
rc = rc3
n3, oded = sig.odeint(logistic_growth, n0, t, full_output = True)
print(oded['message'])
plt.xlabel('time')
plt.ylabel('population')
plt.title('Logistic Growth with $K = 100$')
plt.plot(t, n1, label = '$r_c = 0.01$')
plt.plot(t, n2, label = '$r_c = 0.1$')
plt.plot(t, n3, label = '$r_c = -0.2$')
plt.legend(loc = 0)
Explanation: Now we simulate it! The only difference is, this time, we feed the function name logistic_growth to the odeint() solver:
End of explanation
a = 1.0 # prey growth rate
b = 0.1 # predation rate (prey death rate)
c = 0.075 # predator growth rate
d = 1.0 # predator death rate
Explanation: Models of Competition
The population growth models we looked at are great, but they're unrealistic for many reasons, not the least of which is: populations don't exist in a vacuum!
Populations have to coexist with restrictions such as food, water, resources, mating and fertility rates, environmental factors, and numerous others.
Lotka-Volterra models build on the idea of logistic population growth, but with the added constraint of an additional population species that specifically preys on the other.
Consider a model of 2 species with the following parameters:
Populations $n_1$ and $n_2$
Intrinsic growth rates $r_1$ and $r_2$
Carrying capacities $K_1$ and $K_2$
Assumptions
(always important to list these out!)
The prey population finds ample food at all times, whereas the predator population's food depends solely on the prey population
Related: the predator has an unlimited appetite (i.e. the amount of food consumed by predators is dependent only on the population size of the prey)
The rate of change in both populations is a function of the sizes of the populations
The environment the two populations reside in doesn't change, and genetics / adaptation don't play a role
How do we set up the competing differential equations?
Start with the exponential growth from before!
Prey growth: $\frac{dx}{dt} = \alpha x$
But we want to include a negative dependence on the predator population, too.
This negative dependence has its own rate, $\beta$.
Predation rate is not only dependent on the predator population $y$, but also the prey population $x$.
So the negative term is composed of three elements.
Prey: $\frac{dx}{dt} = \alpha x - \beta x y$
How about the predator equations?
(Hint: the part of the prey equation that kills off prey is what contributes to predator growth)
Predator growth: $\frac{dy}{dt} = \gamma x y$
That's the growth term for predators. How about its own negative term?
Predator: $\frac{dy}{dt} = \gamma x y - \delta y$
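An aside not covered in the lecture: setting both right-hand sides above to zero gives the nontrivial equilibrium the populations oscillate around, $y^* = \alpha/\beta$ and $x^* = \delta/\gamma$. With the rates defined in the parameter cell above:

```python
# dx/dt = a*x - b*x*y = 0  =>  y* = a / b
# dy/dt = c*x*y - d*y = 0  =>  x* = d / c
a, b, c, d = 1.0, 0.1, 0.075, 1.0   # same values as the parameter cell above
x_star = d / c
y_star = a / b
print(x_star, y_star)  # roughly 13.33 prey, 10 predators
```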
Let's model these equations in Python!
First, we have parameter values we need to set up:
End of explanation
def pred_prey(X, t):
# Remember: X is a two-element NumPy array
ax = a * X[0]
bxy = b * X[0] * X[1]
cxy = c * X[0] * X[1]
dy = d * X[1]
# Return value is also a two-element array
retval = np.array([ax - bxy, cxy - dy])
return retval
Explanation: Next, we need to code up one step of the differential equation, in the form of a Python function:
End of explanation
t = np.linspace(0, 15, 1000) # time
X0 = np.array([10, 5]) # initial conditions: 10 prey, 5 predators
X, oded = sig.odeint(pred_prey, X0, t, full_output = True)
print(oded['message'])
prey, pred = X.T
plt.xlabel('time')
plt.ylabel('population')
plt.title('Lotka-Volterra Model')
plt.plot(t, prey, 'r-', label = 'Prey')
plt.plot(t, pred , 'b-', label = 'Predators')
plt.legend(loc = 0)
Explanation: How does it look?
End of explanation
beta = 0.3 # infection rate
theta = 10.0 # birth rate
sigma = 0.5 # de-immunization rate
rho = 0.9 # recovery rate
delta = 0.5 # death rate from infection
mu = 0.05 # death rate from susceptibility or recovery
# Initial populations.
S0 = 100
I0 = 5
R0 = 0
X0 = np.array([S0, I0, R0])
Explanation: Epidemiological Models
There is an entire class of compartment models dedicated to capturing the characteristics of epidemiological systems, the most popular of which is easily the SIR model.
SIR models, or Susceptible-Infected-Recovered models, represent three distinct populations and how people move from one of these populations to another in response to infectious diseases.
<img src="Lecture19/sir-overview.png" width="50%" />
Let's create a diagram of the process, just as before, showing the relevant variables, parameters, constraints, and interactions between variables.
To start, we need to list out our background knowledge of the problem, encoded as assumptions:
Infection can be transmitted from infected to susceptible individuals
Recovered individuals become immune for a period of time
Probability of death is increased in infected patients
Can we sketch out the diagram?
<img src="Lecture19/sir-diagram.png" width="70%" />
Next step: convert the diagram into equations or rules (we've used differential equations so far), one for each population.
Susceptible population:
$\frac{dS}{dt} = \theta + \sigma R(t) - \beta S(t) I(t) - \mu S(t)$
Infected population:
$\frac{dI}{dt} = \beta S(t) I(t) - \rho I(t) - \delta I(t)$
Recovered population:
$\frac{dR}{dt} = \rho I(t) - \sigma R(t) - \mu R(t)$
Aside
We're leaving out for the moment how exactly to come up with values for all these parameters; it's more obvious with SIR parameters, since there are a ton of them.
Research papers using the model will detail out the values used and how they were determined (often through simulation or experiment).
Let's see if we can simulate this model!
End of explanation
def diff_sir(X, t):
s = X[0]
i = X[1]
r = X[2]
# Now, compute each equation.
ds = theta + (sigma * r) - (beta * s * i) - (mu * s)
di = (beta * s * i) - (rho * i) - (delta * i)
dr = (rho * i) - (sigma * r) - (mu * r)
# Return the numbers as an array, in the same order as the input.
return np.array([ds, di, dr])
Explanation: Now we need to code up the differential equations in terms of Python functions.
End of explanation
t = np.linspace(0, 50, 1000) # time
Y, oded = sig.odeint(diff_sir, X0, t, full_output = True)
print(oded['message'])
S, I, R = Y.T
plt.xlabel('time')
plt.ylabel('population')
plt.title('SIR Model')
plt.plot(t, S, 'b-', label = 'S')
plt.plot(t, I, 'r-', label = 'I')
plt.plot(t, R, 'g-', label = 'R')
plt.legend(loc = 0)
Explanation: Finally, we'll solve the equation.
End of explanation |
7,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Next, we'll import the Sequential model type from Keras. This is simply a linear stack of neural network layers, and it's perfect for the type of feed-forward CNN we're building in this tutorial.
Step1: Step 7 | Python Code:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.datasets import mnist
# load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)
from matplotlib import pyplot as plt
plt.imshow(X_train[0])
X_train = X_train.reshape(X_train.shape[0],1,28,28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)
print(X_train.shape)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(y_train.shape)
print(y_train[:10])
Y_train=np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)
print(Y_train.shape)
Explanation: Next, we'll import the Sequential model type from Keras. This is simply a linear stack of neural network layers, and it's perfect for the type of feed-forward CNN we're building in this tutorial.
End of explanation
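Aside: `np_utils.to_categorical` in the cell above one-hot encodes the integer labels. A minimal plain-NumPy equivalent, for intuition only (a sketch, not the Keras helper itself):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    # Row i gets a 1 in column labels[i], zeros elsewhere.
    out = np.zeros((len(labels), num_classes), dtype=np.float32)
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(to_one_hot([0, 2, 1], 3))
```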
model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
print(model.output_shape)
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(X_train, Y_train,
batch_size=32, nb_epoch=10, verbose=1)
score = model.evaluate(X_test, Y_test, verbose=0)
Explanation: Step 7: Define model architecture.
Now we're ready to define our model architecture. In actual R&D work, researchers will spend a considerable amount of time studying model architectures.
To keep this tutorial moving along, we're not going to discuss the theory or math here. This alone is a rich and meaty field, and we recommend the CS231n class mentioned earlier for those who want to learn more.
Plus, when you're just starting out, you can just replicate proven architectures from academic papers or use existing examples. Here's a list of example implementations in Keras.
Let's start by declaring a sequential model format:
End of explanation |
7,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Breaking the Vigenère cipher
The most frequent letter in French is E. This fact lets you break the Caesar cipher by computing the shift between the most frequent letter of the encrypted message and E. But the same method will not break the Vigenère cipher. Babbage worked around this obstacle by studying the frequency of three-letter groups.
Charles Babbage reasoned that a group of three consecutive letters had every chance, each time it appeared in the encrypted message, of being the result of enciphering the same letters of the message with the same letters of the key (see Cryptanalyse du chiffre de Vigenère). For a group of four letters, this is even more likely. Consequently, the spacing between two identical groups of encrypted letters is a multiple of the key length. For example, if one group's repetition is 30 letters apart and another's is 25, the greatest common divisor of 25 and 30 is 5, so the key in this case has 5 letters.
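The spacing argument above is just a GCD computation; a quick check with Python's standard library, using the numbers from the example in the paragraph:

```python
from math import gcd

spacings = [30, 25]    # distances between two repeated trigrams
print(gcd(*spacings))  # 5 -> the key length divides 5
```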
The first function encrypts and decrypts the Vigenère cipher given the key.
Step2: The next two functions estimate the length of the key.
Step4: The following function breaks the Vigenère cipher given the length of the key.
Step6: Finally, the last function, which breaks the cipher by calling all the others
Step7: A small example with Le dernier jour d'un condamné, retrieved from the Gutenberg site
Step8: We remove the unwanted characters
Step9: We encrypt it with a key
Step10: Then we try to recover the key | Python Code:
def code_vigenere ( message, cle, decode = False) :
message_code = ""
for i,c in enumerate(message) :
d = cle[ i % len(cle) ]
d = ord(d) - 65
if decode : d = 26 - d
message_code += chr((ord(c)-65+d)%26+65)
return message_code
def DecodeVigenere(message, cle):
return code_vigenere(message, cle, True)
def CodeVigenere(message, cle):
return code_vigenere(message, cle)
Explanation: Breaking the Vigenère cipher
The most frequent letter in French is E. This fact lets you break the Caesar cipher by computing the shift between the most frequent letter of the encrypted message and E. But the same method will not break the Vigenère cipher. Babbage worked around this obstacle by studying the frequency of three-letter groups.
Charles Babbage reasoned that a group of three consecutive letters had every chance, each time it appeared in the encrypted message, of being the result of enciphering the same letters of the message with the same letters of the key (see Cryptanalyse du chiffre de Vigenère). For a group of four letters, this is even more likely. Consequently, the spacing between two identical groups of encrypted letters is a multiple of the key length. For example, if one group's repetition is 30 letters apart and another's is 25, the greatest common divisor of 25 and 30 is 5, so the key in this case has 5 letters.
The first function encrypts and decrypts the Vigenère cipher given the key.
End of explanation
def PGCD (m,n) :
    if m <= 0 or n <= 0 : raise Exception("cannot compute the GCD")
if m == 1 or n == 1 : return 1
if m == n : return m
if m < n : return PGCD (m, n-m)
return PGCD (n, m-n)
def DecodeVigenereLongueurCle (message, mot = 3) :
    """This function determines the key length: it locates the three-letter
    groups that repeat in the encoded message and assumes there is a very high
    probability that the same three encoded letters come from the same three
    letters of the message enciphered with the same three letters of the key.
    message  : .....DES...........DES...........DES.........DES....DES
    cle      : ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD
    code     : .....EGV.........................EGV.........EGV..........
    distance : <----------24--------------><----8----->
    The key length divides the GCD of 24 and 8.
    """
    al = "".join([ chr(97+i) for i in range(0,26) ])  # the alphabet
    al = al.upper ()
    # walk through the message to record all the positions of each group
dico = {}
for i in range (0, len (message)-2) :
t = message [i:i+mot]
if t in dico : dico [t].append (i)
else : dico [t] = [i]
    # keep all the distances between two occurrences
    # of the same n-letter group
dis = []
for d in dico :
p = dico [d]
if len (p) > 1 :
for i in range (0, len (p)-1) :
#print d, p [i+1] - p [i], " --- ", float (p [i+1] - p [i]) / 8
dis.append ( p [i+1] - p [i] )
    # extract the GCD
    if len (dis) == 0 :
        raise Exception("unable to determine the key")
if len (dis) == 1 : return dis [0]
longueur = PGCD (dis [0], dis [1])
for d in dis :
longueur = PGCD (longueur, d)
    if longueur > 5 :
        # if the length is large enough, the result is likely to be right
        return longueur
    else :
        # otherwise, rerun the algorithm with longer groups
        return DecodeVigenereLongueurCle (message, mot+1)
Explanation: The next two functions estimate the length of the key.
End of explanation
def DecodeVigenereCle (code, l) :
    """Determines the key of the encoded message, knowing its length;
    assumes that the letter E is the most frequent letter.
    @param      code    encoded message
    @param      l       probable length of the key
    @return     the key
    """
al = "".join([ chr(97+i) for i in range(0,26) ])
al = al.upper ()
cle = ""
for i in range (0, l) :
nombre = [ 0 for a in al]
        sous = code [i:len (code):l]    # extract the letters at positions
                                        # i, i+l, i+2l, i+3l, ...
        # count the letters
        for k in sous : nombre [ al.find (k) ] += 1
        # look for the maximum
        p = 0
        for k in range (0, len (nombre)) :
            if nombre [k] > nombre [p] : p = k
        # assume al [p] is the encoded letter E; all that remains is to
        # find the key letter that enciphered E as al [p]
        cle += al [ (p + 26 - al.find ("E")) % 26 ]
    return cle
Explanation: The following function breaks the Vigenère cipher given the length of the key.
End of explanation
def CasseVigenere(message):
    """Calls the two functions @see fn DecodeVigenereLongueurCle and
    @see fn DecodeVigenereCle to break the Vigenère cipher.
    @param      message     encoded message
    @return     decoded message (without the key)
    """
l = DecodeVigenereLongueurCle(message)
cle = DecodeVigenereCle(message,l)
decode = DecodeVigenere(message, cle)
return decode
Explanation: Finally, the last function, which breaks the cipher by calling all the others:
End of explanation
from ensae_teaching_cs.data import gutenberg_name
text = gutenberg_name("condamne", load=True)
len(text)
Explanation: A small example with Le dernier jour d'un condamné, retrieved from the Gutenberg site:
End of explanation
message = text.replace ("\n", "").replace ("\r", "").replace ("\t", "").replace (" ", "").replace (",", "")
message = message.replace (";", "").replace (":", "").replace (".", "").replace ("'", "").replace ("\"", "")
message = message.replace ("-", "").replace ("!", "").replace ("?", "").replace ("(", "").replace (")", "")
message = message.upper ()
Explanation: We remove the unwanted characters:
End of explanation
message = message [5000:7000] # we reduce the size of the message
code = CodeVigenere (message, "VIGENERES")
Explanation: We encrypt it with a key:
End of explanation
cle_code = DecodeVigenereCle (code, DecodeVigenereLongueurCle (code))
cle_code
Explanation: Then we try to recover the key:
End of explanation |
7,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chord Diagrams for Bokeh
Chord diagrams are a wonderful way to visualise interactions between groups, along the genome, and many others (check the circos page for some advanced examples). I recently had the need to implement a basic chord diagram into a Bokeh app. Based on Plotly's post about Chord Diagrams, I implemented a basic class which can be used with Bokeh. I hope that it can serve as a starting point and can be used in other implementations as well.
The existing chord diagram implementation (as of Bokeh version 0.12.4) has a rather odd look (which is especially apparent if you zoom in a bit).
Input data
As outlined in Plotly's post, the input data must be a square matrix, where each row denotes one group member (or entity, or item, or, ...). Each column holds the interaction between two group members. It is assumed that the row group member is "sending" interaction to the column group member. The total interaction is thus given by the sum of the ${i,j}$ and ${j,i}$ values.
Taking the example used in the post, we have,
Step1: which indicates that group Three was sending 11 items to group Two while receiving 12 items in return. Group Three has furthermore 17 interactions with itself, whereas group Two has none.
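To double-check that reading of the matrix (same data as the code cell below; indices here are 0-based):

```python
import numpy as np

matrix = np.array([[16,  3, 28,  0, 18],
                   [18,  0, 12,  5, 29],
                   [ 9, 11, 17, 27,  0],
                   [19,  0, 31, 11, 12],
                   [23, 17, 10,  0, 34]])
two, three = 1, 2
print(matrix[three, two])    # 11: Three -> Two
print(matrix[two, three])    # 12: Two -> Three
print(matrix[three, three])  # 17: Three with itself
print(matrix[two, two])      # 0:  Two has none
```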
Building the Chord Diagram
The chord diagram can be built by handing the input data to the ChordDiagram class.
Step2: The interactions between the groups can now be visualised by calling the plot method of the ChordDiagram class. The index of the group corresponds to the row of the input data. | Python Code:
# Each row defines how many items were "sent" to the group specified by the column;
# for the "golden image" use case, the matrix should be symmetric
import numpy as np
import pandas as pd
matrix = np.array([[16, 3, 28, 0, 18],
[18, 0, 12, 5, 29],
[ 9, 11, 17, 27, 0],
[19, 0, 31, 11, 12],
[23, 17, 10, 0, 34]], dtype=int)
labels = ['One', 'Two', 'Three', 'Four', 'Five']
pd.DataFrame(matrix, columns=labels, index=labels)
Explanation: Chord Diagrams for Bokeh
Chord diagrams are a wonderful way to visualise interactions between groups, along the genome, and many others (check the circos page for some advanced examples). I recently had the need to implement a basic chord diagram into a Bokeh app. Based on Plotly's post about Chord Diagrams, I implemented a basic class which can be used with Bokeh. I hope that it can serve as a starting point and can be used in other implementations as well.
The existing chord diagram implementation (as of Bokeh version 0.12.4) has a rather odd look (which is especially apparent if you zoom in a bit).
Input data
As outlined in Plotly's post, the input data must be a square matrix, where each row denotes one group member (or entity, or item, or, ...). Each column holds the interaction between two group members. It is assumed that the row group member is "sending" interaction to the column group member. The total interaction is thus given by the sum of the ${i,j}$ and ${j,i}$ values.
Taking the example used in the post, we have,
End of explanation
cd = ChordDiagram(matrix)
Explanation: which indicates that group Three was sending 11 items to group Two while receiving 12 items in return. Group Three has furthermore 17 interactions with itself, whereas group Two has none.
Building the Chord Diagram
The chord diagram can be built by handing the input data to the ChordDiagram class.
End of explanation
fig = cd.plot(group=0)
t = show(row(fig, ), notebook_handle=True)
Explanation: The interactions between the groups can now be visualised by calling the plot method of the ChordDiagram class. The index of the group corresponds to the row of the input data.
End of explanation |
7,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced AD application
This notebook documents how the algorithmic (or automatic) differentiation framework may be applied to non-linear equations. For an introduction to the framework, see other tutorials on AD.
The functions in question are the normal and tangential complementary equations for contact mechanics, which are only semi-smooth (i.e. they are not differentiable everywhere)
Step1: The simpler of the equations is defined as follows
Step2: Non-smooth functions using pp.ad.Function
Handling non-smoothness in the AD setting requires the definition of extended derivatives by assigning appropriate values to the Jacobi matrices for the non-smooth function components ($\text{max}$ and $\text{abs}$) at the points in question. While this may seem somewhat technical, it is a modest price to pay for handling these equations otherwise straightforwardly using AD. We define standard Python functions and wrap them in pp.ad.Function returning pp.ad.Ad_arrays having a val and a jac attribute. For instance, the maximum value function is defined and used as follows
Step3: Technical notes on Function wrapping
Argument types
The wrapping of a function in the pp.ad.Function class may be slightly confusing in that the function (e.g. pp.ad.functions.max) takes an Ad_array as its argument, whereas the Function instance (e.g. MaxAd above) expects an Operator, which represents an ad variable or compound expression. The explanation lies in how the Function is parsed ("evaluated"), which involves the MaxAd asking its _function to operate on the values and jacobians of var0 and var1, which are represented through an Ad_array. Puh!
Chain rule
An ad Function is parsed as follows by pp.ad.Operator._parse_operator | Python Code:
import porepy as pp
import numpy as np
import inspect
model = pp.ContactMechanics({})
print(inspect.getsource(model._assign_equations))
Explanation: Advanced AD application
This notebook documents how the algorithmic (or automatic) differentiation framework may be applied to non-linear equations. For an introduction to the framework, see other tutorials on AD.
The functions in question are the normal and tangential complementary equations for contact mechanics, which are only semi-smooth (i.e. they are not differentiable everywhere):
\begin{equation}
\begin{aligned}
C_n &= \lambda_n + \text{max}(0, -\lambda_n - c_n([[u]]_n - g))\\
C_{\tau} &= \text{max}(0, b)\, (\lambda_{\tau} + c_{\tau}[[\dot{u}]]_{\tau})
- \text{max}(b, ||\lambda_{\tau} + c_{\tau}[[\dot{u}]]_{\tau}||)\,\lambda_{\tau},
\end{aligned}
\end{equation}
with $b=-F(\lambda_n+c_n([[u]]_n-g))$ and F, c, and $g$ denoting friction coefficient, numerical constants and the gap function, respectively. See Hüeber 2008 for a detailed derivation and discussion and Stefansson et al. 2021 for notation.
Implementation
The implementation is found within the ContactMechanics class. After defining subdomain and interface lists and ad variables, _assign_equations calls the methods _contact_mechanics_normal_equation and _contact_mechanics_normal_equation which compose the equations from subcomponents defined in other methods:
End of explanation
print(inspect.getsource(model._contact_mechanics_normal_equation))
Explanation: The simpler of the equations is defined as follows:
End of explanation
print(inspect.getsource(pp.ad.functions.maximum))
Explanation: Non-smooth functions using pp.ad.Function
Handling non-smoothness in the AD setting requires the definition of extended derivatives by assigning appropriate values to the Jacobi matrices for the non-smooth function components ($\text{max}$ and $\text{abs}$) at the points in question. While this may seem somewhat technical, it is a modest price to pay for handling these equations otherwise straightforwardly using AD. We define standard Python functions and wrap them in pp.ad.Function returning pp.ad.Ad_arrays having a val and a jac attribute. For instance, the maximum value function is defined and used as follows:
End of explanation
print(inspect.getsource(model._gap))
Explanation: Technical notes on Function wrapping
Argument types
The wrapping of a function in the pp.ad.Function class may be slightly confusing in that the function (e.g. pp.ad.functions.max) takes an Ad_array as its argument, whereas the Function instance (e.g. MaxAd above) expects an Operator, which represents an ad variable or compound expression. The explanation lies in how the Function is parsed ("evaluated"), which involves the MaxAd asking its _function to operate on the values and jacobians of var0 and var1, which are represented through an Ad_array. Puh!
Chain rule
An ad Function is parsed as follows by pp.ad.Operator._parse_operator:
elif tree.op == Operation.evaluate:
# This is a function, which should have at least one argument
assert len(results) > 1
return results[0].func(*results[1:])
That is, it calls the wrapped function on the ad array produced by parsing of the function argument(s). This means that the chain rule should be applied internally in the function. For a generic function f of a single variable var with derivative f_prime with respect to var, we have
def function_to_be_wrapped(var: pp.ad.Ad_array) -> pp.ad.Ad_array:
var = f(var)
df_dvar = f_prime(var)
# Chain rule:
jac = var.diagvec_mul_jac(df_dvar)
return pp.ad.Ad_array(var, jac)
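To make that convention concrete without depending on porepy's API, here is a tiny self-contained forward-mode sketch (all names here are illustrative; porepy's Ad_array stores sparse Jacobians and uses helpers such as diagvec_mul_jac rather than the dense product shown):

```python
import numpy as np

class TinyAd:
    def __init__(self, val, jac):
        self.val = val   # values, shape (n,)
        self.jac = jac   # Jacobian, shape (n, m)

def wrapped_exp(var):
    # The chain rule is applied *inside* the wrapped function,
    # mirroring the convention described above.
    val = np.exp(var.val)
    deriv = np.exp(var.val)            # elementwise derivative of exp
    jac = deriv[:, None] * var.jac     # row-scale the incoming Jacobian
    return TinyAd(val, jac)

x = TinyAd(np.array([0.0, 1.0]), np.eye(2))  # x is the independent variable
y = wrapped_exp(x)
print(y.val)  # [1.0, e]
print(y.jac)  # diag([1.0, e])
```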
Partial functions
Some functions depend on arguments which do not have anything to do with ad. Instead of having to wrap such arguments in AD objects to be evaluated as part of parsing of the Function, one can exploit partial evaluation. For instance, the pp.ad.functions.l2_norm function for cell-wise vectors has been implemented for an arbitrary number of vector components. It is applied in the definition of the gap, which depends on the norm of tangential displacement jumps. The number of tangential components equals the dimension of the fracture, i.e. $nd - 1$:
End of explanation |
7,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 5
Step2: <a name="introduction"></a>
Introduction
So far we've seen the basics of neural networks, how they can be used for encoding large datasets, or for predicting labels. We've also seen how to interrogate the deeper representations that networks learn in order to help with their objective, and how amplifying some of these objectives led to creating deep dream. Finally, we saw how the representations in deep nets trained on object recognition are capable of representing both style and content, and how we could independently manipulate a new image to have the style of one image, and the content of another.
In this session we'll start to explore some more generative models. We've already seen how an autoencoder is composed of both an encoder, which takes an input and represents it in some hidden state vector, and a decoder, which is capable of resynthesizing the original input from this hidden state vector, though with some loss. So think back to the decoders that we've already built. They have an internal state, and from that state, they can express the entire distribution of the original data, that is, they can express any possible image that they have seen.
We call that a generative model as it is capable of generating the distribution of the data. Contrast this to the latter half of Session 3 when we saw how we label an image using supervised learning. This model is really trying to discriminate the data distribution based on the extra labels that we have. So this is another helpful distinction with machine learning algorithms, ones that are generative and others that are discriminative.
In this session, we'll explore more generative models, and see how internal states can be used to generate data in two other very powerful networks: one based on game theory, called the generative adversarial network, and another capable of remembering and forgetting over time, allowing us to model dynamic content and sequences, called the recurrent neural network.
<a name="generative-adversarial-networks"></a>
Generative Adversarial Networks
In session 3, we were briefly introduced to the Variational Autoencoder. This network was very powerful because it encompasses a very strong idea. And that idea is measuring distance not necessarily based on pixels, but in some "semantic space". And I mentioned then that we'd see another type of network capable of generating even better images of CelebNet.
So this is where we're heading...
We're now going to see how to do that using what's called the generative adversarial network.
The generative adversarial network is actually two networks. One called the generator, and another called the discriminator. The basic idea is the generator is trying to create things which look like the training data. So for images, more images that look like the training data. The discriminator has to guess whether what it's given is a real training example or the output of the generator. By training one after another, you ensure neither is ever too strong, but both grow stronger together. The discriminator is also learning a distance function! This is pretty cool because we no longer need to measure pixel-based distance; we learn the distance function entirely!
The Generative Adversarial Network, or GAN for short, is in a way very similar to the autoencoder we created in session 3. Or at least the implementation of it is. The discriminator is a lot like the encoder part of this network, except instead of going down to the 64 dimensions we used in our autoencoder, we'll reduce our input down to a single value, yes or no, 0 or 1, denoting yes it's a true training example, or no, it's a generated one.
And the generator network is exactly like the decoder of the autoencoder. Except, there is nothing feeding into this inner layer. It is just on its own. From whatever vector of hidden values it starts off with, it will generate a new example meant to look just like the training data. One pitfall of this model is there is no explicit encoding of an input. Meaning, you can't take an input and find what would possibly generate it. However, there are recent extensions to this model which make it more like the autoencoder framework, allowing it to do this.
<a name="input-pipelines"></a>
Input Pipelines
Before we get started, we're going to need to work with a very large image dataset, the CelebNet dataset. In session 1, we loaded this dataset but only grabbed the first 1000 images. That's because loading all 200 thousand images would take up a lot of memory which we'd rather not have to do. And in Session 3 we were introduced again to the CelebNet and Sita Sings the Blues which required us to load a lot of images. I glossed over the details of the input pipeline then so we could focus on learning the basics of neural networks. But I think now we're ready to see how to handle some larger datasets.
Tensorflow provides operations for taking a list of files, using that list to load the data pointed to by it, decoding that file's data as an image, and creating shuffled minibatches. All of this is put into a queue and managed by queue runners and coordinators.
As you may have already seen in the Variational Autoencoder's code, I've provided a simple interface for creating such an input pipeline using image files which will also apply cropping and reshaping of images in the pipeline so you don't have to deal with any of it. Let's see how we can use it to load the CelebNet dataset.
Let's first get the list of all the CelebNet files
Step3: And then create our input pipeline to create shuffled minibatches and crop the images to a standard shape. This will require us to specify the list of files, how large each minibatch is, how many epochs we want to run for, and how we want the images to be cropped.
Step4: Then when we are ready to use the batch generator, we'll need to create a Coordinator and specify this to tensorflow using the start_queue_runners method in order to provide the data
Step5: We can grab our data using our batch generator like so
Step6: Let's see how to make use of this while we train a generative adversarial network!
<a name="gandcgan"></a>
GAN/DCGAN
Inside the libs directory, you'll find gan.py which shows how to create a generative adversarial network with or without convolution, and how to train it using the CelebNet dataset. Let's step through the code and then I'll show you what it's capable of doing.
-- Code demonstration not transcribed. --
<a name="extensions"></a>
Extensions
So it turns out there are a ton of very fun and interesting extensions when you have a model in this space. It turns out that you can perform addition in the latent space. I'll just show you Alec Radford's code base on github to show you what that looks like.
<a name="recurrent-networks"></a>
Recurrent Networks
Up until now, all of the networks that we've learned and worked with really have no sense of time. They are static. They cannot remember sequences, nor can they understand order outside of the spatial dimensions we offer it. Imagine for instance that we wanted a network capable of reading. As input, it is given one letter at a time. So let's say it were given the letters 'n', 'e', 't', 'w', 'o', 'r', and we wanted it to learn to output 'k'. It would need to be able to reason about inputs it received before the last one it received, the letters before 'r'. But it's not just letters.
Consider the way we look at the world. We don't simply download a high resolution image of the world in front of us. We move our eyes. Each fixation takes in new information and each of these together in sequence help us perceive and act. That again is a sequential process.
Recurrent neural networks let us reason about information over multiple timesteps. They are able to encode what it has seen in the past as if it has a memory of its own. It does this by basically creating one HUGE network that expands over time. It can reason about the current timestep by conditioning on what it has already seen. By giving it many sequences as batches, it can learn a distribution over sequences which can model the current timestep given the previous timesteps. But in order for this to be practical, we specify at each timestep, or each time it views an input, that the weights in each new timestep cannot change. We also include a new matrix, H, which reasons about the past timestep, connecting each new timestep. For this reason, we can just think of recurrent networks as ones with loops in it.
Other than that, they are exactly like every other network we've come across! They will have an input and an output. They'll need a loss or an objective function to optimize which will relate what we want the network to output for some given set of inputs. And they'll be trained with gradient descent and backprop.
<a name="basic-rnn-cell"></a>
Basic RNN Cell
The basic recurrent cell can be used in tensorflow as tf.contrib.rnn.BasicRNNCell. Though for most complex sequences, especially longer sequences, this is almost never a good idea. That is because the basic RNN cell does not do very well as time goes on. To understand why this is, we'll have to learn a bit more about how backprop works. When we perform backprop, we're multiplying gradients from the output back to the input. As the network gets deeper, there are more multiplications along the way from the output to the input.
Same for recurrent networks. Remember, they're just like a normal feedforward network with each new timestep creating a new layer. So if we're creating an infinitely deep network, what will happen to all our multiplications? Well if the derivatives are all greater than 1, then they will very quickly grow to infinity. And if they are less than 1, then they will very quickly shrink to 0. That makes them very difficult to train in practice. The problem is known in the literature as the exploding or vanishing gradient problem. Luckily, we don't have to figure out how to solve it, because some very clever people have already come up with a solution, in 1997! Yea, what were you doing in 1997? Probably not coming up with what they called the long short-term memory, or LSTM.
<a name="lstm-rnn-cell"></a>
LSTM RNN Cell
The mechanics of this are unfortunately far beyond the scope of this course, but put simply, it uses a combination of gating cells to control its contents and, by having gates, it is able to block the flow of the gradient, avoiding too many multiplications during backprop. For more details, I highly recommend reading: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.
Step7: And let's find out what's inside this text file by creating a set of all possible characters.
Step8: Great so we now have about 164 thousand characters and 85 unique characters in our vocabulary which we can use to help us train a model of language. Rather than use the characters, we'll convert each character to a unique integer. We'll later see that when we work with words, we can achieve a similar goal using a very popular model called word2vec
Step9: <a name="creating-the-model"></a>
Creating the Model
For our model, we'll need to define a few parameters.
Step10: Now create the input and output to the network. Rather than having batch size x number of features; or batch size x height x width x channels; we're going to have batch size x sequence length.
Step11: Now remember with MNIST that we used a one-hot vector representation of our numbers. We could transform our input data into such a representation. But instead, we'll use tf.nn.embedding_lookup so that we don't need to compute the encoded vector. Let's see how this works
Step12: To create a recurrent network, we're going to need to slice our sequences into individual inputs. That will give us timestep lists which are each batch_size x input_size. Each character will then be connected to a recurrent layer composed of n_cells LSTM units.
Step13: Now we'll create our recurrent layer composed of LSTM cells.
Step14: We'll initialize our LSTMs using the convenience method provided by tensorflow. We could explicitly define the batch size here or use the tf.shape method to compute it based on whatever X is, letting us feed in different sizes into the graph.
Step15: Great now we have a layer of recurrent cells and a way to initialize them. If we wanted to make this a multi-layer recurrent network, we could use the MultiRNNCell like so
Step16: In either case, the cells are composed of their outputs as modulated by the LSTM's output gate, and whatever is currently stored in its memory contents. Now let's connect our input to it.
Step17: For our output, we'll simply try to predict the very next timestep. So if our input sequence was "networ", our output sequence should be: "etwork".
Step18: <a name="loss"></a>
Loss
Our loss function will take the reshaped predictions and targets, and compute the softmax cross entropy.
Step19: <a name="clipping-the-gradient"></a>
Clipping the Gradient
Normally, we would just create an optimizer, give it a learning rate, and tell it to minimize our loss. But with recurrent networks, we can help out a bit by telling it to clip gradients. That helps with the exploding gradient problem, ensuring they can't get any bigger than the value we tell it. We can do that in tensorflow by iterating over every gradient and variable, and changing their value before we apply their update to every trainable variable.
Step20: We could also explore other methods of clipping the gradient based on a percentile of the norm of activations or other similar methods, like when we explored deep dream regularization. But the LSTM has been built to help regularize the network through its own gating mechanisms, so this may not be the best idea for your problem. Really, the only way to know is to try different approaches and see how it affects the output on your problem.
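To make the two flavors concrete, here's a small framework-free numpy sketch (not the tensorflow ops the course code uses) of element-wise value clipping and global-norm clipping applied to a list of gradients:

```python
import numpy as np

def clip_by_value(grads, clip=5.0):
    # Element-wise: every entry of every gradient is forced into [-clip, clip]
    return [np.clip(g, -clip, clip) for g in grads]

def clip_by_global_norm(grads, max_norm=5.0):
    # Rescale all gradients together if their joint norm exceeds max_norm,
    # preserving their relative directions
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = max_norm / max(norm, max_norm)
    return [g * scale for g in grads]
```

Value clipping distorts the gradient direction while norm clipping only shrinks its length, which is one reason norm-based clipping is often preferred for recurrent networks.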
<a name="training"></a>
Training | Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, nb_utils
except ImportError as e:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
"the zip file or cloning the github repo.")
print(e)
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
Explanation: Session 5: Generative Models
<p class="lead">
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning with Google's Tensorflow</a><br />
<a href="http://pkmital.com">Parag K. Mital</a><br />
<a href="https://www.kadenze.com">Kadenze, Inc.</a>
</p>
<a name="learning-goals"></a>
Learning Goals
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Introduction
Generative Adversarial Networks
Input Pipelines
GAN/DCGAN
Extensions
Recurrent Networks
Basic RNN Cell
LSTM RNN Cell
GRU RNN Cell
Character Language Model
Setting up the Data
Creating the Model
Loss
Clipping the Gradient
Training
Extensions
DRAW Network
Future
Homework
Examples
Reading
<!-- /MarkdownTOC -->
End of explanation
import tensorflow as tf
from libs.datasets import CELEB
files = CELEB()
Explanation: <a name="introduction"></a>
Introduction
So far we've seen the basics of neural networks, how they can be used for encoding large datasets, or for predicting labels. We've also seen how to interrogate the deeper representations that networks learn in order to help with their objective, and how amplifying some of these objectives led to creating deep dream. Finally, we saw how the representations in deep nets trained on object recognition are capable of representing both style and content, and how we could independently manipulate a new image to have the style of one image, and the content of another.
In this session we'll start to explore some more generative models. We've already seen how an autoencoder is composed of both an encoder, which takes an input and represents it in some hidden state vector, and a decoder, which is capable of resynthesizing the original input from this hidden state vector, though with some loss. So think back to the decoders that we've already built. They have an internal state, and from that state, they can express the entire distribution of the original data, that is, they can express any possible image that they have seen.
We call that a generative model as it is capable of generating the distribution of the data. Contrast this to the latter half of Session 3 when we saw how we label an image using supervised learning. This model is really trying to discriminate the data distribution based on the extra labels that we have. So this is another helpful distinction with machine learning algorithms, ones that are generative and others that are discriminative.
In this session, we'll explore more generative models, and see how internal states can be used to generate data in two other very powerful networks: one based on game theory, called the generative adversarial network, and another capable of remembering and forgetting over time, allowing us to model dynamic content and sequences, called the recurrent neural network.
<a name="generative-adversarial-networks"></a>
Generative Adversarial Networks
In session 3, we were briefly introduced to the Variational Autoencoder. This network was very powerful because it encompasses a very strong idea. And that idea is measuring distance not necessarily based on pixels, but in some "semantic space". And I mentioned then that we'd see another type of network capable of generating even better images of CelebNet.
So this is where we're heading...
We're now going to see how to do that using what's called the generative adversarial network.
The generative adversarial network is actually two networks. One called the generator, and another called the discriminator. The basic idea is the generator is trying to create things which look like the training data. So for images, more images that look like the training data. The discriminator has to guess whether what it's given is a real training example or the output of the generator. By training one after another, you ensure neither is ever too strong, but both grow stronger together. The discriminator is also learning a distance function! This is pretty cool because we no longer need to measure pixel-based distance; we learn the distance function entirely!
The Generative Adversarial Network, or GAN for short, is in a way very similar to the autoencoder we created in session 3. Or at least the implementation of it is. The discriminator is a lot like the encoder part of this network, except instead of going down to the 64 dimensions we used in our autoencoder, we'll reduce our input down to a single value, yes or no, 0 or 1, denoting yes it's a true training example, or no, it's a generated one.
And the generator network is exactly like the decoder of the autoencoder. Except, there is nothing feeding into this inner layer. It is just on its own. From whatever vector of hidden values it starts off with, it will generate a new example meant to look just like the training data. One pitfall of this model is there is no explicit encoding of an input. Meaning, you can't take an input and find what would possibly generate it. However, there are recent extensions to this model which make it more like the autoencoder framework, allowing it to do this.
<a name="input-pipelines"></a>
Input Pipelines
Before we get started, we're going to need to work with a very large image dataset, the CelebNet dataset. In session 1, we loaded this dataset but only grabbed the first 1000 images. That's because loading all 200 thousand images would take up a lot of memory which we'd rather not have to do. And in Session 3 we were introduced again to the CelebNet and Sita Sings the Blues which required us to load a lot of images. I glossed over the details of the input pipeline then so we could focus on learning the basics of neural networks. But I think now we're ready to see how to handle some larger datasets.
Tensorflow provides operations for taking a list of files, using that list to load the data pointed to by it, decoding that file's data as an image, and creating shuffled minibatches. All of this is put into a queue and managed by queue runners and coordinators.
As you may have already seen in the Variational Autoencoder's code, I've provided a simple interface for creating such an input pipeline using image files which will also apply cropping and reshaping of images in the pipeline so you don't have to deal with any of it. Let's see how we can use it to load the CelebNet dataset.
Let's first get the list of all the CelebNet files:
End of explanation
from libs.dataset_utils import create_input_pipeline
batch_size = 100
n_epochs = 10
input_shape = [218, 178, 3]
crop_shape = [64, 64, 3]
crop_factor = 0.8
batch = create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape)
Explanation: And then create our input pipeline to create shuffled minibatches and crop the images to a standard shape. This will require us to specify the list of files, how large each minibatch is, how many epochs we want to run for, and how we want the images to be cropped.
End of explanation
sess = tf.Session()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
Explanation: Then when we are ready to use the batch generator, we'll need to create a Coordinator and specify this to tensorflow using the start_queue_runners method in order to provide the data:
End of explanation
batch_xs = sess.run(batch)
# We get batch_size at a time, so 100
print(batch_xs.shape)
# The datatype is float32 since that is what we use in the tensorflow graph
# And the max value still has the original image range from 0-255
print(batch_xs.dtype, np.max(batch_xs))
# So to plot it, we'll need to divide by 255.
plt.imshow(batch_xs[0] / 255.0)
Explanation: We can grab our data using our batch generator like so:
End of explanation
%pylab
import tensorflow as tf
from six.moves import urllib
f, _ = urllib.request.urlretrieve('https://www.gutenberg.org/cache/epub/11/pg11.txt', 'alice.txt')
with open(f, 'r', encoding='utf-8') as fp:
txt = fp.read()
Explanation: Let's see how to make use of this while we train a generative adversarial network!
<a name="gandcgan"></a>
GAN/DCGAN
Inside the libs directory, you'll find gan.py which shows how to create a generative adversarial network with or without convolution, and how to train it using the CelebNet dataset. Let's step through the code and then I'll show you what it's capable of doing.
-- Code demonstration not transcribed. --
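Since the gan.py walkthrough isn't transcribed here, below is a minimal framework-free sketch of the two objectives described above (not the course's actual code). The discriminator wants real examples scored 1 and generated ones scored 0; the generator wants its outputs scored 1:

```python
import numpy as np

def sigmoid_cross_entropy(logits, labels):
    # Numerically stable form of
    # -labels*log(sigmoid(x)) - (1 - labels)*log(1 - sigmoid(x))
    return np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))

def gan_losses(d_real_logits, d_fake_logits):
    # Discriminator: real -> 1, fake -> 0
    loss_d = (np.mean(sigmoid_cross_entropy(d_real_logits, np.ones_like(d_real_logits)))
              + np.mean(sigmoid_cross_entropy(d_fake_logits, np.zeros_like(d_fake_logits))))
    # Generator: fool the discriminator, i.e. fake -> 1
    loss_g = np.mean(sigmoid_cross_entropy(d_fake_logits, np.ones_like(d_fake_logits)))
    return loss_d, loss_g
```

Training alternates between the two: minimize loss_d with respect to the discriminator's weights, then loss_g with respect to the generator's.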
<a name="extensions"></a>
Extensions
So it turns out there are a ton of very fun and interesting extensions when you have a model in this space. It turns out that you can perform addition in the latent space. I'll just show you Alec Radford's code base on github to show you what that looks like.
<a name="recurrent-networks"></a>
Recurrent Networks
Up until now, all of the networks that we've learned and worked with really have no sense of time. They are static. They cannot remember sequences, nor can they understand order outside of the spatial dimensions we offer it. Imagine for instance that we wanted a network capable of reading. As input, it is given one letter at a time. So let's say it were given the letters 'n', 'e', 't', 'w', 'o', 'r', and we wanted it to learn to output 'k'. It would need to be able to reason about inputs it received before the last one it received, the letters before 'r'. But it's not just letters.
Consider the way we look at the world. We don't simply download a high resolution image of the world in front of us. We move our eyes. Each fixation takes in new information and each of these together in sequence help us perceive and act. That again is a sequential process.
Recurrent neural networks let us reason about information over multiple timesteps. They are able to encode what it has seen in the past as if it has a memory of its own. It does this by basically creating one HUGE network that expands over time. It can reason about the current timestep by conditioning on what it has already seen. By giving it many sequences as batches, it can learn a distribution over sequences which can model the current timestep given the previous timesteps. But in order for this to be practical, we specify at each timestep, or each time it views an input, that the weights in each new timestep cannot change. We also include a new matrix, H, which reasons about the past timestep, connecting each new timestep. For this reason, we can just think of recurrent networks as ones with loops in it.
Other than that, they are exactly like every other network we've come across! They will have an input and an output. They'll need a loss or an objective function to optimize which will relate what we want the network to output for some given set of inputs. And they'll be trained with gradient descent and backprop.
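To make the loop concrete, here's a small numpy sketch of a single vanilla recurrent step, where W_h plays the role of the matrix H described above, and the same weights are reused at every timestep of the unrolled sequence:

```python
import numpy as np

def rnn_step(x, h_prev, W_x, W_h, b):
    # New hidden state mixes the current input with the previous state
    return np.tanh(x @ W_x + h_prev @ W_h + b)

# Unrolling: one call per timestep, same weights throughout
rng = np.random.default_rng(0)
n_in, n_cells = 4, 8
W_x = rng.normal(size=(n_in, n_cells))
W_h = rng.normal(size=(n_cells, n_cells))
b = np.zeros(n_cells)
h = np.zeros(n_cells)
for x in rng.normal(size=(6, n_in)):  # a sequence of 6 inputs
    h = rnn_step(x, h, W_x, W_h, b)
```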
<a name="basic-rnn-cell"></a>
Basic RNN Cell
The basic recurrent cell can be used in tensorflow as tf.contrib.rnn.BasicRNNCell. Though for most complex sequences, especially longer sequences, this is almost never a good idea. That is because the basic RNN cell does not do very well as time goes on. To understand why this is, we'll have to learn a bit more about how backprop works. When we perform backprop, we're multiplying gradients from the output back to the input. As the network gets deeper, there are more multiplications along the way from the output to the input.
Same for recurrent networks. Remember, they're just like a normal feedforward network with each new timestep creating a new layer. So if we're creating an infinitely deep network, what will happen to all our multiplications? Well if the derivatives are all greater than 1, then they will very quickly grow to infinity. And if they are less than 1, then they will very quickly shrink to 0. That makes them very difficult to train in practice. The problem is known in the literature as the exploding or vanishing gradient problem. Luckily, we don't have to figure out how to solve it, because some very clever people have already come up with a solution, in 1997! Yea, what were you doing in 1997? Probably not coming up with what they called the long short-term memory, or LSTM.
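You can see the problem with two lines of arithmetic, treating each per-timestep gradient factor as a single number:

```python
import numpy as np

T = 100  # number of timesteps backprop multiplies through
vanishing = np.prod(np.full(T, 0.9))  # factors < 1: shrinks toward 0
exploding = np.prod(np.full(T, 1.1))  # factors > 1: blows up
print(vanishing, exploding)  # roughly 2.7e-05 and 1.4e+04
```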
<a name="lstm-rnn-cell"></a>
LSTM RNN Cell
The mechanics of this are unfortunately far beyond the scope of this course, but put simply, it uses a combination of gating cells to control its contents and, by having gates, it is able to block the flow of the gradient, avoiding too many multiplications during backprop. For more details, I highly recommend reading: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.
In tensorflow, we can make use of this cell using tf.contrib.rnn.LSTMCell.
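For intuition only, here's a numpy sketch of one LSTM timestep (gate layouts vary between implementations; this one stacks the four gate blocks into a single weight matrix):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    # W has shape (n_in + n_cells, 4 * n_cells):
    # input, forget, output gates plus the candidate values g
    z = np.concatenate([x, h_prev]) @ W + b
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # gated memory update
    h = sigmoid(o) * np.tanh(c)                        # gated output
    return h, c
```

Notice that a forget gate saturated at 1 passes the memory (and its gradient) through the timestep untouched, which is what lets gradients survive long sequences.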
<a name="gru-rnn-cell"></a>
GRU RNN Cell
One last cell type is worth mentioning, the gated recurrent unit, or GRU. Again, beyond the scope of this class. Just think of it as a simplified version of the LSTM with 2 gates instead of 4, though that is not an accurate description. In Tensorflow we can use this with tf.contrib.rnn.GRUCell.
<a name="character-langauge-model"></a>
Character Language Model
We'll now try a fun application of recurrent networks where we try to model a corpus of text, one character at a time. The basic idea is to take one character at a time and try to predict the next character in sequence. Given enough sequences, the model is capable of generating entirely new sequences all on its own.
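Concretely, each training pair is just a window of the encoded text together with the same window shifted one character into the future. A sketch of a minibatch sampler (not the course's exact code):

```python
import numpy as np

def next_char_batch(encoded, batch_size, sequence_length, rng=None):
    # Y is X shifted one character into the future
    if rng is None:
        rng = np.random.default_rng()
    starts = rng.integers(0, len(encoded) - sequence_length - 1, size=batch_size)
    X = np.array([encoded[s:s + sequence_length] for s in starts])
    Y = np.array([encoded[s + 1:s + sequence_length + 1] for s in starts])
    return X, Y
```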
<a name="setting-up-the-data"></a>
Setting up the Data
For data, we're going to start with text. You can basically take any text file that is sufficiently long, as we'll need a lot of it, and try to use it. This website seems like an interesting place to begin: http://textfiles.com/directory.html, and so does Project Gutenberg: https://www.gutenberg.org/browse/scores/top. http://prize.hutter1.net/ also has a 50k euro reward for compressing Wikipedia. Let's try w/ Alice's Adventures in Wonderland by Lewis Carroll:
End of explanation
vocab = list(set(txt))
len(txt), len(vocab)
Explanation: And let's find out what's inside this text file by creating a set of all possible characters.
End of explanation
encoder = dict(zip(vocab, range(len(vocab))))
decoder = dict(zip(range(len(vocab)), vocab))
Explanation: Great so we now have about 164 thousand characters and 85 unique characters in our vocabulary which we can use to help us train a model of language. Rather than use the characters, we'll convert each character to a unique integer. We'll later see that when we work with words, we can achieve a similar goal using a very popular model called word2vec: https://www.tensorflow.org/versions/r0.9/tutorials/word2vec/index.html
We'll first create a look up table which will map a character to an integer:
End of explanation
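A quick round trip shows how the two lookup tables invert each other (using a toy vocabulary here rather than the Alice text):

```python
# Same construction as above, on a tiny vocabulary
vocab = sorted(set("hello world"))
encoder = dict(zip(vocab, range(len(vocab))))
decoder = dict(zip(range(len(vocab)), vocab))

ids = [encoder[ch] for ch in "hello"]    # characters -> integers
text = ''.join(decoder[i] for i in ids)  # integers -> characters
print(ids, text)
```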
# Number of sequences in a mini batch
batch_size = 100
# Number of characters in a sequence
sequence_length = 100
# Number of cells in our LSTM layer
n_cells = 256
# Number of LSTM layers
n_layers = 2
# Total number of characters in the one-hot encoding
n_chars = len(vocab)
Explanation: <a name="creating-the-model"></a>
Creating the Model
For our model, we'll need to define a few parameters.
End of explanation
X = tf.placeholder(tf.int32, [None, sequence_length], name='X')
# We'll have a placeholder for our true outputs
Y = tf.placeholder(tf.int32, [None, sequence_length], name='Y')
Explanation: Now create the input and output to the network. Rather than having batch size x number of features; or batch size x height x width x channels; we're going to have batch size x sequence length.
End of explanation
# we first create a variable to take us from our one-hot representation to our LSTM cells
embedding = tf.get_variable("embedding", [n_chars, n_cells])
# And then use tensorflow's embedding lookup to look up the ids in X
Xs = tf.nn.embedding_lookup(embedding, X)
# The resulting lookups are concatenated into a dense tensor
print(Xs.get_shape().as_list())
Explanation: Now remember with MNIST that we used a one-hot vector representation of our numbers. We could transform our input data into such a representation. But instead, we'll use tf.nn.embedding_lookup so that we don't need to compute the encoded vector. Let's see how this works:
End of explanation
# Let's create a name scope for the operations to clean things up in our graph
with tf.name_scope('reslice'):
    Xs = [tf.squeeze(seq, [1])
          for seq in tf.split(Xs, sequence_length, 1)]
Explanation: To create a recurrent network, we're going to need to slice our sequences into individual inputs. That will give us timestep lists which are each batch_size x input_size. Each character will then be connected to a recurrent layer composed of n_cells LSTM units.
End of explanation
cells = tf.contrib.rnn.BasicLSTMCell(num_units=n_cells, state_is_tuple=True)
Explanation: Now we'll create our recurrent layer composed of LSTM cells.
End of explanation
initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)
Explanation: We'll initialize our LSTMs using the convenience method provided by tensorflow. We could explicitly define the batch size here or use the tf.shape method to compute it based on whatever X is, letting us feed in different sizes into the graph.
End of explanation
if n_layers > 1:
    cells = tf.contrib.rnn.MultiRNNCell(
        [cells] * n_layers, state_is_tuple=True)
    initial_state = cells.zero_state(tf.shape(X)[0], tf.float32)
Explanation: Great now we have a layer of recurrent cells and a way to initialize them. If we wanted to make this a multi-layer recurrent network, we could use the MultiRNNCell like so:
End of explanation
# this will return us a list of outputs of every element in our sequence.
# Each output is `batch_size` x `n_cells` of output.
# It will also return the state as a tuple of the n_cells's memory and
# their output to connect to the next time we use the recurrent layer.
outputs, state = tf.contrib.rnn.static_rnn(cells, Xs, initial_state=initial_state)
# We'll now stack all our outputs for every cell
outputs_flat = tf.reshape(tf.concat(1, outputs), [-1, n_cells])
Explanation: In either case, the cells are composed of their outputs as modulated by the LSTM's output gate, and whatever is currently stored in its memory contents. Now let's connect our input to it.
End of explanation
with tf.variable_scope('prediction'):
W = tf.get_variable(
"W",
shape=[n_cells, n_chars],
initializer=tf.random_normal_initializer(stddev=0.1))
b = tf.get_variable(
"b",
shape=[n_chars],
initializer=tf.random_normal_initializer(stddev=0.1))
# Find the output prediction of every single character in our minibatch
# we denote the pre-activation prediction, logits.
logits = tf.matmul(outputs_flat, W) + b
# We get the probabilistic version by calculating the softmax of this
probs = tf.nn.softmax(logits)
# And then we can find the index of maximum probability
Y_pred = tf.argmax(probs, 1)
Explanation: For our output, we'll simply try to predict the very next timestep. So if our input sequence was "networ", our output sequence should be: "etwork". This will give us the same batch size coming out, and the same number of elements as our input sequence.
End of explanation
with tf.variable_scope('loss'):
# Compute mean cross entropy loss for each output.
Y_true_flat = tf.reshape(tf.concat(1, Y), [-1])
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, Y_true_flat)
mean_loss = tf.reduce_mean(loss)
Explanation: <a name="loss"></a>
Loss
Our loss function will take the reshaped predictions and targets, and compute the softmax cross entropy.
End of explanation
with tf.name_scope('optimizer'):
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
gradients = []
clip = tf.constant(5.0, name="clip")
for grad, var in optimizer.compute_gradients(mean_loss):
gradients.append((tf.clip_by_value(grad, -clip, clip), var))
updates = optimizer.apply_gradients(gradients)
Explanation: <a name="clipping-the-gradient"></a>
Clipping the Gradient
Normally, we would just create an optimizer, give it a learning rate, and tell it to minize our loss. But with recurrent networks, we can help out a bit by telling it to clip gradients. That helps with the exploding gradient problem, ensureing they can't get any bigger than the value we tell it. We can do that in tensorflow by iterating over every gradient and variable, and changing their value before we apply their update to every trainable variable.
End of explanation
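An alternative to clipping each gradient element-wise with clip_by_value is to rescale all gradients jointly so that their global L2 norm stays below a threshold; this is the idea behind tensorflow's tf.clip_by_global_norm. A minimal numpy sketch of that idea:

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # rescale all gradients jointly so their combined L2 norm is at
    # most clip_norm, preserving their relative directions
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, 5.0)
assert np.isclose(norm, 13.0)
assert np.isclose(np.sqrt(sum(np.sum(g ** 2) for g in clipped)), 5.0)
```

Unlike per-element clipping, this keeps the direction of the overall update intact.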
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
cursor = 0
it_i = 0
while True:
Xs, Ys = [], []
for batch_i in range(batch_size):
if (cursor + sequence_length) >= len(txt) - sequence_length - 1:
cursor = 0
Xs.append([encoder[ch]
for ch in txt[cursor:cursor + sequence_length]])
Ys.append([encoder[ch]
for ch in txt[cursor + 1: cursor + sequence_length + 1]])
cursor = (cursor + sequence_length)
Xs = np.array(Xs).astype(np.int32)
Ys = np.array(Ys).astype(np.int32)
loss_val, _ = sess.run([mean_loss, updates],
feed_dict={X: Xs, Y: Ys})
print(it_i, loss_val)
if it_i % 500 == 0:
p = sess.run([Y_pred], feed_dict={X: Xs})[0]
preds = [decoder[p_i] for p_i in p]
print("".join(preds).split('\n'))
it_i += 1
Explanation: We could also explore other methods of clipping the gradient based on a percentile of the norm of activations or other similar methods, like when we explored deep dream regularization. But the LSTM has been built to help regularize the network through its own gating mechanisms, so this may not be the best idea for your problem. Really, the only way to know is to try different approaches and see how it affects the output on your problem.
<a name="training"></a>
Training
End of explanation |
7,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network Example
Step1: We take the same problem from golbin's example, originally implemented in Tensorflow, and implement it in pytorch.
It is a neural network model that classifies animals as mammals or birds depending on whether they have fur and wings.
The standard structure of pytorch
pyTorch appears to follow the standardized pattern below.
1. Set up (create) the input variables
Step2: 2. Preliminary setup
* model
* loss
* optimizer
Step3: 3. Training loop
* (create inputs)
* create the model
* create the loss
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
Step4: 4. Predict | Python Code:
%matplotlib inline
Explanation: Neural Network Example
End of explanation
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
# [fur, wings]
x_data = torch.Tensor(
[[0, 0], [1, 0], [1, 1], [0, 0], [0, 0], [0, 1]])
# 0: other, 1: mammal, 2: bird
y_data = torch.LongTensor([0, 1, 2, 0, 0, 2])
Explanation: We take the same problem from golbin's example, originally implemented in Tensorflow, and implement it in pytorch.
It is a neural network model that classifies animals as mammals or birds depending on whether they have fur and wings.
The standard structure of pytorch
pyTorch appears to follow the standardized pattern below.
1. Set up (create) the input variables
End of explanation
# model, loss and optimizer setup
class _model(nn.Module) :
def __init__(self):
super(_model, self).__init__()
self.fc1 = nn.Linear(2, 10)
self.fc2 = nn.Linear(10, 3)
def forward(self, net):
net = net.view(-1, 2)
net = F.relu(self.fc1(net))
net = self.fc2(net)
return F.log_softmax(net)
model = _model()
loss_fn = nn.NLLLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
Explanation: 2. Preliminary setup
* model
* loss
* optimizer
End of explanation
# trainning
for i in range(100) :
# input variable
x = Variable(x_data)
y = Variable(y_data)
# network model
y_pred = model(x)
loss = loss_fn(y_pred, y)
# zero_grad, backward, step(update parameter) in series
optimizer.zero_grad()
loss.backward()
optimizer.step()
if i % 10 == 0:
print(i, loss.data[0])
Explanation: 3. Training loop
* (create inputs)
* create the model
* create the loss
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
End of explanation
y_pred = model(x).max(1)[1]
print('prediction :', y_pred.data)
print('true :', y)
Explanation: 4. Predict
End of explanation |
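A side note on the loss used above: applying log_softmax in the forward pass and then NLLLoss is exactly what PyTorch's nn.CrossEntropyLoss does in one step on raw logits. The underlying identity, sketched in plain numpy:

```python
import numpy as np

def log_softmax(z):
    z = z - z.max()                # subtract the max for numerical stability
    return z - np.log(np.exp(z).sum())

def nll(log_probs, target):
    return -log_probs[target]      # negative log-likelihood of the true class

logits = np.array([2.0, 1.0, 0.1])
# cross entropy on raw logits == NLL of their log-softmax
loss = nll(log_softmax(logits), 0)
assert np.isclose(loss, np.log(np.exp([2.0, 1.0, 0.1]).sum()) - 2.0)
```

Using the fused form (nn.CrossEntropyLoss with raw logits) is the more common pattern in practice.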
7,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Glossary
2. Mathematical Groundwork
Previous
Step1: Import section specific modules
Step3: 2.8. The Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT)<a id='math
Step5: Although this would produce the correct result, this way of implementing the DFT is going to be incredibly slow. The DFT can be implemented in matrix form. Convince yourself that a vectorised implementation of this operation can be achieved with
$$ X = K x $$
where $K$ is the kernel matrix, it stores the values $K_{kn} = e^{\frac{-\imath 2 \pi k n}{N}}$. This is implemented numerically as follows
Step6: This function will be much faster than the previous implementation. We should check that they both return the same result
Step7: Just to be sure our DFT really works, let's also compare the output of our function to numpy's built in DFT function (note numpy automatically implements a faster version of the DFT called the FFT, see the discussion below)
Step8: Great! Our function is returning the correct result. Next we do an example to demonstrate the duality between the spectral (frequency domain) and temporal (time domain) representations of a function. As the following example shows, the Fourier transform of a time series returns the frequencies contained in the signal.
The following code simulates a signal of the form
$$ y = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) + \sin(2\pi f_3 t), $$
takes the DFT and plots the amplitude and phase of the resulting components $Y_k$.
Step9: It is not immediately obvious that these are the frequencies contained in the signal. However, recall, from the definition given at the outset, that the frequencies are related to the index $k$ via
$$ f_k = \frac{k f_s}{N}, $$
where $f_s$ is the sampling frequency (i.e. one divided by the sampling period). Let's see what happens if we plot the $X_k$ against the $f_k$ using the following bit of code
Step10: Here we see that the three main peaks correspond to the frequencies contained in the input signal viz. $f_1 = 1$Hz, $f_2 = 2$Hz and $f_3 = 3$Hz. But what do the other peaks mean? The additional frequency peaks are a consequence of the following facts
Step11: That is almost a factor of ten difference. Let's compare this to numpy's built in FFT
Step13: That seems amazing! The numpy FFT is about 1000 times faster than our vectorised implementation. But how does numpy achieve this speed up? Well, by using the fast Fourier transform of course.
2.8.6. Fast Fourier transforms<a id='math
Step14: Let's confirm that this function returns the correct result by comparing with numpy's FFT.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
2. Mathematical Groundwork
Previous: 2.7 Fourier Theorems
Next: 2.9 Sampling Theory
Import standard modules:
End of explanation
from IPython.display import HTML
from ipywidgets import interact
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
def loop_DFT(x):
Implementing the DFT in a double loop
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
X = np.zeros(N,dtype=complex)
for k in range(N):
for n in range(N):
X[k] += np.exp(-1j*2.0*np.pi*k*n/N)*x[n]
return X
Explanation: 2.8. The Discrete Fourier Transform (DFT) and the Fast Fourier Transform (FFT)<a id='math:sec:the_discrete_fourier_transform_and_the_fast_fourier_transform'></a>
The continuous version of the Fourier transform can only be computed when the integrals involved can be evaluated analytically, something which is not always possible in real life applications. This is true for a number of reasons, the most relevant of which are:
We don't always have the parametrisation of the signal that we want to find the Fourier transform of.
Signals are measured and recorded at a finite number of points.
Measured signals are contaminated by noise.
In such cases the discrete equivalent of the Fourier transform, called the discrete Fourier transform (DFT), is very useful. In fact, where the scale of the problem necessitates using a computer to perform calculations, the Fourier transform can only be implemented as the discrete equivalent. There are some subtleties we should be aware of when implementing the DFT. These mainly arise because it is very difficult to capture the full information present in a continuous signal with a finite number of samples. In this chapter we review the DFT and extend some of the most useful identities derived in the previous sections to the case where we only have acces to a finite number of samples. The subtleties that arise due to limited sampling will be discussed in the next section.
The Discrete Fourier transform
The Discrete Time Fourier transform (DTFE): definition
The Discrete Fourier transform (DFT): definition
The Discrete convolution: definition and discrete convolution theorem
Numerically Implementing the DFT
Fast Fourier transforms
2.8.1 The discrete time Fourier transform (DTFT): definition<a id='math:sec:the_discrete_time_fourier_transform_definition'></a>
We start by introducing the discrete time Fourier transform (DTFT). The DTFT of a set $\left\{y_n \in \mathbb{C}\right\}_{n ~ \in ~ \mathbb{Z}}$ results in a Fourier series (see $\S$ 2.3 ➞) of the form
<a id='math:eq:8_001'></a><!--\label{math:eq:8_001}-->$$
Y_{2\pi}(\omega) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-\imath \omega n} \quad \mbox{where} \quad n \in \mathbb{Z}.
$$
The resulting function is a periodic function of the frequency variable $\omega$. In the above definition we assume that $\omega$ is expressed in normalised units of radians/sample so that the periodicity is $2\pi$. In terms of the usual time frequency variable $f$, where $\omega = 2\pi f$, we would define it as
<a id='math:eq:8_002'></a><!--\label{math:eq:8_002}-->$$
Y_{f_s}(f) = \sum_{n\,=\,-\infty}^{\infty} y_n\,e^{-2\pi\imath f t_n},
$$
where $t_n$ is a time coordinate and the subscript $f_s$ denotes the period of $Y_{f_s}(f)$. As we will see in $\S$ 2.9 ➞ the DTFT (more correctly the DFT introduced below) arises naturally when we take the Fourier transform of a sampled continuous function.
As with the continuous Fourier transform, it is only possible to compute the DTFT analytically in a limited number of cases (eg. when the limit of the infinite series is known analytically or when the signal is band limited i.e. the signal contains a finite number of frequency components. For what follows we will find it useful to review the concept of periodic summation and the Poisson summation formula. Note that the DTFT is defined over the entire field of complex numbers and that there are an infinite number of components involved in the definition.
2.8.1.1 Periodic summation and the DTFT <a id='math:sec:Periodic_summation'></a>
The idea behind periodic summation is to construct a periodic function, $g_{\tau}(t)$ say, from a continuous function $g(t)$. Consider the following construction
$$ g_\tau(t) = \sum_{n=-\infty}^{\infty} g(t + n\tau) = \sum_{n=-\infty}^{\infty} g(t - n\tau). $$
Clearly $g_\tau(t)$ has period $\tau$ and looks like an infinite number of copies of the function $g(t)$ for $t$ in the interval $0 \leq t \leq \tau$. We call $g_\tau(t)$ a periodic summation of $g(t)$. Note that we recover $g(t)$ when $n = 0$ and that a similar construction is obviously possible in the frequency domain. Actually the DTFT naturally results in a periodic function of the form
$$Y_{f_s}(f) = \sum_{k = -\infty}^{\infty} Y(f - k f_s), $$
such that $Y_{f_s}(f)$ is the periodic summation of $Y(f)$. As we will see later, the period $f_s$ is set by the number of samples $N$ at which we have the signal. In $\S$ 2.9 ➞ we will find it useful to think of $Y(f)$ as the spectrum of a bandlimited signal, $y(t)$ say. When the maximum frequency present in the signal is below a certain threshold the $Y_{f_s}(f)$ with $k \neq 0$ are exact copies of $Y(f)$ which we call aliases. This will become clearer after we have proved the Nyquist-Shannon sampling theorem.
2.8.1.2 Poisson summation formula <a id='math:sec:Poisson_summation'></a>
The Poisson summation formula is a result from analysis which is very important in Fourier theory. A general proof of this result will not add much to the current discussion. Instead we will simply point out its implications for Fourier theory as this will result in a particularly transparent proof of the Nyquist-Shannon sampling theorem.
Basically the Poisson summation formula can be used to relate the Fourier series coefficients of a periodic summation of a function to values which are proportional to the function's continuous Fourier transform. Suppose $Y(f)$ is the Fourier transform of the (Schwartz) function $y(t)$. Then
<a id='math:eq:8_003'></a><!--\label{math:eq:8_003}-->$$
\sum_{n = -\infty}^{\infty} \Delta t ~ y(\Delta t n) e^{-2\pi\imath f \Delta t n} = \sum_{k = -\infty}^{\infty} Y(f - \frac{k}{\Delta t}) = \sum_{k = -\infty}^{\infty} Y(f - kf_s) = Y_{f_s}(f). $$
This shows that the series $y_n = \Delta t y(\Delta t n)$ is sufficient to construct a periodic summation of $Y(f)$. The utility of this construction will become apparent a bit later. For now simply note that it is possible to construct $Y_{f_s}(f)$ from the Fourier series of the function $y(t)$ (scaled by $\Delta t$).
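Using the self-dual Gaussian $y(t) = e^{-\pi t^2}$, whose Fourier transform is $Y(f) = e^{-\pi f^2}$, the Poisson summation formula can be sanity-checked numerically; both infinite sums are truncated to a finite range, which is harmless here because the Gaussian decays so quickly:

```python
import numpy as np

# Poisson summation check for the self-dual Gaussian y(t) = exp(-pi t^2),
# whose continuous Fourier transform is Y(f) = exp(-pi f^2)
dt, f = 0.5, 0.3
n = np.arange(-50, 51)
k = np.arange(-50, 51)

# left-hand side: the scaled Fourier series built from samples of y
lhs = np.sum(dt * np.exp(-np.pi * (n * dt) ** 2) * np.exp(-2j * np.pi * f * n * dt))
# right-hand side: the periodic summation of Y(f) with period 1/dt
rhs = np.sum(np.exp(-np.pi * (f - k / dt) ** 2))
assert np.allclose(lhs, rhs)
```

The particular values of `dt`, `f` and the truncation range are arbitrary choices for illustration.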
The above discussion will mainly serve as a theoretical tool. It does not provide an obvious way to perform the Fourier transform in practice because it still requires an infinite number of components $y_n$. Before illustrating its utility we should construct a practical way to implement the Fourier transform.
2.8.2. The discrete Fourier transform: definition<a id='math:sec:the_discrete_fourier_transform_definition'></a>
Let $y= \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ be a finite set of complex numbers. Then the discrete Fourier transform (DFT) of $y$, denoted $\mathscr{F}_{\rm D}\{y\}$, is defined as
<a id='math:eq:8_004'></a><!--\label{math:eq:8_004}-->$$
\mathscr{F}_{\rm D}: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\
\mathscr{F}_{\rm D}\{y\} = \left\{Y_k\in\mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \quad \mbox{where} \quad
Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-2\pi\imath f_k t_n} = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}.
$$
In the above definition $f_k$ is the $k$-th frequency sample and $t_n$ is the $n$-th sampling instant. When the samples are spaced at uniform intervals $\Delta t$ apart these are given by
$$ t_n = t_0 + n\Delta t \quad \mbox{and} \quad f_k = \frac{kf_s}{N} \quad \mbox{where} \quad f_s = \frac{1}{\Delta t}. $$
Most of the proofs shown below are easiest to establish when thinking of the DFT in terms of the actual indices $k$ and $n$. This definition also has the advantage that the samples do not have to be uniformly spaced apart. In this section we use the notation
$$ \mathscr{F}_{\rm D}\{y\}_k = Y_k = \sum_{n\,=\,0}^{N-1} y_n\,e^{-\imath 2\pi \frac{nk}{N}}, $$
where the subscript $k$ on the LHS denotes the index not involved in the summation. Variables such as $Y_k$ and $y_n$ which are related as in the above expression are sometimes referred to as Fourier pairs or Fourier duals.
The number of Fourier transformed components $Y_k$ is the same as the number of samples of $y_n$. Denoting the set of Fourier transformed components by $Y = \left\{Y_k \in \mathbb{C}\right\}_{k = 0, \ldots, N-1}$, we can define the inverse discrete Fourier transform of $Y$, denoted $\mathscr{F}_{\rm D}^{-1}\{Y\}$, as
<a id='math:eq:8_005'></a><!--\label{math:eq:8_005}-->$$
\mathscr{F}_{\rm D}^{-1}: \left\{Y_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1} \rightarrow \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\\
\mathscr{F}_{\rm D}^{-1}\{Y\} = \left\{y_n\in\mathbb{C}\right\}_{n = 0, \ldots, N-1}
\quad \mbox{where} \quad y_n = \frac{1}{N} \sum_{k \,=\, 0}^{N-1} Y_k e^{\imath 2\pi \frac{nk}{N}} \ ,
$$
or in the abbreviated notation
$$ \mathscr{F}_{\rm D}^{-1}\{Y\}_n = y_n = \frac{1}{N} \sum_{k\,=\,0}^{N-1} Y_k\,e^{\imath 2\pi \frac{nk}{N}}. $$
The factor of $\frac{1}{N}$ appearing in the definition of the inverse DFT is a normalisation factor. We should mention that this normalisation is sometimes implemented differently by including a factor of $\sqrt{\frac{1}{N}}$ in the definition of both the forward and the inverse DFT. Some texts even omit it completely. We will follow the above convention throughout the course. The inverse DFT is the inverse operation with respect to the discrete Fourier transform (restricted to the original domain). This can be shown as follows:<br><br>
<a id='math:eq:8_006'></a><!--\label{math:eq:8_006}-->$$
\begin{align}
\mathscr{F}_{\rm D}^{-1}\left\{\mathscr{F}_{\rm D}\left\{y\right\}\right\}_{n^\prime} \,&=\, \frac{1}{N}\sum_{k\,=\,0}^{N-1} \left(\sum_{n\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}\right)e^{\imath 2\pi\frac{kn^\prime}{N}}\\
&=\,\frac{1}{N}\sum_{k\,=\,0}^{N-1} \sum_{n\,=\,0}^{N-1} \left( y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\
&=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{-\imath 2\pi\frac{kn}{N}}e^{\imath 2\pi\frac{kn^\prime}{N}}\right)\\
&=\,\frac{1}{N}\left(\sum_{k\,=\,0}^{N-1} y_{n^\prime}+\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} \sum_{k\,=\,0}^{N-1} y_n e^{\imath 2\pi\frac{k(n^\prime-n)}{N}}\right)\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \sum_{k\,=\,0}^{N-1} \left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^k\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-\left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)^N}{1-\left(e^{\imath 2\pi\frac{(n^\prime-n)}{N}}\right)}\\
&=\,y_{n^\prime}+\frac{1}{N}\sum_{\substack{n\,=\,0\\ n\,\neq\,n^\prime}}^{N-1} y_n \frac{1-e^{\imath 2\pi(n^\prime-n)}}{1-e^{\imath 2\pi\frac{(n^\prime-n)}{N}}}\\
&\underset{n,n^\prime \in \mathbb{N}}{=}\,y_{n^\prime},\\
\end{align}
$$
where we made use of the identity $\sum_{n\,=\,0}^{N-1}x^n \,=\, \frac{1-x^N}{1-x}$ and used the orthogonality of the sinusoids in the last step.
Clearly both the DFT and its inverse are periodic with period $N$
<a id='math:eq:8_007'></a><!--\label{math:eq:8_007}-->$$
\begin{align}
\mathscr{F}_{\rm D}\{y\}_k \,&=\,\mathscr{F}_{\rm D}\{y\}_{k \pm N} \\
\mathscr{F}_{\rm D}^{-1}\{Y\}_{n} \,&=\,\mathscr{F}_{\rm D}^{-1}\{Y\}_{n \pm N}.\\
\end{align}
$$
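The periodicity follows directly from the periodicity of the kernel $e^{-\imath 2\pi kn/N}$ in $k$; a quick numerical check:

```python
import numpy as np

x = np.random.random(8)
N = x.size
n = np.arange(N)

def dft_component(x, k):
    # evaluate the DFT sum at an arbitrary (possibly out-of-range) index k
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

# shifting the index by +-N leaves every component unchanged
for k in range(N):
    assert np.allclose(dft_component(x, k), dft_component(x, k + N))
    assert np.allclose(dft_component(x, k), dft_component(x, k - N))
```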
As is the case for the continuous Fourier transform, the inverse DFT can be expressed in terms of the forward DFT (without proof, but it's straightforward)
<a id='math:eq:8_008'></a><!--\label{math:eq:8_008}-->$$
\begin{align}
\mathscr{F}_{\rm D}^{-1}\{Y\}_n \,&=\, \frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{-n} \\
&=\,\frac{1}{N} \mathscr{F}_{\rm D}\{Y\}_{N-n}.\\
\end{align}
$$
The DFT of a real-valued set of numbers $y = \left\{y_n \in \mathbb{R}\right\}_{n\,=\,0, \ldots, \,N-1}$ is Hermitian (and vice versa)
<a id='math:eq:8_009'></a><!--\label{math:eq:8_009}-->$$
\begin{split}
\mathscr{F}_{\rm D}\{y\}_k\,&=\, \left(\mathscr{F}_{\rm D}\{y\}_{-k}\right)^*\\
&=\, \left(\mathscr{F}_{\rm D}\{y\}_{N-k}\right)^* \ .
\end{split}
$$
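This Hermitian symmetry is easy to verify numerically for a real-valued input, using numpy's FFT as a reference DFT implementation:

```python
import numpy as np

x = np.random.random(8)   # a real-valued input
Y = np.fft.fft(x)
N = x.size

# for real x, Y_{N-k} = conj(Y_k), with the index taken modulo N
for k in range(N):
    assert np.allclose(Y[(N - k) % N], np.conj(Y[k]))
```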
2.8.3. The Discrete convolution: definition and discrete convolution theorem<a id='math:sec:the_discrete_convolution_definition_and_discrete_convolution_theorem'></a>
For two sets of complex numbers $y = \left\{y_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ and $z = \left\{z_n \in \mathbb{C}\right\}_{n = 0, \ldots, N-1}$ the discrete convolution is, in analogy to the analytic convolution, defined as
<a id='math:eq:8_010'></a><!--\label{math:eq:8_010}-->$$
\circ: \left\{y_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1}\times \left\{z_n \in \mathbb{C}\right\}_{n \,=\, 0, \ldots, N-1} \rightarrow \left\{r_k \in \mathbb{C}\right\}_{k \,=\, 0, \ldots, N-1}\\
(y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{k-n}.\\
$$
However there is a bit of a subtlety in this definition. We have to take into account that if $n > k$ the index $k-n$ will be negative. Since we have defined our indices as being strictly positive, this requires introducing what is sometimes referred to as the "wraparound" convention. Recal that complex numbers $r_k = e^{\frac{\imath 2\pi k}{N}}$ have the property that $r_{k \pm mN} = r_k$, where $m \in \mathbb{Z}$ is an integer. In the "wraparound" convention we map indices lying outside the range $0, \cdots , N-1$ into this range using the modulo operator. In other words we amend the definition as follows
$$ (y\circ z)_k = r_k = \sum_{n\,=\,0}^{N-1} y_n z_{(k-n) \,{\rm mod}\, N}, $$
where $mod$ denotes the modulo operation. Just like the ordinary convolution, the discrete convolution is commutative. One important effect evident from this equation is that if the two series are "broad" enough, the convolution will be continued at the beginning of the series, an effect called aliasing.
The convolution theorem (i.e. that convolution in one domain is the pointwise product in the other domain) is also valid for the DFT and the discrete convolution operator. We state the theorem here without proof (it is similar to the proof for the continuous case). Let $(y \odot z)_n \underset{def}{=} y_n ~ z_n$ (this is the Hadamard or component-wise product, we will encounter it again in $\S$ 2.10 ➞). Then, for Fourier pairs $Y_k$ and $y_n$, and $Z_k$ and $z_n$, we have
<a id='math:eq:8_011'></a><!--\label{math:eq:8_011}-->$$
\forall N\,\in\, \mathbb{N}\\
\begin{align}
y \,&=\, \left\{y_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\
z \,&=\, \left\{z_n \in \mathbb{C}\right\}_{n\,=\,0, \ldots, \,N-1}\\
Y \,&=\, \left\{Y_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\
Z \,&=\, \left\{Z_k \in \mathbb{C}\right\}_{k\,=\,0, \ldots, \,N-1}\\
\end{align}\\
\begin{split}
\mathscr{F}_{\rm D}\{y\odot z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{y\}\circ \mathscr{F}_{\rm D}\{z\}\\
\mathscr{F}_{\rm D}^{-1}\{Y\odot Z\}\,&=\,\mathscr{F}_{\rm D}\{Y\}\circ \mathscr{F}_{\rm D}\{Z\}\\
\mathscr{F}_{\rm D}\{y\circ z\}\,&=\,\mathscr{F}_{\rm D}\{y\} \odot \mathscr{F}_{\rm D}\{z\}\\
\mathscr{F}_{\rm D}^{-1}\{Y\circ Z\}\,&=\,\frac{1}{N}\mathscr{F}_{\rm D}\{Y\} \odot \mathscr{F}_{\rm D}\{Z\}\\
\end{split}
$$
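Both the wraparound convention and the convolution theorem can be checked numerically by comparing a direct $\mathcal{O}(N^2)$ implementation against an FFT-based one:

```python
import numpy as np

def discrete_convolve(y, z):
    # direct circular convolution using the wraparound convention
    N = len(y)
    return np.array([sum(y[n] * z[(k - n) % N] for n in range(N))
                     for k in range(N)])

y = np.random.random(8)
z = np.random.random(8)

# convolution theorem: the DFT of the circular convolution equals the
# component-wise product of the individual DFTs
direct = discrete_convolve(y, z)
via_fft = np.fft.ifft(np.fft.fft(y) * np.fft.fft(z))
assert np.allclose(direct, via_fft)
```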
2.8.5. Numerically implementing the DFT <a id='math:sec:numerical_DFT'></a>
We now turn to how the DFT is implemented numerically. The most direct way to do this is to sum the components in a double loop of the form
End of explanation
def matrix_DFT(x):
Implementing the DFT in vectorised form
Input: x = the vector we want to find the DFT of
#Get the length of the vector (will only work for 1D arrays)
N = x.size
#Create vector to store result in
n = np.arange(N)
k = n.reshape((N,1))
K = np.exp(-1j*2.0*np.pi*k*n/N)
return K.dot(x)
Explanation: Although this would produce the correct result, this way of implementing the DFT is going to be incredibly slow. The DFT can be implemented in matrix form. Convince yourself that a vectorised implementation of this operation can be achieved with
$$ X = K x $$
where $K$ is the kernel matrix, it stores the values $K_{kn} = e^{\frac{-\imath 2 \pi k n}{N}}$. This is implemented numerically as follows
End of explanation
x = np.random.random(256) #create random vector to take the DFT of
np.allclose(loop_DFT(x),matrix_DFT(x)) #compare the result using numpy's built in function
Explanation: This function will be much faster than the previous implementation. We should check that they both return the same result
End of explanation
x = np.random.random(256) #create random vector to take the DFT of
np.allclose(np.fft.fft(x),matrix_DFT(x)) #compare the result using numpy's built in function
Explanation: Just to be sure our DFT really works, let's also compare the output of our function to numpy's built in DFT function (note numpy automatically implements a faster version of the DFT called the FFT, see the discussion below)
End of explanation
#First we simulate a time series as the sum of a number of sinusoids each with a different frequency
N = 512 #The number of samples of the time series
tmin = -10 #The minimum value of the time coordinate
tmax = 10 #The maximum value of the time coordinate
t = np.linspace(tmin,tmax,N) #The time coordinate
f1 = 1.0 #The frequency of the first sinusoid
f2 = 2.0 #The frequency of the second sinusoid
f3 = 3.0 #The frequency of the third sinusoid
#Generate the signal
y = np.sin(2.0*np.pi*f1*t) + np.sin(2.0*np.pi*f2*t) + np.sin(2.0*np.pi*f3*t)
#Take the DFT
Y = matrix_DFT(y)
#Plot the absolute value, real and imaginary parts
plt.figure(figsize=(15, 6))
plt.subplot(121)
plt.stem(abs(Y))
plt.xlabel('$k$',fontsize=18)
plt.ylabel(r'$|Y_k|$',fontsize=18)
plt.subplot(122)
plt.stem(np.angle(Y))
plt.xlabel('$k$',fontsize=18)
plt.ylabel(r'phase$(Y_k)$',fontsize=18)
Explanation: Great! Our function is returning the correct result. Next we do an example to demonstrate the duality between the spectral (frequency domain) and temporal (time domain) representations of a function. As the following example shows, the Fourier transform of a time series returns the frequencies contained in the signal.
The following code simulates a signal of the form
$$ y = \sin(2\pi f_1 t) + \sin(2\pi f_2 t) + \sin(2\pi f_3 t), $$
takes the DFT and plots the amplitude and phase of the resulting components $Y_k$.
End of explanation
#Get the sampling frequency
delt = t[1] - t[0]
fs = 1.0/delt
k = np.arange(N)
fk = k*fs/N
plt.figure(figsize=(15, 6))
plt.subplot(121)
plt.stem(fk,abs(Y))
plt.xlabel('$f_k$',fontsize=18)
plt.ylabel(r'$|Y_k|$',fontsize=18)
plt.subplot(122)
plt.stem(fk,np.angle(Y))
plt.xlabel('$f_k$',fontsize=18)
plt.ylabel(r'phase$(Y_k)$',fontsize=18)
Explanation: It is not immediately obvious that these are the frequencies contained in the signal. However, recall, from the definition given at the outset, that the frequencies are related to the index $k$ via
$$ f_k = \frac{k f_s}{N}, $$
where $f_s$ is the sampling frequency (i.e. one divided by the sampling period). Let's see what happens if we plot the $X_k$ against the $f_k$ using the following bit of code
End of explanation
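numpy can generate this frequency grid for us: np.fft.fftfreq returns the same $f_k$ values, with the upper half of the grid expressed as negative frequencies, so the two grids agree modulo $f_s$:

```python
import numpy as np

N, fs = 8, 4.0
fk = np.arange(N) * fs / N   # the grid f_k = k f_s / N used above
# fftfreq wraps the upper half of the grid to negative frequencies,
# so the two grids coincide modulo the sampling frequency
assert np.allclose(np.fft.fftfreq(N, d=1.0 / fs) % fs, fk)
```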
%timeit loop_DFT(x)
%timeit matrix_DFT(x)
Explanation: Here we see that the three main peaks correspond to the frequencies contained in the input signal viz. $f_1 = 1$Hz, $f_2 = 2$Hz and $f_3 = 3$Hz. But what do the other peaks mean? The additional frequency peaks are a consequence of the following facts:
the DFT of a real valued signal is Hermitian (see Hermitian property of real valued signals ⤵<!--\ref{math:eq:8_009}-->) so that $Y_{-k} = Y_k^*$,
the DFT is periodic with period $N$ (see Periodicity of the DFT ⤵<!--\ref{math:eq:8_007}-->) so that $Y_{k} = Y_{k+N}$. <br>
When used together the above facts imply that $Y_{N-k} = Y_k^*$. This will be important in $\S$ 2.9 ➞ when we discuss aliasing. Note that these additional frequency peaks contain no new information.
We have not explained some of the features of the signal viz.
Why are there non-zero components of $Y_k$ at frequencies that are not present in the input signal?
Why do the three main peaks not contain the same amount of power? This is a bit unexpected since all three components of the input signal have the same amplitude.
As we will see in $\S$ 2.9 ➞, these features result from the imperfect sampling of the signal. This is unavoidable in any practical application involving the DFT and will be a reoccurring theme throughout this course. You are encouraged to play with the parameters (eg. the minimum $t_{min}$ and maximum $t_{max}$ values of the time coordinate, the number of samples $N$ (do not use $N > 10^5$ points or you might be here for a while), the frequencies of the input components etc.) to get a feel for what does and does not work. In particular try setting the number of samples to $N = 32$ and see if you can explain the output. It might also be a good exercise to and implement the inverse DFT.
We already mentioned that the vectorised version of the DFT above will be much faster than the loop version. We can see exactly how much faster with the following commands
End of explanation
%timeit np.fft.fft(x)
Explanation: That is almost a factor of ten difference. Let's compare this to numpy's built in FFT
End of explanation
def one_layer_FFT(x):
An implementation of the 1D Cooley-Tukey FFT using one layer
N = x.size
if N%2>0:
print("Warning: length of x is not a power of two, returning DFT")
return matrix_DFT(x)
else:
X_even = matrix_DFT(x[::2])
X_odd = matrix_DFT(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([X_even + factor[:N // 2] * X_odd, X_even + factor[N // 2:] * X_odd])
Explanation: That seems amazing! The numpy FFT is about 1000 times faster than our vectorised implementation. But how does numpy achieve this speed up? Well, by using the fast Fourier transform of course.
2.8.6. Fast Fourier transforms<a id='math:sec:fast_fourier_tranforms'></a>
The DFT is a computationally expensive operation. As evidenced by the double loop required to implement the DFT the computational complexity of a naive implementation such as ours scales like $\mathcal{O}(N^2)$ where $N$ is the number of data points. Even a vectorised version of the DFT will scale like $\mathcal{O}(N^2)$ since, in the end, there are still the same number of complex exponentiations and multiplications involved.
By exploiting the symmetries of the DFT, it is not difficult to identify potential ways to safe computing time. Looking at the definition of the discrete Fourier transform discrete Fourier transform ⤵<!--\ref{math:eq:8_004}-->, one can see that, under certain circumstances, the same summands occur multiple times. Recall that the DFT is periodic i.e. $Y_k = Y_{N+k}$, where $N$ is the number of data points. Now suppose that $N = 8$. In calculating the component $Y_2$ we would have to compute the quantity $y_2\,e^{-2{\pi}\imath\frac{2 \cdot 2}{8}}$ i.e. when $n = 2$. However, using the periodicity of the kernel $e^{-2\pi\imath \frac{kn}{N}} = e^{-2\pi\imath \frac{k(n+N)}{N}}$, we can see that this same quantity will also have to be computed when calculating the component $Y_6$ since $y_2\,e^{-2{\pi}\imath\frac{2\cdot2}{8}}=y_2e^{-2{\pi}\imath\frac{6\cdot2}{8}} = y_2e^{-2{\pi}\imath\frac{12}{8}}$. If we were calculating the DFT by hand, it would be a waste of time to calculate this summand twice. To see how we can exploit this, lets first split the DFT into its odd and even $n$ indices as follows
\begin{eqnarray}
Y_{k} &=& \sum_{n = 0}^{N-1} y_n e^{-2\pi\imath \frac{kn}{N}}\\
&=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{k(2m)}{N}} + \sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{k(2m+1)}{N}}\\
&=& \sum_{m = 0}^{N/2-1} y_{2m} e^{-2\pi\imath \frac{km}{N/2}} + e^{-2\pi\imath \frac{k}{N}}\sum_{m = 0}^{N/2-1} y_{2m+1} e^{-2\pi\imath \frac{km}{N/2}}
\end{eqnarray}
Notice that we have split the DFT into two terms which look very much like DFT's of length $N/2$, only with a slight adjustment on the indices. Importantly the form of the kernel (i.e. $e^{-2\pi\imath \frac{km}{N/2}}$) looks the same for both the odd and the even $n$ indices. Now, while $k$ is in the range $0, \cdots , N-1$, $m$ only ranges through $0,\cdots,N/2 - 1$. The DFT written in the above form will therefore be periodic with period $N/2$ and we can exploit this periodic property to compute the DFT with half the number of computations. See the code below for an explicit implementation.
End of explanation
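Putting the pieces together, a self-contained sketch of the one-layer split described above. Note that matrix_DFT (the naive vectorised DFT from earlier in the chapter) is re-implemented here so the snippet runs on its own; note also the integer division N // 2, which is required in Python 3 for slicing:

```python
import numpy as np

def matrix_DFT(x):
    # Naive O(N^2) DFT: multiply by the kernel matrix K[k, n] = exp(-2j*pi*k*n/N).
    x = np.asarray(x, dtype=complex)
    N = x.shape[0]
    n = np.arange(N)
    K = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return K @ x

def one_layer_FFT(x):
    # One Cooley-Tukey step: two half-length DFTs combined with twiddle factors.
    x = np.asarray(x, dtype=complex)
    N = x.shape[0]
    assert N % 2 == 0, "length must be even for a single even/odd split"
    X_even = matrix_DFT(x[::2])
    X_odd = matrix_DFT(x[1::2])
    factor = np.exp(-2j * np.pi * np.arange(N) / N)
    return np.concatenate([X_even + factor[:N // 2] * X_odd,
                           X_even + factor[N // 2:] * X_odd])
```

Applying the split recursively (rather than only once) is what gives the full FFT its $\mathcal{O}(N \log N)$ complexity.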
np.allclose(np.fft.fft(x),one_layer_FFT(x))
Explanation: Let's confirm that this function returns the correct result by comparing with numpy's FFT.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFX Components Walk-through
Learning Objectives
Develop a high level understanding of TFX pipeline components.
Learn how to use a TFX Interactive Context for prototype development of TFX pipelines.
Work with the Tensorflow Data Validation (TFDV) library to check and analyze input data.
Utilize the Tensorflow Transform (TFT) library for scalable data preprocessing and feature transformations.
Employ the Tensorflow Model Analysis (TFMA) library for model evaluation.
In this lab, you will work with the Covertype Data Set and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features.
You will utilize TFX Interactive Context to work with the TFX components interactively in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines, or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalable database like MySQL, and artifacts in a persistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in a local file system of the Jupyter host.
Setup Note
Step1: Note
Step2: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
Step3: Configure lab settings
Set constants, location paths and other environment settings.
Step4: Creating Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters
Step5: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
In this exercise, you use the CsvExampleGen specialization of ExampleGen to ingest CSV files from a GCS location and emit them as tf.Example records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 train and eval splits. Hint
Step6: Examine the ingested data
Step7: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits
Step8: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
Step9: Infering data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen components
Step10: Visualize the inferred schema
Step11: Updating the auto-generated schema
In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
Step12: Modify the schema
You can use the protocol buffer APIs to modify the schema.
Hint
Step13: Save the updated schema
Step14: Importing the updated schema using Importer
The Importer component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the Importer component
Step15: Visualize the imported schema
Step16: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by Importer.
ExampleValidator can detect different classes of anomalies. For example it can
Step17: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
Step18: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by Transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
Step19: Configure and run the Transform component.
Step20: Examine the Transform component's outputs
The Transform component has 2 outputs
Step21: And the transform.examples artifact
Step22: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes
Step23: Create and run the Trainer component
Note that the Trainer component supports passing the field num_steps through the train_args and eval_args arguments.
Step24: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.
Step25: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is an URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on slices defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator automatically will label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the Resolver node to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case) the evaluator will make our candidate the first blessed model.
Step26: Configure evaluation metrics and slices.
Step27: Check the model performance validation status
Step28: Visualize evaluation results
You can visualize the evaluation results using the tfma.view.render_slicing_metrics() function from TensorFlow Model Analysis library.
Setup Note
Step29: InfraValidator
The InfraValidator component acts as an additional early warning layer by validating a candidate model in a sandbox version of its serving infrastructure to prevent an unservable model from being pushed to production. Compared to the Evaluator component above, which validates a model's performance, the InfraValidator component validates that a model is able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sandboxed TensorFlow Serving model server from the latest image in a local docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production.
<img src=../../images/InfraValidator.png width="400">
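One way the configuration just described might look in code. This is a sketch only: the trainer and example_gen handles are assumed from earlier steps, the proto fields mirror the tfx.proto.infra_validator_pb2 messages, and concrete values such as the timeout and request count are illustrative:

```python
# Hypothetical InfraValidator wiring; field values are illustrative, not canonical.
infra_validator = InfraValidator(
    model=trainer.outputs["model"],
    examples=example_gen.outputs["examples"],
    serving_spec=infra_validator_pb2.ServingSpec(
        # Serve the candidate model with TensorFlow Serving in a local docker engine.
        tensorflow_serving=infra_validator_pb2.TensorFlowServing(tags=["latest"]),
        local_docker=infra_validator_pb2.LocalDockerConfig(),
    ),
    validation_spec=infra_validator_pb2.ValidationSpec(
        max_loading_time_seconds=60,
        num_tries=3,
    ),
    request_spec=infra_validator_pb2.RequestSpec(
        # Also check that the loaded model answers actual prediction requests.
        tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(),
        num_examples=3,
    ),
)
```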
Step30: Check the model infrastructure validation status
Step31: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component
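A possible wiring for the Pusher (a sketch, not the canonical lab solution; model_analyzer, infra_validator and SERVING_MODEL_DIR are the handles and constants defined earlier in this lab):

```python
# Push the model only if both the Evaluator and InfraValidator blessed it.
pusher = Pusher(
    model=trainer.outputs["model"],
    model_blessing=model_analyzer.outputs["blessing"],
    infra_blessing=infra_validator.outputs["blessing"],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=SERVING_MODEL_DIR
        )
    ),
).with_id("Pusher")
```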
Step32: Examine the output of Pusher | Python Code:
import os
import time
from pprint import pprint
import absl
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
import tfx
from tensorflow_metadata.proto.v0 import schema_pb2
from tfx.components import (
CsvExampleGen,
Evaluator,
ExampleValidator,
InfraValidator,
Pusher,
SchemaGen,
StatisticsGen,
Trainer,
Transform,
)
from tfx.components.trainer import executor as trainer_executor
from tfx.dsl.components.base import executor_spec
from tfx.dsl.components.common.importer import Importer
from tfx.dsl.components.common.resolver import Resolver
from tfx.dsl.input_resolution.strategies.latest_blessed_model_strategy import (
LatestBlessedModelStrategy,
)
from tfx.orchestration import metadata, pipeline
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
from tfx.proto import (
example_gen_pb2,
infra_validator_pb2,
pusher_pb2,
trainer_pb2,
)
from tfx.types import Channel
from tfx.types.standard_artifacts import Model, ModelBlessing
Explanation: TFX Components Walk-through
Learning Objectives
Develop a high level understanding of TFX pipeline components.
Learn how to use a TFX Interactive Context for prototype development of TFX pipelines.
Work with the Tensorflow Data Validation (TFDV) library to check and analyze input data.
Utilize the Tensorflow Transform (TFT) library for scalable data preprocessing and feature transformations.
Employ the Tensorflow Model Analysis (TFMA) library for model evaluation.
In this lab, you will work with the Covertype Data Set and use TFX to analyze, understand, and pre-process the dataset and train, analyze, validate, and deploy a multi-class classification model to predict the type of forest cover from cartographic features.
You will utilize TFX Interactive Context to work with the TFX components interactively in a Jupyter notebook environment. Working in an interactive notebook is useful when doing initial data exploration, experimenting with models, and designing ML pipelines. You should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts. In a production deployment of TFX on GCP, you will use an orchestrator such as Kubeflow Pipelines, or Cloud Composer. In an interactive mode, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells. In a production deployment, ML Metadata will be managed in a scalable database like MySQL, and artifacts in a persistent store such as Google Cloud Storage. In an interactive mode, both properties and payloads are stored in a local file system of the Jupyter host.
Setup Note:
Currently, TFMA visualizations do not render properly in JupyterLab. It is recommended to run this notebook in Jupyter Classic Notebook. To switch to Classic Notebook select Launch Classic Notebook from the Help menu.
End of explanation
print("Tensorflow Version:", tf.__version__)
print("TFX Version:", tfx.__version__)
print("TFDV Version:", tfdv.__version__)
print("TFMA Version:", tfma.VERSION_STRING)
absl.logging.set_verbosity(absl.logging.INFO)
Explanation: Note: this lab was developed and tested with the following TF ecosystem package versions:
Tensorflow Version: 2.6.2
TFX Version: 1.4.0
TFDV Version: 1.4.0
TFMA Version: 0.35.0
If you encounter errors with the above imports (e.g. TFX component not found), check your package versions in the cell below.
End of explanation
os.environ["PATH"] += os.pathsep + "/home/jupyter/.local/bin"
Explanation: If the versions above do not match, update your packages in the current Jupyter kernel below. The default %pip package installation location is not on your system installation PATH; use the command below to append the local installation path to pick up the latest package versions. Note that you may also need to restart your notebook kernel to pick up the specified package versions and re-run the imports cell above before proceeding with the lab.
End of explanation
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "../../data"
Explanation: Configure lab settings
Set constants, location paths and other environment settings.
End of explanation
PIPELINE_NAME = "tfx-covertype-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
Explanation: Creating Interactive Context
TFX Interactive Context allows you to create and run TFX Components in an interactive mode. It is designed to support experimentation and development in a Jupyter Notebook environment. It is an experimental feature and major changes to interface and functionality are expected. When creating the interactive context you can specify the following parameters:
- pipeline_name - Optional name of the pipeline for ML Metadata tracking purposes. If not specified, a name will be generated for you.
- pipeline_root - Optional path to the root of the pipeline's outputs. If not specified, an ephemeral temporary directory will be created and used.
- metadata_connection_config - Optional metadata_store_pb2.ConnectionConfig instance used to configure connection to a ML Metadata connection. If not specified, an ephemeral SQLite MLMD connection contained in the pipeline_root directory with file name "metadata.sqlite" will be used.
End of explanation
output_config = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(
splits=[
# TODO: Your code to configure train data split
# TODO: Your code to configure eval data split
]
)
)
example_gen = tfx.components.CsvExampleGen(
input_base=DATA_ROOT, output_config=output_config
).with_id("CsvExampleGen")
context.run(example_gen)
Explanation: Ingesting data using ExampleGen
In any ML development process the first step is to ingest the training and test datasets. The ExampleGen component ingests data into a TFX pipeline. It consumes external files/services to generate a set of files in the TFRecord format, which will be used by other TFX components. It can also shuffle the data and split it into an arbitrary number of partitions.
<img src=../../images/ExampleGen.png width="300">
Configure and run CsvExampleGen
In this exercise, you use the CsvExampleGen specialization of ExampleGen to ingest CSV files from a GCS location and emit them as tf.Example records for consumption by downstream TFX pipeline components. Your task is to configure the component to create 80-20 train and eval splits. Hint: review the ExampleGen proto definition to split your data with hash buckets.
End of explanation
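For reference, one way the TODOs in the cell above could be filled in. A hash-bucket split in a 4:1 ratio yields the requested 80-20 train/eval partition (a sketch, not the canonical lab solution; variable names follow the cell above):

```python
# Hash each example into 5 buckets: 4 for train (80%), 1 for eval (20%).
output_config = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(
        splits=[
            example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=4),
            example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=1),
        ]
    )
)
```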
examples_uri = example_gen.outputs["examples"].get()[-1].uri
tfrecord_filenames = [
os.path.join(examples_uri, "Split-train", name)
for name in os.listdir(os.path.join(examples_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: Examine the ingested data
End of explanation
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs["examples"]
).with_id("StatisticsGen")
context.run(statistics_gen)
Explanation: Generating statistics using StatisticsGen
The StatisticsGen component generates data statistics that can be used by other TFX components. StatisticsGen uses TensorFlow Data Validation. StatisticsGen generates statistics for each split in the ExampleGen component's output. In our case there are two splits: train and eval.
<img src=../../images/StatisticsGen.png width="200">
Configure and run the StatisticsGen component
End of explanation
context.show(statistics_gen.outputs["statistics"])
Explanation: Visualize statistics
The generated statistics can be visualized using the tfdv.visualize_statistics() function from the TensorFlow Data Validation library or using a utility method of the InteractiveContext object. In fact, most of the artifacts generated by the TFX components can be visualized using InteractiveContext.
End of explanation
schema_gen = SchemaGen(
statistics=statistics_gen.outputs["statistics"], infer_feature_shape=False
).with_id("SchemaGen")
context.run(schema_gen)
Explanation: Infering data schema using SchemaGen
Some TFX components use a description of the input data called a schema. The schema is an instance of schema.proto. It can specify data types for feature values, whether a feature has to be present in all examples, allowed value ranges, and other properties. SchemaGen automatically generates the schema by inferring types, categories, and ranges from data statistics. The auto-generated schema is best-effort and only tries to infer basic properties of the data. It is expected that developers review and modify it as needed. SchemaGen uses TensorFlow Data Validation.
The SchemaGen component generates the schema using the statistics for the train split. The statistics for other splits are ignored.
<img src=../../images/SchemaGen.png width="200">
Configure and run the SchemaGen components
End of explanation
context.show(schema_gen.outputs["schema"])
Explanation: Visualize the inferred schema
End of explanation
schema_proto_path = "{}/{}".format(
schema_gen.outputs["schema"].get()[0].uri, "schema.pbtxt"
)
schema = tfdv.load_schema_text(schema_proto_path)
Explanation: Updating the auto-generated schema
In most cases the auto-generated schemas must be fine-tuned manually using insights from data exploration and/or domain knowledge about the data. For example, you know that in the covertype dataset there are seven types of forest cover (coded using 1-7 range) and that the value of the Slope feature should be in the 0-90 range. You can manually add these constraints to the auto-generated schema by setting the feature domain.
Load the auto-generated schema proto file
End of explanation
# TODO: Your code to restrict the categorical feature Cover_Type between the values of 0 and 6.
# TODO: Your code to restrict the numeric feature Slope between 0 and 90.
tfdv.display_schema(schema=schema)
Explanation: Modify the schema
You can use the protocol buffer APIs to modify the schema.
Hint: Review the TFDV library API documentation on setting a feature's domain. You can use the protocol buffer APIs to modify the schema. Review the Tensorflow Metadata proto definition for configuration options.
End of explanation
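A sketch of how the hinted edits might be expressed with TFDV's set_domain helper. The ranges follow the constraints stated above; treat this as one possible answer to the TODOs, not the canonical one:

```python
# Restrict Cover_Type to the seven classes coded 0-6 and mark it categorical.
tfdv.set_domain(
    schema,
    "Cover_Type",
    schema_pb2.IntDomain(name="Cover_Type", min=0, max=6, is_categorical=True),
)

# Constrain Slope to the physically meaningful 0-90 degree range.
tfdv.set_domain(
    schema, "Slope", schema_pb2.FloatDomain(name="Slope", min=0.0, max=90.0)
)
```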
schema_dir = os.path.join(ARTIFACT_STORE, "schema")
tf.io.gfile.makedirs(schema_dir)
schema_file = os.path.join(schema_dir, "schema.pbtxt")
tfdv.write_schema_text(schema, schema_file)
!cat {schema_file}
Explanation: Save the updated schema
End of explanation
schema_importer = Importer(
source_uri=schema_dir, artifact_type=tfx.types.standard_artifacts.Schema
).with_id("SchemaImporter")
context.run(schema_importer)
Explanation: Importing the updated schema using Importer
The Importer component allows you to import an external artifact, including the schema file, so it can be used by other TFX components in your workflow.
Configure and run the Importer component
End of explanation
context.show(schema_importer.outputs["result"])
Explanation: Visualize the imported schema
End of explanation
# TODO: Complete ExampleValidator
# Hint: review the visual above and review the documentation on ExampleValidator's inputs and outputs:
# https://www.tensorflow.org/tfx/guide/exampleval
# Make sure you use the output of the schema_importer component created above.
example_validator = ExampleValidator()
context.run(example_validator)
Explanation: Validating data with ExampleValidator
The ExampleValidator component identifies anomalies in data. It identifies anomalies by comparing data statistics computed by the StatisticsGen component against a schema generated by SchemaGen or imported by Importer.
ExampleValidator can detect different classes of anomalies. For example it can:
perform validity checks by comparing data statistics against a schema
detect training-serving skew by comparing training and serving data.
detect data drift by looking at a series of data.
The ExampleValidator component validates the data in the eval split only. Other splits are ignored.
<img src=../../images/ExampleValidator.png width="350">
Configure and run the ExampleValidator component
End of explanation
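One plausible completion of the TODO above (a sketch, not the canonical lab solution), wiring in the statistics from StatisticsGen and the curated schema from the Importer:

```python
# Compare eval-split statistics against the manually curated schema.
example_validator = ExampleValidator(
    statistics=statistics_gen.outputs["statistics"],
    schema=schema_importer.outputs["result"],
).with_id("ExampleValidator")
```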
context.show(example_validator.outputs["anomalies"])
Explanation: Visualize validation results
The file anomalies.pbtxt can be visualized using context.show.
End of explanation
TRANSFORM_MODULE = "preprocessing.py"
!cat {TRANSFORM_MODULE}
Explanation: In our case no anomalies were detected in the eval split.
For a detailed deep dive into data validation and schema generation refer to the lab-31-tfdv-structured-data lab.
Preprocessing data with Transform
The Transform component performs data transformation and feature engineering. The Transform component consumes tf.Examples emitted from the ExampleGen component and emits the transformed feature data and the SavedModel graph that was used to process the data. The emitted SavedModel can then be used by serving components to make sure that the same data pre-processing logic is applied at training and serving.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
<img src=../../images/Transform.png width="400">
Define the pre-processing module
To configure Transform, you need to encapsulate your pre-processing code in the Python preprocessing_fn function and save it to a python module that is then provided to the Transform component as an input. This module will be loaded by Transform and the preprocessing_fn function will be called when the Transform component runs.
In most cases, your implementation of the preprocessing_fn makes extensive use of TensorFlow Transform for performing feature engineering on your dataset.
End of explanation
transform = Transform(
examples=example_gen.outputs["examples"],
schema=schema_importer.outputs["result"],
module_file=TRANSFORM_MODULE,
).with_id("Transform")
context.run(transform)
Explanation: Configure and run the Transform component.
End of explanation
os.listdir(transform.outputs["transform_graph"].get()[0].uri)
Explanation: Examine the Transform component's outputs
The Transform component has 2 outputs:
transform_graph - contains the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples - contains the preprocessed training and evaluation data.
Take a peek at the transform_graph artifact: it points to a directory containing 3 subdirectories:
End of explanation
os.listdir(transform.outputs["transformed_examples"].get()[0].uri)
transform_uri = transform.outputs["transformed_examples"].get()[0].uri
tfrecord_filenames = [
os.path.join(transform_uri, "Split-train", name)
for name in os.listdir(os.path.join(transform_uri, "Split-train"))
]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
for tfrecord in dataset.take(2):
example = tf.train.Example()
example.ParseFromString(tfrecord.numpy())
for name, feature in example.features.feature.items():
if feature.HasField("bytes_list"):
value = feature.bytes_list.value
if feature.HasField("float_list"):
value = feature.float_list.value
if feature.HasField("int64_list"):
value = feature.int64_list.value
print(f"{name}: {value}")
print("******")
Explanation: And the transform.examples artifact
End of explanation
TRAINER_MODULE_FILE = "model.py"
!cat {TRAINER_MODULE_FILE}
Explanation: Train your TensorFlow model with the Trainer component
The Trainer component trains a model using TensorFlow.
Trainer takes:
tf.Examples used for training and eval.
A user provided module file that defines the trainer logic.
A data schema created by SchemaGen or imported by Importer.
A proto definition of train args and eval args.
An optional transform graph produced by upstream Transform component.
An optional base models used for scenarios such as warmstarting training.
<img src=../../images/Trainer.png width="400">
Define the trainer module
To configure Trainer, you need to encapsulate your training code in a Python module that is then provided to the Trainer as an input.
End of explanation
trainer = Trainer(
custom_executor_spec=executor_spec.ExecutorClassSpec(
trainer_executor.GenericExecutor
),
module_file=TRAINER_MODULE_FILE,
transformed_examples=transform.outputs["transformed_examples"],
schema=schema_importer.outputs["result"],
transform_graph=transform.outputs["transform_graph"],
train_args=trainer_pb2.TrainArgs(splits=["train"], num_steps=2),
eval_args=trainer_pb2.EvalArgs(splits=["eval"], num_steps=1),
).with_id("Trainer")
context.run(trainer)
Explanation: Create and run the Trainer component
Note that the Trainer component supports passing the field num_steps through the train_args and eval_args arguments.
End of explanation
logs_path = trainer.outputs["model_run"].get()[0].uri
print(logs_path)
Explanation: Analyzing training runs with TensorBoard
In this step you will analyze the training run with TensorBoard.dev. TensorBoard.dev is a managed service that enables you to easily host, track and share your ML experiments.
Retrieve the location of TensorBoard logs
Each model run's train and eval metric logs are written to the model_run directory by the Tensorboard callback defined in model.py.
End of explanation
model_resolver = Resolver(
strategy_class=LatestBlessedModelStrategy,
model=Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=Channel(type=tfx.types.standard_artifacts.ModelBlessing),
).with_id("LatestBlessedModelResolver")
context.run(model_resolver)
Explanation: Upload the logs and start TensorBoard.dev
Open a new JupyterLab terminal window
From the terminal window, execute the following command
tensorboard dev upload --logdir [YOUR_LOGDIR]
Where [YOUR_LOGDIR] is an URI retrieved by the previous cell.
You will be asked to authorize TensorBoard.dev using your Google account. If you don't have a Google account or you don't want to authorize TensorBoard.dev you can skip this exercise.
After the authorization process completes, follow the link provided to view your experiment.
Evaluating trained models with Evaluator
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on slices defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
The Evaluator can also optionally validate a newly trained model against a previous model. In this lab, you only train one model, so the Evaluator will automatically label the model as "blessed".
<img src=../../images/Evaluator.png width="400">
Configure and run the Evaluator component
Use the Resolver node to pick the previous model to compare against. The model resolver is only required if performing model validation in addition to evaluation. In this case we validate against the latest blessed model. If no model has been blessed before (as in this case), the evaluator will make our candidate the first blessed model.
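Stripped of the TFMA machinery, the blessing decision reduces to a simple rule. The sketch below is a hypothetical pure-Python illustration of that logic — the helper name and default bounds are ours, not part of TFX:

```python
def is_blessed(candidate_accuracy, baseline_accuracy=None,
               lower_bound=0.5, upper_bound=0.99):
    """Illustrative blessing rule: the candidate must land inside the
    configured threshold and must not be worse than the blessed baseline."""
    within_threshold = lower_bound <= candidate_accuracy <= upper_bound
    beats_baseline = baseline_accuracy is None or candidate_accuracy >= baseline_accuracy
    return within_threshold and beats_baseline

# First run: no previously blessed model, so any in-range candidate is blessed.
print(is_blessed(0.87))                          # True
# A later candidate must also match or beat the blessed baseline.
print(is_blessed(0.82, baseline_accuracy=0.87))  # False
```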
End of explanation
# TODO: Your code here to create a tfma.MetricThreshold.
# Review the API documentation here: https://www.tensorflow.org/tfx/model_analysis/api_docs/python/tfma/MetricThreshold
# Hint: Review the API documentation for tfma.GenericValueThreshold to constrain accuracy between 50% and 99%.
# One possible solution to the TODO above:
accuracy_threshold = tfma.MetricThreshold(
    value_threshold=tfma.GenericValueThreshold(
        lower_bound={"value": 0.5}, upper_bound={"value": 0.99}
    )
)
metrics_specs = tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(
class_name="SparseCategoricalAccuracy", threshold=accuracy_threshold
),
tfma.MetricConfig(class_name="ExampleCount"),
]
)
eval_config = tfma.EvalConfig(
model_specs=[tfma.ModelSpec(label_key="Cover_Type")],
metrics_specs=[metrics_specs],
slicing_specs=[
tfma.SlicingSpec(),
tfma.SlicingSpec(feature_keys=["Wilderness_Area"]),
],
)
eval_config
model_analyzer = Evaluator(
examples=example_gen.outputs["examples"],
model=trainer.outputs["model"],
baseline_model=model_resolver.outputs["model"],
eval_config=eval_config,
).with_id("ModelEvaluator")
context.run(model_analyzer, enable_cache=False)
Explanation: Configure evaluation metrics and slices.
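To make slicing concrete, here is a small pure-Python sketch of what a sliced metric computes — grouping examples by one feature's value and scoring each group separately. This is a conceptual illustration, not the TFMA implementation, and the example records are made up:

```python
from collections import defaultdict

def accuracy_by_slice(examples, slice_key):
    """Group examples by one feature's value and compute accuracy per group."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        key = ex[slice_key]
        total[key] += 1
        correct[key] += int(ex["label"] == ex["prediction"])
    return {key: correct[key] / total[key] for key in total}

examples = [
    {"Wilderness_Area": "Rawah", "label": 1, "prediction": 1},
    {"Wilderness_Area": "Rawah", "label": 2, "prediction": 1},
    {"Wilderness_Area": "Comanche", "label": 3, "prediction": 3},
]
print(accuracy_by_slice(examples, "Wilderness_Area"))
# {'Rawah': 0.5, 'Comanche': 1.0}
```

A model can look fine on the overall metric while underperforming badly on one slice, which is exactly what this per-group view exposes.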
End of explanation
model_blessing_uri = model_analyzer.outputs["blessing"].get()[0].uri
!ls -l {model_blessing_uri}
Explanation: Check the model performance validation status
End of explanation
evaluation_uri = model_analyzer.outputs["evaluation"].get()[0].uri
evaluation_uri
!ls {evaluation_uri}
eval_result = tfma.load_eval_result(evaluation_uri)
eval_result
tfma.view.render_slicing_metrics(eval_result)
tfma.view.render_slicing_metrics(eval_result, slicing_column="Wilderness_Area")
Explanation: Visualize evaluation results
You can visualize the evaluation results using the tfma.view.render_slicing_metrics() function from TensorFlow Model Analysis library.
Setup Note: Currently, TFMA visualizations don't render in JupyterLab. Make sure that you run this notebook in Classic Notebook.
End of explanation
infra_validator = InfraValidator(
model=trainer.outputs["model"],
examples=example_gen.outputs["examples"],
serving_spec=infra_validator_pb2.ServingSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServing(
tags=["latest"]
),
local_docker=infra_validator_pb2.LocalDockerConfig(),
),
validation_spec=infra_validator_pb2.ValidationSpec(
max_loading_time_seconds=60,
num_tries=5,
),
request_spec=infra_validator_pb2.RequestSpec(
tensorflow_serving=infra_validator_pb2.TensorFlowServingRequestSpec(),
num_examples=5,
),
).with_id("ModelInfraValidator")
context.run(infra_validator, enable_cache=False)
Explanation: InfraValidator
The InfraValidator component acts as an additional early warning layer by validating a candidate model in a sandbox version of its serving infrastructure to prevent an unservable model from being pushed to production. Compared to the Evaluator component above which validates a model's performance, the InfraValidator component is validating that a model is able to generate predictions from served examples in an environment configured to match production. The config below takes a model and examples, launches the model in a sand-boxed TensorflowServing model server from the latest image in a local docker engine, and optionally checks that the model binary can be loaded and queried before "blessing" it for production.
<img src=../../images/InfraValidator.png width="400">
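The validation_spec above bounds the check by a time budget and a retry count. Conceptually, the loop looks like the following stdlib-only sketch — a hypothetical stand-in for the real Docker-based check, with a fake "server" callable:

```python
import time

def validate_serving(load_and_query, num_tries=5, max_loading_time_seconds=60):
    """Try to load and query a model server, retrying up to num_tries times
    within a time budget; 'bless' only if one attempt succeeds."""
    deadline = time.monotonic() + max_loading_time_seconds
    for _ in range(num_tries):
        if time.monotonic() > deadline:
            return False
        try:
            load_and_query()   # stands in for "serve the model, send requests"
            return True
        except Exception:
            continue           # transient failure: retry
    return False

attempts = []
def flaky_server():
    attempts.append(1)
    if len(attempts) < 3:      # fails twice, then succeeds
        raise RuntimeError("model not ready yet")

print(validate_serving(flaky_server))  # True (succeeded on the third try)
```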
End of explanation
infra_blessing_uri = infra_validator.outputs["blessing"].get()[0].uri
!ls -l {infra_blessing_uri}
Explanation: Check the model infrastructure validation status
End of explanation
trainer.outputs["model"]
pusher = Pusher(
model=trainer.outputs["model"],
model_blessing=model_analyzer.outputs["blessing"],
infra_blessing=infra_validator.outputs["blessing"],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=SERVING_MODEL_DIR
)
),
).with_id("ModelPusher")
context.run(pusher)
Explanation: Deploying models with Pusher
The Pusher component checks whether a model has been "blessed", and if so, deploys it by pushing the model to a well known file destination.
<img src=../../images/Pusher.png width="400">
Configure and run the Pusher component
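Under the filesystem PushDestination used here, "pushing" amounts to copying the exported model into a versioned sub-directory of the serving location. A hypothetical stdlib sketch of that behavior — throw-away temp directories stand in for the real artifact paths:

```python
import os
import shutil
import tempfile
import time

def push_model(model_dir, serving_dir, blessed):
    """Copy the exported model into a timestamp-versioned sub-directory
    of the serving location, but only if the model was blessed."""
    if not blessed:
        return None
    destination = os.path.join(serving_dir, str(int(time.time())))
    shutil.copytree(model_dir, destination)
    return destination

model_dir = tempfile.mkdtemp()
serving_dir = tempfile.mkdtemp()
open(os.path.join(model_dir, "saved_model.pb"), "w").close()

pushed = push_model(model_dir, serving_dir, blessed=True)
print(os.listdir(pushed))                                  # ['saved_model.pb']
print(push_model(model_dir, serving_dir, blessed=False))   # None
```

The timestamped directory name is what lets a model server pick up the latest version simply by sorting the directory listing, as the `max(os.listdir(...))` call below does.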
End of explanation
pusher.outputs
latest_pushed_model = os.path.join(
SERVING_MODEL_DIR, max(os.listdir(SERVING_MODEL_DIR))
)
ls $latest_pushed_model
Explanation: Examine the output of Pusher
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
05 - Support Vector Machines
by Alejandro Correa Bahnsen and Jesus Solano
version 1.4, January 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories
Previously we introduced supervised machine learning.
There are many supervised learning algorithms available; here we'll go into brief detail on one of the most powerful and interesting methods
Step1: Motivating Support Vector Machines
Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for classification or for regression. SVMs are a discriminative classifier
Step2: A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem
Step3: These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!
How can we improve on this?
Support Vector Machines
Step4: Notice here that if we want to maximize this width, the middle fit is clearly the best.
This is the intuition of support vector machines, which optimize a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets.
Fitting a Support Vector Machine
Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
Step6: To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us
Step7: Notice that the dashed lines touch a couple of the points
Step8: Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
(This is only available in IPython 2.0+, and will not work in a static view)
Step9: Notice the unique thing about SVM is that only the support vectors matter
Step10: Clearly, no linear discrimination will ever separate these data.
One way we can adjust this is to apply a kernel, which is some functional transformation of the input data.
For example, one simple model we could use is a radial basis function
Step11: If we plot this along with our data, we can see the effect of it
Step12: We can see that with this additional dimension, the data becomes trivially linearly separable!
This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using kernel='rbf', short for radial basis function
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('bmh')
Explanation: 05 - Support Vector Machines
by Alejandro Correa Bahnsen and Jesus Solano
version 1.4, January 2019
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Rick Muller, Sandia National Laboratories
Previously we introduced supervised machine learning.
There are many supervised learning algorithms available; here we'll go into brief detail on one of the most powerful and interesting methods: Support Vector Machines (SVMs).
End of explanation
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50);
Explanation: Motivating Support Vector Machines
Support Vector Machines (SVMs) are a powerful supervised learning algorithm used for classification or for regression. SVMs are a discriminative classifier: that is, they draw a boundary between clusters of data.
Let's show a quick example of support vector classification. First we need to create a dataset:
End of explanation
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
Explanation: A discriminative classifier attempts to draw a line between the two sets of data. Immediately we see a problem: such a line is ill-posed! For example, we could come up with several possibilities which perfectly discriminate between the classes in this example:
End of explanation
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none', color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
Explanation: These are three very different separators which perfectly discriminate between these samples. Depending on which you choose, a new data point will be classified almost entirely differently!
How can we improve on this?
Support Vector Machines: Maximizing the Margin
Support vector machines are one way to address this.
What support vector machines do is not only draw a line, but consider a region about the line of some given width. Here's an example of what it might look like:
End of explanation
from sklearn.svm import SVC # "Support Vector Classifier"
clf = SVC(kernel='linear')
clf.fit(X, y)
Explanation: Notice here that if we want to maximize this width, the middle fit is clearly the best.
This is the intuition of support vector machines, which optimize a linear discriminant model in conjunction with a margin representing the perpendicular distance between the datasets.
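For a linear decision function f(x) = w·x + b, the two margin edges are w·x + b = +1 and w·x + b = -1, and the gap between them works out to 2/||w|| — so maximizing the margin is the same as minimizing ||w|| subject to the classification constraints. A quick stdlib check of that relationship with toy weight vectors (not the fitted model):

```python
import math

def margin_width(w):
    """Distance between the hyperplanes w.x + b = +1 and w.x + b = -1."""
    return 2.0 / math.hypot(*w)

print(margin_width((3.0, 4.0)))  # 0.4  (||w|| = 5: a narrow margin)
print(margin_width((0.6, 0.8)))  # ~2.0 (same direction, ||w|| = 1: a wider margin)
```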
Fitting a Support Vector Machine
Now we'll fit a Support Vector Machine Classifier to these points. While the mathematical details of the likelihood model are interesting, we'll let you read about those elsewhere. Instead, we'll just treat the scikit-learn algorithm as a black box which accomplishes the above task.
End of explanation
import warnings
warnings.filterwarnings('ignore')
def plot_svc_decision_function(clf, ax=None):
    """Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros_like(X)
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[i, j] = clf.decision_function([[xi, yj]])
# plot the margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
Explanation: To better visualize what's happening here, let's create a quick convenience function that will plot SVM decision boundaries for us:
End of explanation
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none');
Explanation: Notice that the dashed lines touch a couple of the points: these points are the pivotal pieces of this fit, and are known as the support vectors (giving the algorithm its name).
In scikit-learn, these are stored in the support_vectors_ attribute of the classifier:
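In other words, the support vectors are the training points lying on (or inside) the margin, i.e. where |w·x + b| <= 1. A hand-rolled illustration with a known separating line and made-up points — not the blobs fit above:

```python
def on_margin(points, w, b, tol=1e-9):
    """Return the points with |w.x + b| <= 1 -- the support-vector candidates."""
    return [p for p in points
            if abs(w[0] * p[0] + w[1] * p[1] + b) <= 1 + tol]

w, b = (1.0, 0.0), 0.0          # decision boundary x = 0, margin edges at x = +/-1
points = [(-3.0, 1.0), (-1.0, 0.5), (1.0, 2.0), (4.0, -1.0)]
print(on_margin(points, w, b))  # [(-1.0, 0.5), (1.0, 2.0)]
```

Moving (-3.0, 1.0) or (4.0, -1.0) anywhere outside the margin leaves this set — and hence the fitted boundary — unchanged, which is exactly the behavior explored interactively below.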
End of explanation
from IPython.html.widgets import interact
def plot_svm(N=10):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
clf = SVC(kernel='linear')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plt.xlim(-1, 4)
plt.ylim(-1, 6)
plot_svc_decision_function(clf, plt.gca())
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=200, facecolors='none')
interact(plot_svm, N=[10, 200]);  # plot_svm takes no kernel argument
Explanation: Let's use IPython's interact functionality to explore how the distribution of points affects the support vectors and the discriminative fit.
(This is only available in IPython 2.0+, and will not work in a static view)
End of explanation
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
# plot_svc_decision_function(clf);
Explanation: Notice the unique thing about SVM is that only the support vectors matter: that is, if you moved any of the other points without letting them cross the decision boundaries, they would have no effect on the classification results!
Going further: Kernel Methods
Where SVM gets incredibly exciting is when it is used in conjunction with kernels.
To motivate the need for kernels, let's look at some data which is not linearly separable:
End of explanation
r = np.exp(-(X[:, 0] ** 2 + X[:, 1] ** 2))
Explanation: Clearly, no linear discrimination will ever separate these data.
One way we can adjust this is to apply a kernel, which is some functional transformation of the input data.
For example, one simple model we could use is a radial basis function:
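In code, that radial basis function is just a Gaussian of the squared distance to a center — the origin here, matching the r computed in the adjacent code cell. A small stdlib sketch; the gamma parameter is our own addition:

```python
import math

def rbf(x, center=(0.0, 0.0), gamma=1.0):
    """Radial basis function: exp(-gamma * ||x - center||^2)."""
    sq_dist = sum((a - c) ** 2 for a, c in zip(x, center))
    return math.exp(-gamma * sq_dist)

print(rbf((0.0, 0.0)))   # 1.0    -- the inner circle maps high
print(rbf((1.0, 1.0)))   # ~0.135 -- the outer ring decays toward 0
```

Because the inner circle gets values near 1 and the outer ring values near 0, a flat plane in this new coordinate separates the two classes.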
End of explanation
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30):
plt.figure(figsize=(8,8))
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50)
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180));
Explanation: If we plot this along with our data, we can see the effect of it:
End of explanation
clf = SVC(kernel='rbf')
clf.fit(X, y)
plt.figure(figsize=(8,8))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plot_svc_decision_function(clf)
Explanation: We can see that with this additional dimension, the data becomes trivially linearly separable!
This is a relatively simple kernel; SVM has a more sophisticated version of this kernel built-in to the process. This is accomplished by using kernel='rbf', short for radial basis function:
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
ACM SIGKDD Austin
Advanced Machine Learning with Python
Class 1
Step1: Ever since I discovered Watermark, I have loved it because it automatically documents the package versions and information about the machine I ran the code on. This is great for reproducibility when sharing your work/results with others.
Why Pre-Process the data?
Scikit Learn - Machine Learning Package for python
- Very easy to run machine learning algorithms
- More difficult to use the algorithms correctly
- Garbage in, garbage out!
So scikit learn is a machine learning package for python. I think it is the gold standard for the way packages should be in the scientific community. It is so easy to pick up quickly and use to build awesome models and create machine learning algorithms with. So it's easy to use, but the real work is in 2 areas, first we need to make sure our data is cleaned and properly formatted for the various machine learning methods, and we need to make sure that the data looks the way that it should for the given model. We want to make sure that we are satisfying the assumptions of the model before using it. Garbage in, garbage out. So this is why we need to preprocess our data.
Various issues in the data can include
Step2: I like seaborn's visualization capability and integration with pandas so I'm going to download the dataset from there instead.
Step3: Filling in Missing Values
Step4: scikit-learn has an Imputer function. This has a fit_transform function and can be part of a pre-processing pipeline.
Step5: Strategy can take values 'mean', 'median', 'most_frequent'. Can also give the Imputer function what the missing value looks like in the data. Sometimes people use -1 or 99999.
Step6: Now something to think about might be whether we want to set the mean of all the data as the imputed value. Looking at the data by the given species, for petal length, the means vary vastly between the 3 species. So we may want to impute using the mean within the species and not over the whole data set.
Step7: Looking at this data, another method could be to use clustering or regression to model the data without missing values and then see where the data with some missing feature values would be. The thing to note about imputing the data in a more specialized way is that the preprocessing.impute() function would need to be called on the data itself and the Imputer could not be part of the pipeline.
Dealing with Different Types (Numeric, Categorical)
Step8: Let's look at a histogram of the numeric variable - pulse
Step9: Obviously this is a simple example; we could just as easily do this with a one-line pandas call, but again the advantage of doing this through scikit-learn is that it can be part of the pipeline.
pandas one liner
Step10: OneHotEncoder expects the data to be numeric, so LabelEncoder would need to be applied first to convert everything to a numeric value.
A way to do it in pandas; because the 1 and 0 labels don't carry any inherent meaning, it doesn't matter which category gets the 1 label and which gets the 0 label.
Step11: Let's make a deep copy of the exercise data frame so we can start modifying it.
Step12: Let's identify the categorical columns
Step13: Now we need to convert the diet, kind, and time columns into "dummy variables"
Step14: It's much easier to visualize what is happening using pandas, so I'll include that here as well.
Step15: So most of the time I do go through the pandas approach because it's more readable, and then at the end I'll use the .values function to get the values out of the data frame. Pandas might be a way to start exploring the data quickly, but once the algorithm is finalized, the scikit-learn pipeline can be what goes into production.
Scaling and Normalizing
Step16: Example of pipeline
Step17: I was excited to find the FunctionTransformer functionality, this means we can create our own modification of the data. This is newly available in scikit-learn v 0.17 which just recently was released.
Python Code:
%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark
%watermark -a "Jaya Zenchenko" -n -t -z -u -h -m -w -v -p scikit-learn,matplotlib,pandas,seaborn,numpy,scipy,conda
Explanation: ACM SIGKDD Austin
Advanced Machine Learning with Python
Class 1: Pre-Model Workflow
Jaya Zenchenko
Nov 18th, 2015
We will primarily be working out of the Scikit Learn Cookbook. I love to use Jupyter notebooks so this presentation will be slides from the notebook. Recently started using the live-reveal extension to make slides of my notebook. You can download it and play around with the examples yourself later. Also a reminder that this course is intended to be at the intermediate level.
Pre-Model Workflow : Scikit Learn Cookbook
Why Pre-process data?
Filling in Missing Values
Dealing with Numerical and Categorical Variables
Scaling/Normalizing
Pipeline for Pre-processing data
References
So we will be going over pre-model workflow primarily out of the scikit learn cookbook, but I'll also be including other examples not in the book. How many people here have used scikit-learn before? How many people here have had experience with data cleaning? I have had some experience with data preprocessing, I have learned a lot by dealing with very messy data. I think the best way to have the importance of data cleaning and preprocessing understood is by dealing with it often. Always learning some new trick or finding some new aberration in data. I would love for others to share as we go through some of these sections if they have examples of atrocious data and how they worked around it.
Install Watermark for Reproducibility:
End of explanation
from sklearn.datasets import load_iris
import matplotlib.pyplot as plt
import numpy
import seaborn as sns
import pandas as pd
%matplotlib inline
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
sns.set()
iris = load_iris()
iris.feature_names
iris.data[0:5]
Explanation: Ever since I discovered Watermark, I have loved it because it automatically documents the package versions and information about the machine I ran the code on. This is great for reproducibility when sharing your work/results with others.
Why Pre-Process the data?
Scikit Learn - Machine Learning Package for python
- Very easy to run machine learning algorithms
- More difficult to use the algorithms correctly
- Garbage in, garbage out!
So scikit learn is a machine learning package for python. I think it is the gold standard for the way packages should be in the scientific community. It is so easy to pick up quickly and use to build awesome models and create machine learning algorithms with. So it's easy to use, but the real work is in 2 areas, first we need to make sure our data is cleaned and properly formatted for the various machine learning methods, and we need to make sure that the data looks the way that it should for the given model. We want to make sure that we are satisfying the assumptions of the model before using it. Garbage in, garbage out. So this is why we need to preprocess our data.
Various issues in the data can include:
- Noisy data (out-of-range, impossible combinations, human error, etc)
- Missing Data (General noise/error, MCAR, MNAR, MAR)
- Data of different types (numeric, categorical)
- Too many attributes
- Too much data
- Data does not fit the assumed characteristic for a given method
For each of these issues, we have different solutions:
- Imputing (filling in missing values)
- Binary Features
- Categorical
- Binarizing
- Correlation Analysis/Chi-Squared Test of Independence
- Aggregating
- Random Sampling
- Scaling/Normalizing
So what kinds of issues can we find in our data? Data has been known to "lie". It comes from various sources such as people, sensors, the internet, and all these have been known to lie sometimes. We can have noisy data, missing data, different types of data, too much data, and data values not as expected. I'll be going over a few examples of these today, primarily ways of filling in missing data, dealing with data of different types, and scaling and normalizing.
Example Datasets:
Download Data Sets:
scikit learn datasets
UCI Machine Learning Repository
Kaggle Data Sets
Local government/open data sets
Create Data Sets:
Create data with specific properties (distributions, number of clusters, noise, etc)
Scikit learn has many built in data sets, it's a good place to start to play with these techniques and other methods for future classes. Other data sets include the UCI Machine Learning repository, Kaggle data sets, and local government/open data sets.
Another option is to create your own fake data set with different properties (distribution, number of clusters, noise, etc). This can be a good approach to test out different algorithms and their performance on data sets that are behaving with the assumptions in mind.
Remember that when using real data, you should always spend enough time doing exploratory data analysis to understand the data before applying different methods.
Download Example Data set from Scikit Learn:
End of explanation
df = sns.load_dataset("iris")
df.head()
Explanation: I like seaborn's visualization capability and integration with pandas so I'm going to download the dataset from there instead.
End of explanation
numpy.random.seed(seed=40)
sample_idx = numpy.random.random_integers(0,df.shape[0]-1, 10 )
feature_idx = numpy.random.random_integers(0,df.shape[1]-2, 10)
print "sample_idx", sample_idx
print "feature_idx", feature_idx
for idx, jdx in zip(sample_idx, feature_idx):
df.ix[idx, jdx] = None
df.head(15)
Explanation: Filling in Missing Values:
Let's randomly select samples to remove so that we have missing values in our data.
End of explanation
imputer = preprocessing.Imputer()
imputer
Explanation: scikit-learn has an Imputer function. This has a fit_transform function and can be part of a pre-processing pipeline.
End of explanation
imputed_df = df.copy()
imputed_data = imputer.fit_transform(df.ix[:,0:4])
imputed_df.ix[:,0:4] = imputed_data
imputed_df.head(15)
print df.mean()
df.groupby('species').mean()
Explanation: Strategy can take values 'mean', 'median', 'most_frequent'. Can also give the Imputer function what the missing value looks like in the data. Sometimes people use -1 or 99999.
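Each strategy is easy to state in plain Python. The following is a conceptual sketch of what the Imputer computes for one column — illustrative only, not scikit-learn's implementation:

```python
from collections import Counter

def impute(values, strategy="mean", missing=None):
    """Fill missing entries in one column with a summary of the observed ones."""
    observed = [v for v in values if v is not None and v != missing]
    if strategy == "mean":
        fill = sum(observed) / len(observed)
    elif strategy == "median":
        s = sorted(observed)
        mid = len(s) // 2
        fill = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    else:  # 'most_frequent'
        fill = Counter(observed).most_common(1)[0][0]
    return [fill if (v is None or v == missing) else v for v in values]

print(impute([5.1, None, 4.9, None, 7.0], "mean"))
# [5.1, 5.666..., 4.9, 5.666..., 7.0]
print(impute([1, -1, 2, 2], "most_frequent", missing=-1))  # [1, 2, 2, 2]
```

The missing argument plays the role of Imputer's missing_values parameter, covering data sets that encode missing entries as sentinels like -1 or 99999.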
End of explanation
sns.pairplot(df, hue="species")
plt.show()
Explanation: Now something to think about might be whether we want to set the mean of all the data as the imputed value. Looking at the data by the given species, for petal length, the means vary vastly between the 3 species. So we may want to impute using the mean within the species and not over the whole data set.
End of explanation
sns.set(style="ticks")
exercise = sns.load_dataset("exercise")
exercise.head()
Explanation: Looking at this data, another method could be to use clustering or regression to model the data without missing values and then see where the data with some missing feature values would be. The thing to note about imputing the data in a more specialized way is that the preprocessing.impute() function would need to be called on the data itself and the Imputer could not be part of the pipeline.
Dealing with Different Types (Numeric, Categorical):
Binarizing:
Binarizing is the process of converting a variable to a 0 or 1 given a certain threshold.
To show an example for binarizing, I wanted to have a data set with both categorical and numeric data. I downloaded an 'exercise' dataset from seaborn.
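The transform itself is a one-liner. A hand-rolled sketch of the idea (scikit-learn's binarize maps values strictly greater than the threshold to 1; ours assumes the same convention):

```python
def binarize(values, threshold):
    """Map each value to 1 if it exceeds the threshold, else 0."""
    return [1 if v > threshold else 0 for v in values]

print(binarize([96, 118, 135], threshold=120))  # [0, 0, 1]
```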
End of explanation
exercise.pulse.hist()
plt.title('Histogram of Pulse')
plt.show()
exercise.ix[:,'high_pulse'] = preprocessing.binarize(exercise.pulse, threshold=120)[0]
exercise.head()
exercise[exercise.high_pulse==1].head()
Explanation: Let's look at a histogram of the numeric variable - pulse:
End of explanation
encoder = preprocessing.LabelEncoder()
exercise.diet.unique()
encoder.fit_transform(exercise.diet)
Explanation: Obviously this is a simple example; we could just as easily do this with a one-line pandas call, but again the advantage of doing this through scikit-learn is that it can be part of the pipeline.
pandas one liner: exercise['high_pulse'] = exercise.pulse>120
Create numerical features from the Categorical:
Since we can't just plug in the exercise data as is into a machine learning algorithm, we need to transform the data so that it only contains numerical data. So there are 2 primary ways of doing this. One is to create a numeric value for each of the categories in a given column, or another way is to create new features very similar to what is called "creating dummy variables" in statistics.
LabelEncoder()
OneHotEncoder()
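Conceptually the two transforms look like this in plain Python — a sketch of the idea, not scikit-learn's implementation (the diet categories are the ones from the exercise dataset):

```python
def label_encode(values):
    """Map each distinct category to an integer (sorted, LabelEncoder-style)."""
    classes = sorted(set(values))
    index = {c: i for i, c in enumerate(classes)}
    return [index[v] for v in values], classes

def one_hot(codes, n_classes):
    """Expand integer codes into one 0/1 indicator column per class."""
    return [[1 if code == j else 0 for j in range(n_classes)] for code in codes]

codes, classes = label_encode(["no fat", "low fat", "low fat", "no fat"])
print(classes)                        # ['low fat', 'no fat']
print(codes)                          # [1, 0, 0, 1]
print(one_hot(codes, len(classes)))   # [[0, 1], [1, 0], [1, 0], [0, 1]]
```

The one-hot step matters because most estimators would otherwise read the integer codes as ordered quantities, which categories are not.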
End of explanation
exercise.diet.cat.codes.head()
Explanation: OneHotEncoder expects the data to be numeric, so LabelEncoder would need to be applied first to convert everything to a numeric value.
A way to do it in pandas; because the 1 and 0 labels don't carry any inherent meaning, it doesn't matter which category gets the 1 label and which gets the 0 label.
End of explanation
exercise_numeric_df = exercise.copy()
exercise.columns
exercise.head()
Explanation: Let's make a deep copy of the exercise data frame so we can start modifying it.
End of explanation
cat_columns = ['diet','kind', 'time']
# Pandas: exercise_numeric_df[cat_columns] = exercise[cat_columns].apply(lambda x: x.cat.codes)
exercise_numeric_df[cat_columns] = exercise[cat_columns].apply(lambda x: encoder.fit_transform(x))
exercise_numeric_df.head()
Explanation: Let's identify the categorical columns:
End of explanation
one_hot_encoder = preprocessing.OneHotEncoder(categorical_features=[2,4,5])
one_hot_encoder
exercise_numeric_encoded_matrix = one_hot_encoder.fit_transform(exercise_numeric_df.values)
exercise_numeric_encoded_matrix.toarray()[0:10,:]
exercise_numeric_encoded_matrix.shape
Explanation: Now we need to convert the diet, kind, and time columns into "dummy variables":
End of explanation
pd.get_dummies(exercise).head()
pd.get_dummies(exercise).shape
Explanation: It's much easier to visualize what is happening using pandas, so I'll include that here as well.
End of explanation
exercise_numeric_encoded_matrix
standard_scaler = preprocessing.StandardScaler(with_mean=True)
standard_scaler
exercise_numeric_encoded_matrix.toarray()[0:5,0:8]
exercise_data_scaled = standard_scaler.fit_transform(exercise_numeric_encoded_matrix.toarray()[:,0:8])
numpy.mean(exercise_data_scaled, axis=0)
numpy.linalg.norm(exercise_data_scaled[0,:])
normalizer = preprocessing.Normalizer()
normalizer
exercise_data_scaled_normalized = normalizer.fit_transform(exercise_data_scaled)
numpy.linalg.norm(exercise_data_scaled_normalized[0,:])
exercise_data_scaled_normalized[0:5,:]
Explanation: So most of the time I do go through the pandas approach because it's more readable, and then at the end I'll use the .values function to get the values out of the data frame. Pandas might be a way to start exploring the data quickly, but once the algorithm is finalized, the scikit-learn pipeline can be what goes into production.
Scaling and Normalizing:
StandardScaler() - z score normalization - subtract the mean, divide by the std.
MinMaxScaler() - data is scaled to a fixed range, usually 0 to 1.
Normalizer() - normalized to have length 1
Z-score normalization rescales the data to zero mean and unit standard deviation, which is the scale many algorithms assume. Standardizing the features so that they are centered around 0 with a standard deviation of 1 is not only important if we are comparing measurements that have different units, but it is also a general requirement for many machine learning algorithms. Min-max scaling compresses the data into a fixed range, which results in smaller standard deviations and can suppress the effect of outliers relative to z-score standardization. Z-score standardization is performed more frequently than min-max scaling. However, min-max scaling is used in image processing, where pixel intensities have to be normalized to fit within a certain range (i.e., 0 to 255 for the RGB color range).
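Each of the three transforms reduces to a few lines. Here is a plain-Python sketch of what each scaler computes — illustrative only, not the scikit-learn implementations:

```python
import math

def z_score(xs):
    """StandardScaler: subtract the mean, divide by the standard deviation."""
    mean = sum(xs) / len(xs)
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return [(x - mean) / std for x in xs]

def min_max(xs):
    """MinMaxScaler: rescale onto the fixed range [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def l2_normalize(row):
    """Normalizer: rescale one sample to unit Euclidean length."""
    norm = math.sqrt(sum(x * x for x in row))
    return [x / norm for x in row]

print(z_score([2.0, 4.0, 6.0]))   # [-1.22..., 0.0, 1.22...]
print(min_max([2.0, 4.0, 6.0]))   # [0.0, 0.5, 1.0]
print(l2_normalize([3.0, 4.0]))   # [0.6, 0.8]
```

Note that the first two operate per feature (column) while the Normalizer operates per sample (row), which is why the code above called numpy.linalg.norm on a row to verify the unit length.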
End of explanation
from sklearn import pipeline
Explanation: Example of pipeline:
preprocessing_pipeline = pipeline.Pipeline([('impute_missing', imputer), ('cat_to_numeric', label_encoder), ('one_hot_encoding', one_hot_encoding), ('standard_scaler', standard_scaler), ('normalizer', normalizer)])
preprocessing_pipeline.fit_transform(X)
FunctionTransformer() - Can create your own transformer function to include in the pipeline
The last item can be an estimator with fit_predict and score functions
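At its core a pipeline is just composition of transforms applied in order. A toy sketch of the chaining idea — not scikit-learn's Pipeline, which also handles fitting, parameters and the final estimator:

```python
def make_pipeline(*steps):
    """Compose named transform functions, applied left to right."""
    def fit_transform(data):
        for name, transform in steps:
            data = transform(data)
        return data
    return fit_transform

pipe = make_pipeline(
    ("impute_missing", lambda xs: [x if x is not None else 0.0 for x in xs]),
    ("scale", lambda xs: [x / max(xs) for x in xs]),
)
print(pipe([2.0, None, 4.0]))  # [0.5, 0.0, 1.0]
```

The FunctionTransformer plays exactly the role of those lambdas: it wraps an arbitrary function so it can sit inside the chain alongside the built-in scalers.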
End of explanation
my_function = preprocessing.FunctionTransformer(func=lambda x: x.toarray()[:,0:8], \
validate=True, accept_sparse=True, pass_y=False)
preprocessing_pipeline = pipeline.Pipeline([('one_hot_encoding', one_hot_encoder), \
('my_function', my_function), \
('standard_scaler', standard_scaler), \
('normalizer', normalizer)])
preprocessing_pipeline.fit_transform(exercise_numeric_df.values)[0:5,:]
Explanation: I was excited to find the FunctionTransformer functionality: it means we can plug our own modification of the data into a pipeline. This is newly available in scikit-learn v0.17, which was just recently released.
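As a tiny standalone sketch (with hypothetical data, not the exercise data frame used above), FunctionTransformer wraps any function of X so it can slot into a pipeline:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# keep only the first two columns of X -- any function of X works here
take_first_two = FunctionTransformer(func=lambda X: X[:, 0:2], validate=True)

X = np.arange(12).reshape(3, 4)
X_small = take_first_two.fit_transform(X)
```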
End of explanation |
7,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orbit Plot
REBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets.
Step1: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
Step2: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!). | Python Code:
import rebound
sim = rebound.Simulation()
sim.add(m=1)
sim.add(m=0.1, e=0.041, a=0.4, inc=0.2, f=0.43, Omega=0.82, omega=2.98)
sim.add(m=1e-3, e=0.24, a=1.0, pomega=2.14)
sim.add(m=1e-3, e=0.24, a=1.5, omega=1.14, l=2.1)
sim.add(a=-2.7, e=1.4, f=-1.5,omega=-0.7) # hyperbolic orbit
Explanation: Orbit Plot
REBOUND comes with a simple way to plot instantaneous orbits of planetary systems. To show how this works, let's setup a test simulation with 4 planets.
End of explanation
%matplotlib inline
fig = rebound.OrbitPlot(sim)
Explanation: To plot these initial orbits in the $xy$-plane, we can simply call the OrbitPlot function and give it the simulation as an argument.
End of explanation
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", color=True, trails=True, periastron=True)
fig = rebound.OrbitPlot(sim, unitlabel="[AU]", periastron=True, lw=2)
Explanation: Note that the OrbitPlot function chooses reasonable limits for the axes for you. There are various ways to customize the plot. Have a look at the arguments used in the following examples, which are pretty much self-explanatory (if in doubt, check the documentation!).
End of explanation |
7,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-LR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD) (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
7,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This quickstart guide explains how to join two tables A and B using TF-IDF
similarity measure. First, you need to import the required packages
as follows (if you have installed py_stringsimjoin it will
automatically install the dependencies py_stringmatching and pandas)
Step1: Joining two tables using TF-IDF measure typically consists of six steps
Step2: 2. Profiling the tables
Before performing the join, we may want to profile the tables to
know about the characteristics of the attributes. This can help identify
Step3: Based on the profile output, we find that the 'Title' attribute in both tables does
not contain any missing values. Hence, for the purpose of this guide, we will now
join tables A and B on 'Title' attribute using TF-IDF measure. Next, we need to decide
on what threshold to use for the join. For this guide, we will use a threshold of 0.5.
Specifically, the join will now find tuple pairs from A and B such that the TF-IDF score
over the 'Title' attributes is at least 0.5.
Naively, performing the join will involve enumerating the cartesian product
AxB (3022 x 3099 = 9365178) and computing TF-IDF score for every pair. But, this can be
very time consuming. Hence, we can optimize by first applying an overlap filter over tables
A and B to find pairs sharing at least one token in the 'Title' attribute. The intuition here
is that in order for TF-IDF score to be above zero, there must be at least one common token
between the attributes. Finally, we apply the TF-IDF measure over the candidate pairs
to obtain the join output.
3. Creating a tokenizer
Since TF-IDF measure treats input strings as bags of tokens, we
need to select a tokenizer which can be used to tokenize each string
into a bag of tokens. Currently, we support tokenizers from py_stringmatching
package which provides five different tokenizer types
Step4: 4. Applying overlap filter
Step5: If you want to include pairs with missing value in the output,
you need to set the allow_missing flag to True when creating
the overlap filter as shown below
Step6: Now, when you apply the filter, pairs with missing values will also
be included in the output.
5. Creating the corpus for TF-IDF matcher
The next step is to create the corpus required for TF-IDF measure.
Specifically, the corpus consists of the list of tokens in the 'Title'
attribute. The corpus can be created as follows
Step7: 6. Applying the TF-IDF matcher
Finally, you need to create and apply the TF-IDF matcher as shown below | Python Code:
# Import libraries
import py_stringsimjoin as ssj
import py_stringmatching as sm
import pandas as pd
import os
import sys
print('python version: ' + sys.version)
print('py_stringsimjoin version: ' + ssj.__version__)
print('py_stringmatching version: ' + sm.__version__)
print('pandas version: ' + pd.__version__)
Explanation: This quickstart guide explains how to join two tables A and B using TF-IDF
similarity measure. First, you need to import the required packages
as follows (if you have installed py_stringsimjoin it will
automatically install the dependencies py_stringmatching and pandas):
End of explanation
# construct the path of the tables to be loaded. Since we are loading a
# dataset from the package, we need to access the data from the path
# where the package is installed. If you need to load your own data, you can directly
# provide your table path to the read_csv command.
table_A_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_table_A.csv.gz'])
table_B_path = os.sep.join([ssj.get_install_path(), 'datasets', 'data', 'books_table_B.csv.gz'])
# Load csv files as dataframes. Since we are reading a compressed csv file,
# we provide the compression argument. If you are reading an uncompressed
# csv file, you should not specify the compression argument.
A = pd.read_csv(table_A_path, compression='gzip')
B = pd.read_csv(table_B_path, compression='gzip')
print('Number of records in A: ' + str(len(A)))
print('Number of records in B: ' + str(len(B)))
A.head(1)
B.head(1)
Explanation: Joining two tables using TF-IDF measure typically consists of six steps:
1. Loading the input tables
2. Profiling the tables
3. Creating a tokenizer
4. Applying overlap filter
5. Creating the corpus for TF-IDF matcher
6. Applying the TF-IDF matcher
1. Loading the input tables
We begin by loading the two tables. For the purpose of this
guide, we use the books dataset that comes with the package.
End of explanation
# profile attributes in table A
ssj.profile_table_for_join(A)
# profile attributes in table B
ssj.profile_table_for_join(B)
Explanation: 2. Profiling the tables
Before performing the join, we may want to profile the tables to
know about the characteristics of the attributes. This can help identify:
a) unique attributes in the table which can be used as key attribute when performing
the join. A key attribute is needed to uniquely identify a tuple.
b) the number of missing values present in each attribute. This can
help you in deciding the attribute on which to perform the join.
For example, an attribute with a lot of missing values may not be a good
join attribute. Further, based on the missing value information you
need to decide on how to handle missing values when performing the join.
You can profile the attributes in a table using the following command:
End of explanation
# create whitespace tokenizer for tokenizing 'Title' attribute
ws = sm.WhitespaceTokenizer()
ws.tokenize('The Maze Runner Series Complete Collection')
Explanation: Based on the profile output, we find that the 'Title' attribute in both tables does
not contain any missing values. Hence, for the purpose of this guide, we will now
join tables A and B on 'Title' attribute using TF-IDF measure. Next, we need to decide
on what threshold to use for the join. For this guide, we will use a threshold of 0.5.
Specifically, the join will now find tuple pairs from A and B such that the TF-IDF score
over the 'Title' attributes is at least 0.5.
Naively, performing the join will involve enumerating the cartesian product
AxB (3022 x 3099 = 9365178) and computing TF-IDF score for every pair. But, this can be
very time consuming. Hence, we can optimize by first applying an overlap filter over tables
A and B to find pairs sharing at least one token in the 'Title' attribute. The intuition here
is that in order for TF-IDF score to be above zero, there must be at least one common token
between the attributes. Finally, we apply the TF-IDF measure over the candidate pairs
to obtain the join output.
3. Creating a tokenizer
Since TF-IDF measure treats input strings as bags of tokens, we
need to select a tokenizer which can be used to tokenize each string
into a bag of tokens. Currently, we support tokenizers from py_stringmatching
package which provides five different tokenizer types: alphabetical tokenizer,
alphanumeric tokenizer, delimiter-based tokenizer, qgram tokenizer,
and whitespace tokenizer.
For the purpose of this guide, we will use a whitespace tokenizer. Once
we have selected a tokenizer type, we need to create a tokenizer object as
shown below:
End of explanation
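As an aside on the other tokenizer types mentioned above: a qgram tokenizer, for example, splits a string into overlapping substrings of length q. A minimal pure-Python sketch of that idea (py_stringmatching's own qgram tokenizer supports additional options, such as padding, that this sketch omits):

```python
def qgrams(s, q=3):
    """Return the overlapping length-q substrings (qgrams) of s."""
    if len(s) < q:
        return [s]
    return [s[i:i + q] for i in range(len(s) - q + 1)]

qgrams('Maze', 2)  # ['Ma', 'az', 'ze']
```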
# create overlap filter with whitespace tokenizer and threshold of 1.
of = ssj.OverlapFilter(ws, 1)
# apply overlap filter to tables A and B to find tuple pairs
# sharing at least 1 token in Title attribute
C = of.filter_tables(A, B, 'ID', 'ID', 'Title', 'Title', n_jobs=-1)
len(C)
C.head(5)
Explanation: 4. Applying overlap filter
End of explanation
of = ssj.OverlapFilter(ws, 1, allow_missing=True)
Explanation: If you want to include pairs with missing value in the output,
you need to set the allow_missing flag to True when creating
the overlap filter as shown below:
End of explanation
# create a list of tokens
A_tokens = A['Title'].apply(ws.tokenize).tolist()
B_tokens = B['Title'].apply(ws.tokenize).tolist()
# merge both the lists of tokens to create the corpus
corpus = A_tokens + B_tokens
Explanation: Now, when you apply the filter, pairs with missing values will also
be included in the output.
5. Creating the corpus for TF-IDF matcher
The next step is to create the corpus required for TF-IDF measure.
Specifically, the corpus consists of the list of tokens in the 'Title'
attribute. The corpus can be created as follows:
End of explanation
# create tf-idf object with the generated corpus
tfidf = sm.TfIdf(corpus, dampen=True)
# apply the matcher with a threshold of 0.5. This will find pairs from C
# with TF-IDF score >= 0.5. Setting n_jobs=-1 exploits all CPU cores available.
output_pairs = ssj.apply_matcher(C, 'l_ID', 'r_ID', A, B, 'ID', 'ID', 'Title', 'Title',
ws, tfidf.get_sim_score, 0.5,
l_out_attrs=['Title'], r_out_attrs=['Title'], n_jobs=-1)
len(output_pairs)
output_pairs.head()
Explanation: 6. Applying the TF-IDF matcher
Finally, you need to create and apply the TF-IDF matcher as shown below:
End of explanation |
7,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Accuracy
Step1: Next, we evaluate scikit-learn accuracy where we predict feed implementation based on latency.
Step2: As you can see, scikit-learn has a 99% accuracy rate. We now do the same thing with tensorflow.
Step3: Looks like tensorflow has a 98% accuracy rate which is 1% less than scikit-learn algo. Let us use Tensorflow to look at the accuracy of predicting cloud vendor based on throughput. | Python Code:
import graphviz
import pandas
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier()
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/throughput.csv")
data = input[input.columns[6:9]]
target = input['cloud']
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test)
Explanation: Comparing Accuracy: scikit-learn vs tensorflow
In this notebook, we train then test the model in a 60 / 40 split for the decision tree algo on both scikit-learn and tensorflow. First, we start with scikit-learn where we predict cloud vendor based on throughput.
End of explanation
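As a reminder of what is being compared here: for classifiers, scikit-learn's `score` returns the mean accuracy, i.e. the fraction of predictions that match the true labels. A minimal sketch of that computation (the label values are made up):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    matches = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return matches / len(y_true)

accuracy(['GKE', 'EKS', 'GKE'], ['GKE', 'EKS', 'AKS'])  # 2 of 3 correct
```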
import graphviz
import pandas
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier()
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
target = input['feed']
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
clf = clf.fit(X_train, y_train)
clf.score(X_test, y_test)
Explanation: Next, we evaluate scikit-learn accuracy where we predict feed implementation based on latency.
End of explanation
import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/latency.csv")
data = input[input.columns[7:9]]
data['cloud'] = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, input['feed'], test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=7,
                                      num_features=3,
                                      num_trees=1,
                                      regression=False,
                                      max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np)
Explanation: As you can see, scikit-learn has a 99% accuracy rate. We now do the same thing with tensorflow.
End of explanation
import tensorflow as tf
import numpy as np
import pandas
from tensorflow.python.ops import parsing_ops
from tensorflow.contrib.tensor_forest.python import tensor_forest
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from sklearn.model_selection import train_test_split
input = pandas.read_csv("/home/glenn/git/clojure-news-feed/client/ml/etl/throughput.csv")
data = input[input.columns[6:9]]
target = input['cloud'].apply(lambda x: 1.0 if x == 'GKE' else 0.0)
X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.4, random_state=0)
X_train_np = np.array(X_train, dtype=np.float32)
y_train_np = np.array(y_train, dtype=np.int32)
X_test_np = np.array(X_test, dtype=np.float32)
y_test_np = np.array(y_test, dtype=np.int32)
hparams = tensor_forest.ForestHParams(num_classes=3,
                                      num_features=3,
                                      num_trees=1,
                                      regression=False,
                                      max_nodes=500).fill()
classifier = tf.contrib.tensor_forest.client.random_forest.TensorForestEstimator(hparams)
c = classifier.fit(x=X_train_np, y=y_train_np)
c.evaluate(x=X_test_np, y=y_test_np)
Explanation: Looks like tensorflow has a 98% accuracy rate which is 1% less than scikit-learn algo. Let us use Tensorflow to look at the accuracy of predicting cloud vendor based on throughput.
End of explanation |
7,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apply logistic regression to categorize whether a county had a high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: 3. Calculate the basic descriptive statistics on the data
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step 01. Need to prepare features
Step4: 5. Create a logistic regression model
Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50 | Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
Explanation: Apply logistic regression to categorize whether a county had a high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
df = pd.read_csv('data/hanford.csv')
df.head()
df['Mortality'] = [ float(x) for x in df['Mortality']]
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
df.describe()
df.info()
Explanation: 3. Calculate the basic descriptive statistics on the data
End of explanation
def high_exposure(x):
    if x > 6.41:
        return 1
    else:
        return 0
df['Exposure_classification'] = df['Exposure'].apply(high_exposure)
df.head()
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
Step 01. Need to prepare features
End of explanation
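One common way to pick such a cutoff is a quantile of the exposure distribution (whether the 6.41 used below was chosen that way is an assumption; it may simply be one of the observed values). For reference, `pandas`' `quantile` uses linear interpolation by default, which can be sketched as:

```python
def quantile(values, q):
    """Quantile with linear interpolation (pandas' default behaviour)."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] * (1 - frac) + s[hi] * frac

quantile([1, 2, 3, 4], 0.75)  # 3.25
```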
from sklearn.linear_model import LogisticRegression
lm = LogisticRegression()
x = np.asarray(df[['Mortality']])
y = np.asarray(df['Exposure_classification'])
x
y
lm = lm.fit(x,y)
lm.score(x,y)
lm.coef_
lm.intercept_
Explanation: 5. Create a logistic regression model
End of explanation
# note: as fit above, the model maps Mortality -> exposure class, and
# scikit-learn expects a 2D array (one row per sample, one column per feature)
lm.predict([[0], [0], [1]])
Explanation: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
End of explanation |
7,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example time line of chords represented as binary pitch class sets in a single song
Step1: Distribution of pitch classes across all songs
Step2: Observation
Step3: Distribution of chords usage
Step4: Distribution of chord root tones (ie. chord stripped of their quality) - excluding silence
Step5: Still, [A,D,G,E,C] is the set of the most favored pitch classes.
Step6: Distribution of chord segment duration
Step7: Let's do some machine learning | Python Code:
draw_track(find_track('Yesterday')[0])
Explanation: Example time line of chords represented as binary pitch class sets in a single song:
End of explanation
pc_histogram = pd.DataFrame({'pitch_class': pcs_columns, 'relative_count': nonsilent_chords[pcs_columns].mean()})
stem(pc_histogram['relative_count'])
gca().set_xticks(np.arange(12))
gca().set_xticklabels(pcs_columns);
pc_histogram.sort_values('relative_count', ascending=False, inplace=True)
plot(pc_histogram['relative_count'],'o:')
gca().set_xticks(np.arange(12))
gca().set_xticklabels(pc_histogram['pitch_class']);
ylim(0, 1);
xlim(-.1, 11.1);
Explanation: Distribution of pitch classes across all songs:
End of explanation
chord_histogram = all_chords['label'].value_counts()
chord_histogram
print('number of unique chords (including silence):', len(chord_histogram))
Explanation: Observation: Five most used pitch classes in Beatles songs are A, E, D, B and G.
End of explanation
plot(chord_histogram);
Explanation: Distribution of chords usage:
End of explanation
chord_root_histogram = nonsilent_chords['root'].value_counts()
# convert index from integers to symbolic names
chord_root_histogram.index = pd.Series(pcs_columns)[chord_root_histogram.index].values
chord_root_histogram
Explanation: Distribution of chord root tones (ie. chord stripped of their quality) - excluding silence:
End of explanation
#all_chords[pcs_columns + ['track_id']]
all_chords
Explanation: Still, [A,D,G,E,C] is the set of the most favored pitch classes.
End of explanation
duration = all_chords['duration']
duration.hist(bins=100);
sns.distplot(duration[duration < 10], bins=100)
xlabel('duration (sec)');
Explanation: Distribution of chord segment duration:
End of explanation
X = all_chords[['duration'] + pcs_columns].astype(np.float32)
y = all_chords['track_id'].astype(np.int32)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.25, random_state=42)
len(X_train), len(X_valid), len(X_test)
from sklearn.linear_model import LogisticRegression
lr_model = LogisticRegression()
lr_model.fit(X_train, y_train)
from sklearn.metrics import classification_report, confusion_matrix
y_pred_lr = lr_model.predict(X_valid)
lr_model.score(X_valid, y_valid)
print(classification_report(y_valid, y_pred_lr))
matshow(confusion_matrix(y_valid, y_pred_lr), cmap=cm.Spectral_r)
colorbar();
import theanets
import climate # some utilities for command line interfaces
climate.enable_default_logging()
exp = theanets.Experiment(
    theanets.Classifier,
    layers=(13, 50, 180),
    hidden_l1=0.1)
exp.train(
    (X_train, y_train),
    (X_valid, y_valid),
    optimize='rmsprop',
    learning_rate=0.01,
    momentum=0.5)
y_pred_nn = exp.network.classify(X_valid)
y_pred_nn
print(classification_report(y_valid, y_pred_nn))
matshow(confusion_matrix(y_valid, y_pred_nn), cmap=cm.Spectral_r)
colorbar();
Explanation: Let's do some machine learning :)
Task: predict track id (song) - based on chords and segment duration
Start with independent data points (no context) and logistic regression
First prepare the data into training, validation and test set.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example shows how to use a k-NN classifier to classify a dataset. We will first generate a binary classification dataset consisting of 2D feature vectors, randomly sampled from two Gaussian distributions. We will then learn a k-NN classifier to separate the two classes.
Step1: Let's create our 2D dataset D by selecting samples for each feature X1 and X2 as follows. We take random samples from a 2D Gaussian for each class. We use two Gaussians that differ by their means only; the covariances are the same.
Step2: The next few lines enable plotting within IPython. A nice feature to have, but it has nothing to do with k-NN, so do not worry too much about matplotlib.
Step3: Now that we have imported matplotlib, which is a plotting utility, let's visualize our two classes: positives as blue dots and negatives as red dots.
Step4: They do look decently separated. We might be able to learn a binary classifier using k-NNs.
Let's split our dataset into equal train and test portions.
Step5: To confirm that the train and test datasets have equal numbers of pos and neg instances, let's print the number of instances in each list.
Step6: Let's assign pos (+1) and neg (-1) labels to our train and test instances.
Step7: To see that we have properly made tuples of (label, instance), let's print the train and test data.
Step8: Before we do the actual k-NN implementation, let's create the cosine similarity function that we will use to find the neighbours.
Step9: Now let's implement a function that predicts the label of a test instance using the k-NN algorithm.
Step10: Take a moment to study the predict function. k-NN happens here. We are given k and the instance to be classified, x. The first thing we do is computing the similarity scores between x and each instance z in our train dataset. We must also store the labels so that we can later find the majority label.
Next, we need to find the neighbours. For that we sort this list of tuples by the value of the second item in tuples, which is similarity. lambda expressions are convenient ways to write in-place functions. Here, we take two elements from our list, compare their similarity scores and return -1 or +1. The sort function will then use this to sort the list. In this case, it will sort in the descending order of similarity scores.
If you would like to confirm that it is indeed the descending order you can print the list after sorting (uncomment that line).
Next, we must find the majority label. Since we are doing binary classification and our labels are -1 and +1, when we add the labels of the nearest neighbours, a positive sum means there are more +1s than -1s, and vice versa. You might have to do more complicated bookkeeping to find the majority label if there were more than 2 classes, but it is easy for the binary case as shown here.
Step11: Let's compute the accuracy of our k-NN classifier.
Python Code:
import numpy
N = 5
Explanation: This example shows how to use a k-NN classifier to classify a dataset. We will first generate a binary classification dataset consisting of 2D feature vectors, randomly sampled from two Gaussian distributions. We will then learn a k-NN classifier to separate the two classes.
End of explanation
pos = numpy.random.multivariate_normal([0,0], [[1,0],[0,1]], 2 * N)
neg = numpy.random.multivariate_normal([2,2], [[1,0],[0,1]], 2 * N)
Explanation: Let's create our 2D dataset D by selecting samples for each feature X1 and X2 as follows. We take random samples from a 2D Gaussian for each class. We use two Gaussians that differ by their means only; the covariances are the same.
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
Explanation: The next few lines enable plotting within IPython. A nice feature to have, but it has nothing to do with k-NN, so do not worry too much about matplotlib.
End of explanation
plt.scatter(neg[:,0], neg[:,1], c='r')
plt.scatter(pos[:,0], pos[:,1], c='b')
Explanation: Now that we have imported matplotlib, which is a plotting utility, let's visualize our two classes: positives as blue dots and negatives as red dots.
End of explanation
train_pos = pos[:N,:]
test_pos = pos[N:,:]
train_neg = neg[:N,:]
test_neg = neg[N:,:]
Explanation: They do look decently separated. We might be able to learn a binary classifier using k-NNs.
Let's split our dataset into equal train and test portions.
End of explanation
print len(train_pos), len(train_neg)
print len(test_pos), len(test_neg)
Explanation: To confirm that the train and test datasets have equal numbers of pos and neg instances, let's print the number of instances in each list.
End of explanation
train_data = [(1, x) for x in train_pos]
train_data.extend([(-1, x) for x in train_neg])
test_data = [(1, x) for x in test_pos]
test_data.extend([(-1, x) for x in test_neg])
Explanation: Let's assign pos (+1) and neg (-1) labels to our train and test instances.
End of explanation
print train_data
print test_data
Explanation: To see that we have properly made tuples of (label, instance), let's print the train and test data.
End of explanation
def sim(p, q):
score = numpy.dot(p,q) / (numpy.linalg.norm(p) * numpy.linalg.norm(q))
return score
Explanation: Before we do the actual k-NN implementation, let's create the cosine similarity function that we will use to find the neighbours.
End of explanation
def predict(x, k):
L = [(y, sim(x, z)) for (y,z) in train_data]
L.sort(lambda a,b: -1 if a[1] > b[1] else 1)
#print L[:k]
score = sum([e[0] for e in L[:k]])
if score > 0:
return 1
else:
return -1
Explanation: Now let's implement a function that predicts the label of a test instance using the k-NN algorithm.
End of explanation
print predict(test_data[0][1], 5)
Explanation: Take a moment to study the predict function. k-NN happens here. We are given k and the instance to be classified, x. The first thing we do is computing the similarity scores between x and each instance z in our train dataset. We must also store the labels so that we can later find the majority label.
Next, we need to find the neighbours. For that we sort this list of tuples by the value of the second item in tuples, which is similarity. lambda expressions are convenient ways to write in-place functions. Here, we take two elements from our list, compare their similarity scores and return -1 or +1. The sort function will then use this to sort the list. In this case, it will sort in the descending order of similarity scores.
If you would like to confirm that it is indeed the descending order you can print the list after sorting (uncomment that line).
Next, we must find the majority label. Since we are doing binary classification and our labels are -1 and +1, when we add the labels of the nearest neighbours, a positive sum means there are more +1s than -1s, and vice versa. You might have to do more complicated bookkeeping to find the majority label if there were more than 2 classes, but it is easy for the binary case as shown here.
End of explanation
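The sort-then-vote trick just described can be isolated in a tiny sketch. The (label, similarity) tuples below are made up, and it uses a modern key=-based sort rather than the cmp-style sort in the notebook code:

```python
# Toy illustration of the two tricks described above, with invented
# (label, similarity) tuples instead of real cosine scores.
neighbours = [(1, 0.9), (-1, 0.7), (1, 0.4), (-1, 0.2)]
neighbours.sort(key=lambda t: t[1], reverse=True)  # descending similarity
k = 3
label_sum = sum(label for label, _ in neighbours[:k])  # 1 - 1 + 1 = 1
majority = 1 if label_sum > 0 else -1
print(majority)  # 1
```

With the three nearest neighbours holding two +1 labels and one -1, the sum is positive and the majority vote is +1.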
corrects = 0
k = 5
for (y,x) in test_data:
if y == predict(x, k):
corrects += 1
accuracy = float(corrects) / float(len(test_data))
print "Accuracy =", accuracy
Explanation: Let's compute the accuracy of our k-NN classifier.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Initial assessment of how accuracy of DESeq2 changes with the BD window used to select 'heavy' fraction samples
Just testing on the 'validation' run, which has only 100% incorporators
User variables
Step1: Init
Step2: Using nestly
Step3: Plotting results
Step4: SANDBOX
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/'
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
Explanation: Goal
Initial assessment of how accuracy of DESeq2 changes with the BD window used to select 'heavy' fraction samples
Just testing on 'validation' run which just has 100% incorporators
User variables
End of explanation
import os
import glob
import itertools
from os.path import abspath

import numpy as np
import nestly
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
Explanation: Init
End of explanation
# building tree structure
nest = nestly.Nest()
## params that vary
BD_min = np.arange(1.67, 1.77, 0.02).tolist()
BD_max = [x + 0.04 for x in BD_min]
f = lambda x: {'BD_range': str(x[0]) + '-' + str(x[1]),
'BD_min':x[0],
'BD_max':x[1]}
BD_range = [f(x) for x in itertools.product(BD_min, BD_max)
if x[0] < x[1]]
nest.add('BD_range', BD_range, update=True)
## set params
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
## input/output files
### phyloseq & BD-shift files
nest.add('inFileDir', [os.path.join(workDir, 'validation')], create_dir=False)
nest.add('phyloseq', ['OTU_n2_abs1e10_sub20000'], create_dir=False)
nest.add('BD_shift', ['ampFrags_kde_dif_incorp_BD-shift.txt'], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
# building directory tree
buildDir = os.path.join(workDir, 'DESeq_BD-range')
nest.build(buildDir)
bashFile = os.path.join(workDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
# copy input files to prevent parallel file reading errors
cp {inFileDir}/{phyloseq}.physeq {phyloseq}.physeq
cp {inFileDir}/{BD_shift} {BD_shift}
#-- R analysis --#
export PATH={R_dir}:$PATH
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
{phyloseq}.physeq \
--BD_min {BD_min} \
--BD_max {BD_max} \
> {phyloseq}_filt.physeq
## making ordination
phyloseq_ordination.r \
{phyloseq}_filt.physeq \
{phyloseq}_bray-NMDS.pdf
## DESeq2
phyloseq_DESeq2.r \
{phyloseq}_filt.physeq \
--log2 {log2} \
--hypo greater \
> {phyloseq}_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
{BD_shift} \
{phyloseq}_DESeq2 \
--padj {padj}
!chmod 775 $bashFile
!cd $workDir; \
nestrun -j 30 --template-file $bashFile -d DESeq_BD-range --log-file log.txt
# aggregating confusion matrix data
## table
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max \
-o ./DESeq_BD-range/DESeq2-cMtx_table.csv \
DESeq2-cMtx_table.csv
## overall
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max\
-o ./DESeq_BD-range/DESeq2-cMtx_overall.csv \
DESeq2-cMtx_overall.csv
## byClass
!cd $workDir; \
nestagg delim \
-d DESeq_BD-range \
-k BD_min,BD_max \
-o ./DESeq_BD-range/DESeq2-cMtx_byClass.csv \
DESeq2-cMtx_byClass.csv
Explanation: Using nestly: different BD ranges
End of explanation
%%R -i workDir -w 600 -h 600
setwd(workDir)
byClass = read.csv('./DESeq_BD-range/DESeq2-cMtx_byClass.csv')
byClass$byClass[is.na(byClass$byClass)] = 0
cat.str = function(x,y, col=':'){
x = as.character(x)
y = as.character(y)
z = paste(c(x,y),collapse=col)
return(z)
}
byClass = byClass %>%
mutate(BD_range = mapply(cat.str, BD_min, BD_max)) %>%
mutate(BD_min = as.character(BD_min),
BD_max = as.character(BD_max))
p = ggplot(byClass, aes(X, byClass, color=BD_max)) +
geom_point(alpha=0.5, size=3) +
labs(y='value') +
facet_grid(BD_min ~ .) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5),
axis.title.x=element_blank()
)
p
%%R -w 850 -h 300
col2keep = c('Balanced Accuracy', 'Sensitivity','Specificity')
byClass.f = byClass %>%
filter(X %in% col2keep) %>%
mutate(BD_min = as.numeric(BD_min),
BD_max = as.numeric(BD_max),
byClass = as.numeric(byClass),
byClass.inv = 1 - byClass)
byClass.fs = byClass.f %>%
group_by(X) %>%
summarize(byClass.max = max(byClass))
just.true = function(x){
if(x == TRUE){
return(1)
} else{
return(NA)
}
}
byClass.j = inner_join(byClass.f, byClass.fs, c('X' = 'X')) %>%
mutate(max_val = as.numeric(byClass == byClass.max),
byClass.txt = round(byClass, 2))
byClass.jf = byClass.j %>%
filter(max_val == 1)
x.breaks = unique(byClass.j$BD_min)
p = ggplot(byClass.j, aes(BD_min, BD_max, fill=byClass.inv)) +
geom_tile() +
geom_text(aes(label=byClass.txt), color=c('white'), size=4) +
geom_text(data=byClass.jf, aes(label=byClass.txt), color=c('red'), size=4) +
scale_x_continuous(breaks=x.breaks) +
labs(x='Minimum Buoyant Density', y='Maximum Buoyant Density') +
facet_grid(. ~ X) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5)
)
p
Explanation: Plotting results
End of explanation
%%R -i workDir -w 600 -h 700
setwd(workDir)
byClass = read.csv('./DESeq_BD-range/DESeq2-cMtx_byClass.csv')
byClass$byClass[is.na(byClass$byClass)] = 0
#byClass$percIncorp = factor(byClass$percIncorp, levels=as.character(unique(sort(byClass$percIncorp))))
cat.str = function(x,y, col=':'){
x = as.character(x)
y = as.character(y)
z = paste(c(x,y),collapse=col)
return(z)
}
byClass = byClass %>%
mutate(BD_range = mapply(cat.str, BD_min, BD_max))
p = ggplot(byClass, aes(byClass, ymin=BD_min, ymax=BD_max, color=BD_range)) +
geom_linerange() +
labs(y='value') +
facet_grid(X ~ .) +
#coord_flip() +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5),
axis.title.x=element_blank()
)
p
%%bash -s $workDir
cd $1
export PATH=/home/nick/notebook/SIPSim/lib/R/:$PATH
## symlinking OTU subsample phyloseq file
# done by NESTLY?
# files: FILE.physeq, ampFrags_kde_dif_incorp_BD-shift.txt
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
{otu_table}.physeq \
--BD_min {BD_min} --BD_max {BD_max} \
> {otu_table}_filt.physeq
# Chuck's method
## DESeq2
phyloseq_DESeq2.r \
{otu_table}_filt.physeq \
--log2 0.25 \
> {otu_table}_filt_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
ampFrags_kde_dif_incorp_BD-shift.txt \
{otu_table}_filt_DESeq2 \
--padjBH 0.1
# altHypothesis = 'greater'
## DESeq2
phyloseq_DESeq2.r \
OTU_n2_abs1e10_sub20000_filt.physeq \
--log2 0.25 \
--hypo greater \
> OTU_n2_abs1e10_sub20000_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
ampFrags_kde_dif_incorp_BD-shift.txt \
OTU_n2_abs1e10_sub20000_DESeq2 \
--padj 0.1
%%writefile $bashFile
#!/bin/bash
#-- R analysis --#
export PATH={R_dir}:$PATH
# plotting taxon abundances
OTU_taxonAbund.r \
OTU_n2_abs{abs}_sub{subsample}.txt \
-r {topTaxaToPlot} \
-o OTU_n2_abs{abs}_sub{subsample}
# running DeSeq2 and making confusion matrix on predicting incorporators
## making phyloseq object from OTU table
phyloseq_make.r \
OTU_n2_abs{abs}_sub{subsample}_w.txt \
-s OTU_n2_abs{abs}_sub{subsample}_meta.txt \
> OTU_n2_abs{abs}_sub{subsample}.physeq
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
OTU_n2_abs{abs}_sub{subsample}.physeq \
--BD_min {BD_min} --BD_max {BD_max} \
> OTU_n2_abs{abs}_sub{subsample}_filt.physeq
## making ordination
phyloseq_ordination.r \
OTU_n2_abs{abs}_sub{subsample}_filt.physeq \
OTU_n2_abs{abs}_sub{subsample}_bray-NMDS.pdf
## DESeq2
phyloseq_DESeq2.r \
OTU_n2_abs{abs}_sub{subsample}_filt.physeq \
> OTU_n2_abs{abs}_sub{subsample}_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
{fileName}_kde_dif_incorp_BD-shift.txt \
OTU_n2_abs{abs}_sub{subsample}_DESeq2 \
--padj {padj} --log2 {log2}
Explanation: SANDBOX
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1b. Fixed flux spinodal decomposition on a square domain
Use Binder For Live Examples
Define $f_0$
Define the Equation
Solve the Equation
Run the Example Locally
Movie of Evolution
1b. Fixed flux spinodal decomposition on a square domain
Use Binder For Live Examples
The free energy is given by,
$$ f_0\left[ c \left( \vec{r} \right) \right] =
- \frac{A}{2} \left(c - c_m\right)^2
+ \frac{B}{4} \left(c - c_m\right)^4
+ \frac{c_{\alpha}}{4} \left(c - c_{\alpha} \right)^4
+ \frac{c_{\beta}}{4} \left(c - c_{\beta} \right)^4 $$
In FiPy we write the evolution equation as
$$ \frac{\partial c}{\partial t} = \nabla \cdot \left[
D \left( c \right) \left( \frac{ \partial^2 f_0 }{ \partial c^2} \nabla c - \kappa \nabla \nabla^2 c \right)
\right] $$
Let's start by calculating $ \frac{ \partial^2 f_0 }{ \partial c^2} $ using sympy. It's easy for this case, but useful in the general case for taking care of difficult book keeping in phase field problems.
Step1: The first step in implementing any problem in FiPy is to define the mesh. For Problem 1b the solution domain is just a square domain with fixed (zero) flux boundary conditions. Since these are FiPy's default, a plain Grid2D object is used and no other boundary conditions are required.
Step2: The next step is to define the parameters and create a solution variable.
Step3: Now we need to define the initial conditions given by,
Set $c\left(\vec{r}, t\right)$ such that
$$ c\left(\vec{r}, 0\right) = \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) $$
Step4: Define $f_0$
To define the equation with FiPy first define f_0 in terms of FiPy. Recall f_0 from above calculated using Sympy. Here we use the string representation and set it equal to f_0_var using the exec command.
Step5: Define the Equation
Step6: Solve the Equation
To solve the equation, a simple time stepping scheme is used in which the step is decreased or increased depending on whether the residual grows or shrinks. A time step is recalculated if the required tolerance is not reached.
Step7: Run the Example Locally
The following cell will dump a file called fipy_hackathon_1b.py to the local file system to be run. The images are saved out at each time step.
Step8: Movie of Evolution
The movie of the evolution for 300 steps.
The movie was generated with the output files of the form image*.png using the following commands,
$ rename 's/\d+/sprintf("%05d",$&)/e' image*
$ ffmpeg -f image2 -r 6 -i 'image%05d.png' output.mp4
Python Code:
%matplotlib inline
import sympy
import fipy as fp
import numpy as np
A, c, c_m, B, c_alpha, c_beta = sympy.symbols("A c_var c_m B c_alpha c_beta")
f_0 = - A / 2 * (c - c_m)**2 + B / 4 * (c - c_m)**4 + c_alpha / 4 * (c - c_alpha)**4 + c_beta / 4 * (c - c_beta)**4
print f_0
sympy.diff(f_0, c, 2)
Explanation: Table of Contents
1b. Fixed flux spinodal decomposition on a square domain
Use Binder For Live Examples
Define $f_0$
Define the Equation
Solve the Equation
Run the Example Locally
Movie of Evolution
1b. Fixed flux spinodal decomposition on a square domain
Use Binder For Live Examples
The free energy is given by,
$$ f_0\left[ c \left( \vec{r} \right) \right] =
- \frac{A}{2} \left(c - c_m\right)^2
+ \frac{B}{4} \left(c - c_m\right)^4
+ \frac{c_{\alpha}}{4} \left(c - c_{\alpha} \right)^4
+ \frac{c_{\beta}}{4} \left(c - c_{\beta} \right)^4 $$
In FiPy we write the evolution equation as
$$ \frac{\partial c}{\partial t} = \nabla \cdot \left[
D \left( c \right) \left( \frac{ \partial^2 f_0 }{ \partial c^2} \nabla c - \kappa \nabla \nabla^2 c \right)
\right] $$
Let's start by calculating $ \frac{ \partial^2 f_0 }{ \partial c^2} $ using sympy. It's easy for this case, but useful in the general case for taking care of difficult book keeping in phase field problems.
End of explanation
mesh = fp.Grid2D(nx=50, ny=50, dx=0.5, dy=0.5)
Explanation: The first step in implementing any problem in FiPy is to define the mesh. For Problem 1b the solution domain is just a square domain with fixed (zero) flux boundary conditions. Since these are FiPy's default, a plain Grid2D object is used and no other boundary conditions are required.
End of explanation
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
Explanation: The next step is to define the parameters and create a solution variable.
End of explanation
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
viewer = fp.Viewer(c_var)
Explanation: Now we need to define the initial conditions given by,
Set $c\left(\vec{r}, t\right)$ such that
$$ c\left(\vec{r}, 0\right) = \bar{c}_0 + \epsilon \cos \left( \vec{q} \cdot \vec{r} \right) $$
End of explanation
out = sympy.diff(f_0, c, 2)
exec "f_0_var = " + repr(out)
#f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
f_0_var
Explanation: Define $f_0$
To define the equation with FiPy first define f_0 in terms of FiPy. Recall f_0 from above calculated using Sympy. Here we use the string representation and set it equal to f_0_var using the exec command.
End of explanation
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
eqn
Explanation: Define the Equation
End of explanation
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 100
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
viewer.plot()
print 'elapsed_time:',elapsed
Explanation: Solve the Equation
To solve the equation, a simple time stepping scheme is used in which the step is decreased or increased depending on whether the residual grows or shrinks. A time step is recalculated if the required tolerance is not reached.
End of explanation
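The accept/reject step control used above can be isolated from FiPy. In this toy sketch the sweeps are replaced by a fake residual (so only the control flow remains); the constants mirror the notebook, but the physics does not:

```python
# Stand-alone sketch of the adaptive time-step loop described above.
def fake_sweep(dt):
    return 0.05 * dt  # pretend the residual scales with the step size

tolerance = 1e-1
dt, elapsed, steps = 0.01, 0.0, 0
while steps < 5:
    res0 = fake_sweep(dt)          # first sweep sets the reference residual
    res = tolerance * 0.5 * res0   # pretend later sweeps reduced it further
    if res < res0 * tolerance:     # converged enough: accept and grow dt
        steps += 1
        elapsed += dt
        dt *= 1.1
    else:                          # not converged: shrink dt and retry
        dt *= 0.8
print(steps, round(dt, 6))
```

Because the fake residual always satisfies the tolerance here, every step is accepted and dt grows by a factor of 1.1 each time; a rejected step would instead shrink dt by 0.8 and repeat.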
%%writefile fipy_hackathon_1b.py
import fipy as fp
import numpy as np
mesh = fp.Grid2D(nx=400, ny=400, dx=0.5, dy=0.5)
c_alpha = 0.05
c_beta = 0.95
A = 2.0
kappa = 2.0
c_m = (c_alpha + c_beta) / 2.
B = A / (c_alpha - c_m)**2
D = D_alpha = D_beta = 2. / (c_beta - c_alpha)
c_0 = 0.45
q = np.sqrt((2., 3.))
epsilon = 0.01
c_var = fp.CellVariable(mesh=mesh, name=r"$c$", hasOld=True)
r = np.array((mesh.x, mesh.y))
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
f_0_var = -A + 3*B*(c_var - c_m)**2 + 3*c_alpha*(c_var - c_alpha)**2 + 3*c_beta*(c_var - c_beta)**2
eqn = fp.TransientTerm(coeff=1.) == fp.DiffusionTerm(D * f_0_var) - fp.DiffusionTerm((D, kappa))
elapsed = 0.0
steps = 0
dt = 0.01
total_sweeps = 2
tolerance = 1e-1
total_steps = 300
c_var[:] = c_0 + epsilon * np.cos((q[:, None] * r).sum(0))
c_var.updateOld()
from fipy.solvers.pysparse import LinearLUSolver as Solver
solver = Solver()
viewer = fp.Viewer(c_var)
while steps < total_steps:
res0 = eqn.sweep(c_var, dt=dt, solver=solver)
for sweeps in range(total_sweeps):
res = eqn.sweep(c_var, dt=dt, solver=solver)
print ' '
print 'steps',steps
print 'res',res
print 'sweeps',sweeps
print 'dt',dt
if res < res0 * tolerance:
steps += 1
elapsed += dt
dt *= 1.1
if steps % 1 == 0:
viewer.plot('image{0}.png'.format(steps))
c_var.updateOld()
else:
dt *= 0.8
c_var[:] = c_var.old
Explanation: Run the Example Locally
The following cell will dump a file called fipy_hackathon_1b.py to the local file system to be run. The images are saved out at each time step.
End of explanation
from IPython.display import YouTubeVideo
scale = 1.5
YouTubeVideo('S-EYiO0EltM', width=420 * scale, height=315 * scale, rel=0)
Explanation: Movie of Evolution
The movie of the evolution for 300 steps.
The movie was generated with the output files of the form image*.png using the following commands,
$ rename 's/\d+/sprintf("%05d",$&)/e' image*
$ ffmpeg -f image2 -r 6 -i 'image%05d.png' output.mp4
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index
Modeling
E-R Diagrams
Relational model
SQL
SELECT
JOIN
Exercises
Using SQLAlchemy
Why do we normalize?
SQL security issues
1) Modeling
When the amount or the complexity of the data we work with overwhelms us, we look for tools able to help us. Databases are one of the greatest tools for this job. There are several kinds of them, depending on the data they are designed to work with:
Step1: Solution (Relational model)
Step2: 2) SQL
Once the relational model has been defined it needs to be implemented into the relational database.
Most interactions with relational databases are done through textual commands using a specific declarative language called SQL (Structured Query Language).
2.A) SELECT
SQL is very powerful, but for this course we will only have a look at the SELECT statement to filter, group, aggregate and retrieve information from the database.
Here is a simplified syntax of the SELECT statement
Step3: Select the brand, model and price of the cheapest shoe
Step4: Count how many different models each brand has
Step5: Select how many pairs of shoes were sold in order number 4521
Step6: How many shoes have been sold per brand?
Step7: And per brand and model?
Step8: Select the shipping address of the last 3 orders placed by the customer named 'Joan Clarke'
Step9: Select the brand, model, color and size of all shoes ever bought by the customer named 'Grace Hopper'
Step10: How much did she spend?
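The course's shop.sqlite is not reproduced here, but the first two exercises above can be sketched against a throwaway in-memory table (schema and rows invented for this sketch):

```python
import sqlite3

# Throwaway stand-in for the shop database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shoe (brand TEXT, model TEXT, price REAL)")
conn.executemany("INSERT INTO shoe VALUES (?, ?, ?)",
                 [("Nike", "Air", 90.0), ("Puma", "Run", 45.0),
                  ("Asics", "Gel", 60.0)])

# Cheapest shoe: ORDER BY price, keep a single row.
cheapest = conn.execute(
    "SELECT brand, model, price FROM shoe ORDER BY price LIMIT 1").fetchone()

# Models per brand: GROUP BY + COUNT.
per_brand = conn.execute(
    "SELECT brand, COUNT(*) FROM shoe GROUP BY brand ORDER BY brand").fetchall()

print(cheapest)   # ('Puma', 'Run', 45.0)
print(per_brand)  # [('Asics', 1), ('Nike', 1), ('Puma', 1)]
```

The same ORDER BY / LIMIT and GROUP BY patterns carry over to the remaining exercises once the real schema is in place.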
Step11: 3) Using SQLAlchemy
SQLAlchemy is a Python database toolkit that provides a unified and very powerful API for database access.
These are the steps to connect to a database and issue SQL statements
Step12: 2. Create an Engine and establish a connection
Step13: 3. Build the SQL statement
Step14: 4. Execute and retrieve results
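SQLAlchemy's own API is not reproduced here, but the same four-step flow (connect, build the statement, execute, retrieve) can be sketched with the standard-library DB-API that SQLAlchemy builds on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")         # 1-2. "engine" + connection in one call
stmt = "SELECT 40 + 2"                     # 3. build the SQL statement
result = conn.execute(stmt).fetchone()[0]  # 4. execute and retrieve the result
conn.close()
print(result)  # 42
```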
Step15: 4) Why do we normalize?
Suppose you have the following table. Each row stores a galaxy id, its position and measured fluxes in several bands. Not all fluxes may be present, those missing will have a NULL value.
Galaxy
===========
*id
ra
dec
flux_g
flux_r
flux_i
flux_z
flux_y
Before going on to solve the following SQL queries, think about:
Step16: Select id from galaxies with all fluxes present
Step17: Select all galaxies with 3 fluxes present
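To make the wide table concrete, here is a throwaway sqlite3 sketch (the columns follow the listing above; the rows are invented). Note how the first exercise forces every flux column to be enumerated by hand, and the query must change whenever a band is added:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE galaxy (id INTEGER PRIMARY KEY, ra REAL, dec REAL, "
             "flux_g REAL, flux_r REAL, flux_i REAL, flux_z REAL, flux_y REAL)")
conn.executemany("INSERT INTO galaxy VALUES (?,?,?,?,?,?,?,?)", [
    (1, 10.0, -5.0, 1.1, 2.2, 3.3, 4.4, 5.5),    # all five fluxes present
    (2, 11.0, -6.0, 1.0, None, 3.0, None, 5.0),  # only three fluxes
])

# "All fluxes present": every flux column spelled out explicitly.
all_present = [r[0] for r in conn.execute(
    "SELECT id FROM galaxy "
    "WHERE flux_g IS NOT NULL AND flux_r IS NOT NULL AND flux_i IS NOT NULL "
    "AND flux_z IS NOT NULL AND flux_y IS NOT NULL")]
print(all_present)  # [1]
```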
Step18: Considerations
I cannot stress enough how important it is to have a good relational model in order to be able to work efficiently with all the data.
Non-normalized models may appear "simpler" and "easier", but it is just a mirage. Behind the simple facade, such models are more difficult to maintain and evolve. Also, information present in them can be very hard to extract.
Not all data model requirements can be determined at the beginning. That means we must plan and prepare our data models for change. CHANGE IS UNAVOIDABLE, and data models must be able to adapt to the evolution of requirements.
Exercise
Could you propose a normalized model?
Solution
Step19: Exercise
Select id from galaxies with non-null flux on g band
Step20: Select id from galaxies with 3 fluxes present
Step21: Select id from galaxies with at least 3 fluxes present
Step22: Considerations
What do we do if we want to measure on more bands?
How can we store also the flux_error for every band?
And the magnitude?
And the magnitude error?
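One possible normalized sketch is one row per measured band, which makes those questions cheap to answer: new bands, a flux_error, or a magnitude become extra rows or columns in a single place. The schema below is an assumption for illustration, not the course's solution:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE galaxy (id INTEGER PRIMARY KEY, ra REAL, dec REAL);
CREATE TABLE flux (
    galaxy_id INTEGER REFERENCES galaxy(id),
    band      TEXT,
    value     REAL,
    PRIMARY KEY (galaxy_id, band)
);
""")
conn.executemany("INSERT INTO galaxy VALUES (?, ?, ?)",
                 [(1, 10.0, -5.0), (2, 11.0, -6.0)])
conn.executemany("INSERT INTO flux VALUES (?, ?, ?)",
                 [(1, b, 1.0) for b in "grizy"] + [(2, b, 1.0) for b in "gri"])

# "At least 3 fluxes" is now a count of rows, independent of how many
# bands exist; adding a band does not change the query.
ids = [r[0] for r in conn.execute(
    "SELECT galaxy_id FROM flux GROUP BY galaxy_id "
    "HAVING COUNT(*) >= 3 ORDER BY galaxy_id")]
print(ids)  # [1, 2]
```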
5) SQL security issues
Although mastering SQL is a must if we work with relational databases, it becomes tedious to manually write all those queries. It is also prone to errors, and one has to validate each and every user-provided input or it could suffer from massive and fatal security issues.
The most common security problem with hand-crafted SQL is SQL injection, where the user provides some kind of parameter to the query. If this parameter is not secured enough or does not pass the proper validation, the user is effectively able to run ANY statement on OUR database. It could steal our customers, our credentials, delete all our data or, worse, modify critical data without our knowledge.
As an example, in our online shop we have a section to browse the shoes we sell. When a user selects a particular brand, we display all the models and their price. Suppose we use the following query
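A self-contained sqlite3 sketch of such a brand filter — first with naive string interpolation, then with a bound parameter (table names and rows are invented for this demonstration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shoe (brand TEXT, model TEXT, price REAL)")
conn.execute("INSERT INTO shoe VALUES ('Nike', 'Air', 90.0)")
conn.execute("CREATE TABLE customer (email TEXT, password TEXT)")
conn.execute("INSERT INTO customer VALUES ('ada@example.com', 's3cret')")

evil = "' UNION SELECT email, password FROM customer --"

# Vulnerable: the user input is pasted straight into the SQL text,
# so it rewrites the query and leaks the customer table.
leaked = conn.execute(
    "SELECT model, price FROM shoe WHERE brand = '%s'" % evil).fetchall()

# Safe: with a bound parameter the whole string is just a brand name.
safe = conn.execute(
    "SELECT model, price FROM shoe WHERE brand = ?", (evil,)).fetchall()

print(leaked)  # [('ada@example.com', 's3cret')]
print(safe)    # []
```

Parameter binding (the ? placeholder) is the standard fix: the driver never lets the input be parsed as SQL.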
Step23: Issue 1
Step24: Issue 2
Step25: Issue 3
Step26: Issue 4
Python Code:
%load -r 1-16 solutions/12_01_Databases.py
Explanation: Index
Modeling
E-R Diagrams
Relational model
SQL
SELECT
JOIN
Exercises
Using SQLAlchemy
Why do we normalize?
SQL security issues
1) Modeling
When the amount or the complexity of the data we work with overwhelms us, we look for tools able to help us. Databases are one of the greatest tools for this job. There are several kinds of them, depending on the data they are designed to work with:
document database: books, blog posts, scientific papers, tweets, ...
timeseries database: weather station readings
graph database: friendships, likes, retweets
relational databases: customers, orders, providers, ...
and many more!
Even if in this course we will only discuss relational databases, it is important for you to know that other alternatives exist, and that they might be more suitable for some problem of yours in the future.
Relational databases are the ones mostly used because they are very versatile and can accommodate a very broad spectrum of data.
1.A) E-R Diagrams
To be able to store information in a relational database, it is essential to define the structure of the data, also called relational model. For this, we will use Entity-Relationship (E-R) diagrams.
An E-R diagram is composed from:
- Entities: shall refer to common real-world concepts, and are described by a set of attributes.
- Relations: logical associations between entities, which may also have attributes. By cardinality, we find:
- 1-1 -----
- 1-N ----<
- N-1 >----
- N-N >---<
We will start with a basic example, defining the E-R diagram of the data needed to handle the enrollments of students in a university. As the starting point, we shall have a textual description of the problem:
UAB offers a broad range of subjects for its students.
For each student, we must store their DNI, full name and birth date.
For each subject, we must store the name and its price.
Also, each subject is divided in several groups, so that we can adapt to the number and different schedule of the students. For each group, we must store its name, the schedule (either morning, afternoon or evening) and the teacher's name.
Finally, for each time a student enrolls in a subject, we must store a unique code, the date of the enrollment and whether they have paid yet or not.
With this, we have enough information to build our E-R diagram. First, we shall identify the entities and their corresponding attributes:
- student: dni, full name, birth date
- subject: name, price
- group: name, schedule, teacher name
We proceed identifying the relations:
- A subject has several groups, but a group only corresponds to a unique subject
- A student may be taking several subjects, enrolling on those groups which fit their schedule. In every group, there may be multiple students enrolled. Attributes: code, enrollment date, whether has been paid or not.
The resulting E-R diagram is as follows:
Subject Group Student
======= ---< ================ >---------------< =======
name name code dni
price schedule enrollment date full name
teacher's name has paid birth date
1.B) Relational model
Next step is to transform that E-R diagram into a relational model specification. We shall adhere to the following convention:
- Each entity becomes a table, mapping each attribute to a different column.
- All values of a column belong to the same data domain (data type).
- For each table, there shall be at least one subset of columns that uniquely identifies each row: it is called the primary key.
- 1-1 / 1-N / N-1 relationships between tables are implemented by adding a subset of columns (at least the primary key) from the referenced table into the referencing one. These subsets are called foreign keys.
- N-N relationships are implemented by unfolding them into a pair of 1-N / N-1 relationships with an intermediate table. This table will contain a foreign key to each of the referencing tables and any additional attribute defined for that relationship.
Among all the guidelines a good model should follow, the ones referring to "normalization" are a MUST. The science behind the normalization of a model is vast and complex; for now we can think of it as:
- Values in a cell must be atomic
- Values are stored in only one "place"
- Columns not part of the PK must depend ONLY on the PK
First, we define the tables:
- Student: dni, full_name, birth_date
- Subject: name, price
- Group: name, schedule, teacher_name
- Enrollment: code, enrollment_date, has_paid
Then, we define the primary keys (*) and foreign keys (~):
- Student: *dni, full_name, birth_date
- Subject: *name, price
- Group: *name, schedule, teacher_name, ~subject_name
- Enrollment: *code, enrollment_date, has_paid, ~student_dni, ~group_name
And the diagram:
Subject Group Enrollment Student
======= ---< ============== ---< ============= >--- =======
*name *name *code *dni
price schedule date full_name
teacher_name has_paid birth_date
~subject_name ~group_name
~student_dni
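To make the model concrete, here is a minimal sketch of how these four tables could be declared with Python's stdlib sqlite3 module. The column types are assumptions (the model above only names the columns), and "group" is quoted because GROUP is a reserved word in SQL:

```python
import sqlite3

# In-memory database; "group" is quoted because GROUP is a reserved word in SQL
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE student (
    dni        TEXT PRIMARY KEY,
    full_name  TEXT NOT NULL,
    birth_date TEXT
);
CREATE TABLE subject (
    name  TEXT PRIMARY KEY,
    price REAL
);
CREATE TABLE "group" (
    name         TEXT PRIMARY KEY,
    schedule     TEXT CHECK (schedule IN ('morning', 'afternoon', 'evening')),
    teacher_name TEXT,
    subject_name TEXT REFERENCES subject (name)
);
CREATE TABLE enrollment (
    code            INTEGER PRIMARY KEY,
    enrollment_date TEXT,
    has_paid        INTEGER,
    student_dni     TEXT REFERENCES student (dni),
    group_name      TEXT REFERENCES "group" (name)
);
""")
tables = sorted(r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"))
print(tables)
```

Note how each foreign key from the diagram becomes a REFERENCES clause on the referencing table.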
Exercise
We will describe the data model of an online shoe shop:
Our shop sells lots of different shoes. Each shoe has a brand, a model and a price.
To be able to buy in our shop, customers must register their personal data, such as full name, email and password.
Customers may order multiple shoes, indicating the number of pairs for each shoe, the size and the color. For each order, we will store a unique code, the date and the shipping address.
Solution (E-R)
End of explanation
%load -r 20-41 solutions/12_01_Databases.py
Explanation: Solution (Relational model)
End of explanation
%load_ext sql
%sql sqlite:///../resources/shop.sqlite
Explanation: 2) SQL
Once the relational model has been defined, it needs to be implemented in the relational database.
Most interactions with relational databases are done through textual commands using a specific declarative language called SQL (Structured Query Language).
2.A) SELECT
SQL is very powerful, but for this course we will only have a look at the SELECT statement to filter, group, aggregate and retrieve information from the database.
Here is a simplified syntax of the SELECT statement:
SELECT expression [, ...]
FROM table
[ JOIN table ON condition ]
WHERE condition
GROUP BY expression
HAVING condition
ORDER BY expression
LIMIT number
To learn how to use the SELECT statement, we will use examples on top of the relational model previously described.
Select all columns from student table
SELECT *
FROM student
Select the name of all Subjects, and retrieve only 3 entries
SELECT name
FROM subject
LIMIT 3
Select the average price of all Subjects
SELECT AVG(price)
FROM subject
Select the name and price of the most expensive Subject
SELECT name, price
FROM subject
ORDER BY price DESC
LIMIT 1
Select the names of all Subjects cheaper than 1000
SELECT name
FROM subject
WHERE price < 1000
Select how many Students we have
SELECT COUNT(*)
FROM student
Select how many Groups we have, grouped by schedule
SELECT schedule, COUNT(*)
FROM "group"
GROUP BY schedule
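The basic statements above can be tried out directly with Python's stdlib sqlite3 module; a self-contained sketch with a toy subject table (names and prices invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subject (name TEXT PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO subject VALUES (?, ?)",
                 [("Algebra", 900), ("Databases", 1200), ("Physics", 1500)])

# Average price of all subjects
(avg_price,) = conn.execute("SELECT AVG(price) FROM subject").fetchone()

# Name and price of the most expensive subject
most_expensive = conn.execute(
    "SELECT name, price FROM subject ORDER BY price DESC LIMIT 1").fetchone()

# Names of all subjects cheaper than 1000
cheap = [name for (name,) in conn.execute(
    "SELECT name FROM subject WHERE price < 1000")]

print(avg_price, most_expensive, cheap)  # 1200.0 ('Physics', 1500.0) ['Algebra']
```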
2.B) JOIN
We can also combine data from more than one table. Here is how we can use JOIN to achieve this.
Select all the Groups with their corresponding Subject
SELECT *
FROM "group"
JOIN subject
ON "group".subject_name = subject.name
Select all Subject names given by a teacher named 'Ada Lovelace'
SELECT subject.name
FROM subject
JOIN "group"
ON subject.name = "group".subject_name
WHERE "group".teacher_name = 'Ada Lovelace'
Select the names of all Students who have some pending payment
SELECT student.full_name
FROM student
JOIN enrollment
ON student.dni = enrollment.student_dni
GROUP BY student.dni
HAVING bool_and(enrollment.has_paid) = FALSE
Select the teachers of the Student with DNI 1234
SELECT "group".teacher_name
FROM student
JOIN enrollment
ON student.dni = enrollment.student_dni
JOIN "group"
ON "group".name = enrollment.group_name
WHERE student.dni = 1234
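A runnable sketch of the JOIN idea with stdlib sqlite3 (toy data invented for illustration; note that group must be quoted, since GROUP is a reserved word):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subject (name TEXT PRIMARY KEY);
CREATE TABLE "group" (name TEXT PRIMARY KEY, teacher_name TEXT, subject_name TEXT);
INSERT INTO subject VALUES ('Algebra'), ('Databases');
INSERT INTO "group" VALUES
    ('A1', 'Ada Lovelace', 'Algebra'),
    ('D1', 'Grace Hopper', 'Databases'),
    ('D2', 'Ada Lovelace', 'Databases');
""")

# Subject names taught by Ada Lovelace (DISTINCT removes duplicates)
rows = conn.execute("""
    SELECT DISTINCT subject.name
    FROM subject
    JOIN "group" ON subject.name = "group".subject_name
    WHERE "group".teacher_name = 'Ada Lovelace'
    ORDER BY subject.name
""").fetchall()
print(rows)  # [('Algebra',), ('Databases',)]
```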
2.C) Exercises
End of explanation
%load -r 50-59 solutions/12_01_Databases.py
Explanation: Select the brand, model and price of the cheapest shoe
End of explanation
%load -r 60-69 solutions/12_01_Databases.py
Explanation: Count how many different models does each brand have
End of explanation
%load -r 70-79 solutions/12_01_Databases.py
Explanation: Select how many pairs of shoes were sold in order number 4521
End of explanation
%load -r 80-89 solutions/12_01_Databases.py
Explanation: How many shoes have been sold per brand?
End of explanation
%load -r 90-99 solutions/12_01_Databases.py
Explanation: And per brand and model?
End of explanation
%load -r 100-109 solutions/12_01_Databases.py
Explanation: Select the shipping address of the last 3 orders placed by the customer named 'Joan Clarke'
End of explanation
%load -r 110-119 solutions/12_01_Databases.py
Explanation: Select the brand, model, color and size of all shoes ever bought by the customer named 'Grace Hopper'
End of explanation
%load -r 130-139 solutions/12_01_Databases.py
Explanation: How much did she spend?
End of explanation
dsn = "sqlite:///../resources/shop.sqlite"
Explanation: 3) Using SQLAlchemy
SQLAlchemy is a Python database toolkit that provides a unified and very powerful API for database access.
These are the steps to connect to a database and issue SQL statements:
1. Build a DSN (Data Source Name)
End of explanation
from sqlalchemy import create_engine
engine = create_engine(dsn)
conn = engine.connect()
Explanation: 2. Create an Engine and establish a connection
End of explanation
from sqlalchemy.sql import text
sql = text(
"SELECT * FROM shoe WHERE brand = :name"
).bindparams(
name = 'Maggio'
)
Explanation: 3. Build the SQL statement
End of explanation
for shoe in conn.execute(sql):
print("The «{name}» model from Maggio costs {price}".format(
name = shoe.model,
price = shoe.price
))
Explanation: 4. Execute and retrieve results
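The same bind-parameter idea can be seen with plain stdlib sqlite3, which supports named placeholders directly (toy shoe table invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shoe (brand TEXT, model TEXT, price REAL)")
conn.execute("INSERT INTO shoe VALUES ('Maggio', 'Runner', 80.0)")

# The :name placeholder is bound safely; the value is never pasted into the SQL text
rows = conn.execute(
    "SELECT model, price FROM shoe WHERE brand = :name",
    {"name": "Maggio"},
).fetchall()
print(rows)  # [('Runner', 80.0)]
```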
End of explanation
%load -r 150-159 solutions/12_01_Databases.py
Explanation: 4) Why do we normalize?
Suppose you have the following table. Each row stores a galaxy id, its position and measured fluxes in several bands. Not all fluxes may be present; those missing will have a NULL value.
Galaxy
===========
*id
ra
dec
flux_g
flux_r
flux_i
flux_z
flux_y
Before going on resolving the following SQL sentences, think about:
- Is this model normalized?
- What do we do if we want to measure on more bands?
- How can we also store the flux_error for every band?
Exercise
Select id from galaxies with non-null flux on g band
End of explanation
%load -r 160-169 solutions/12_01_Databases.py
Explanation: Select id from galaxies with all fluxes present
End of explanation
%load -r 170-179 solutions/12_01_Databases.py
Explanation: Select all galaxies with 3 fluxes present
End of explanation
%load -r 180-189 solutions/12_01_Databases.py
Explanation: Considerations
I cannot stress too much how important it is to have a good relational model in order to be able to work efficiently with all the data.
Non-normalized models may appear "simpler" and "easier", but it is just a mirage. Behind the simple facade, such models are more difficult to maintain and evolve. Also, information present in them can be very hard to extract.
Not all data model requirements can be determined at the beginning. That means we must plan and prepare our data models for change. CHANGE IS UNAVOIDABLE, and data models must be able to adapt to the evolution of requirements.
Exercise
Could you propose a normalized model?
Solution
End of explanation
%load -r 190-199 solutions/12_01_Databases.py
Explanation: Exercise
Select id from galaxies with non-null flux on g band
End of explanation
%load -r 200-209 solutions/12_01_Databases.py
Explanation: Select id from galaxies with 3 fluxes present
End of explanation
%load -r 210-219 solutions/12_01_Databases.py
Explanation: Select id from galaxies with at least 3 fluxes present
End of explanation
%%sql
SELECT brand, model, price
FROM shoe
WHERE brand = 'Maggio'
Explanation: Considerations
What do we do if we want to measure on more bands?
How can we also store the flux_error for every band?
And the magnitude?
And the magnitude error?
5) SQL security issues
Although mastering SQL is a must if we work with relational databases, it becomes tedious to manually write all those queries. It is also prone to errors, and one has to validate each and every user-provided input, or the application could suffer from massive and fatal security issues.
The most common security problem with hand-crafted SQL is SQL injection, where the user provides some kind of parameter to the query. If this parameter is not secured enough or does not pass the proper validation, the user is effectively able to run ANY statement on OUR database. They could steal our customers' data or our credentials, delete all our data, or worse, modify critical data without our knowledge.
As an example, in our online shop we have a section to browse the shoes we sell. When a user selects a particular brand, we display all the models and their price. Suppose we use a query like this one:
SELECT brand, model, price
FROM shoe
WHERE brand = {$ parameter $}
Normal use
parameter = 'Maggio'
End of explanation
%%sql
SELECT brand, model, price
FROM shoe
WHERE brand = 'TheBestBrandInDaWorld'
Explanation: Issue 1: Ask for a non-existing brand
parameter = 'TheBestBrandInDaWorld'
End of explanation
%%sql
SELECT brand, model, price
FROM shoe
WHERE brand = ;
Explanation: Issue 2: Make the query fail
parameter = ;
End of explanation
%%sql
SELECT brand, model, price
FROM shoe
WHERE brand = '';SELECT * FROM customer
Explanation: Issue 3: Select anything else
parameter = '';SELECT * FROM customer
End of explanation
%%sql
SELECT brand, model, price
FROM shoe
WHERE brand = '';DROP TABLE you_are_lucky_this_table_does_not_exist
Explanation: Issue 4: DO anything else
parameter = '';DROP TABLE you_are_lucky_this_table_does_not_exist
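All four issues disappear once the parameter is bound instead of interpolated into the SQL text; a minimal sqlite3 sketch (toy table and injection string invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shoe (brand TEXT, model TEXT, price REAL)")
conn.execute("INSERT INTO shoe VALUES ('Maggio', 'Runner', 80.0)")

malicious = "'';DROP TABLE shoe"
# Bound as data: the whole string is compared against brand, nothing is executed
rows = conn.execute(
    "SELECT brand, model, price FROM shoe WHERE brand = ?", (malicious,)
).fetchall()

# No rows match, and the table is still intact
(count,) = conn.execute("SELECT COUNT(*) FROM shoe").fetchone()
print(rows, count)  # [] 1
```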
End of explanation |
7,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EinMix
Step1: Logic of EinMix is very close to the one of einsum.
If you're not familiar with einsum, follow these guides first
Step2: ResMLP — original implementation
Building blocks of ResMLP consist only of linear/affine layers and one activation (GELU). <br />
Let's see how we can rewrite all of the components with Mix.
We start from a reference code for ResMLP block published in the paper
Step3: ResMLP — rewritten
The code below is the result of the first rewriting
Step4: ResMLP — rewritten more
Since here in einops-land we care about code being easy to follow, let's make one more transformation.
We group layers from both branches, and now the order of operations matches the order in which they are written in the code.
Could we go further? Actually, yes - nn.Linear layers can also be replaced by EinMix,
however they are very organic here, since the first and last operations in branch_channels already show the components.
The brevity of nn.Linear is beneficial when the context specifies the tensor shapes.
Other interesting observations
Step5: ResMLP — performance
There is some fear of using einsum because historically it lagged in performance.
Below we run a test and verify that performance didn't change after the transition to EinMix
Step6: TokenMixer from MLPMixer — original code
Let's now delve into MLPMixer. We start from the pytorch implementation by Jake Tae.
We'll focus on two components of MLPMixer that don't exist in convnets. The first component is TokenMixer
Step7: TokenMixer from MLPMixer — reimplemented
We can significantly reduce the amount of code by using EinMix.
The main caveat addressed by the original code is that nn.Linear mixes only the last axis. EinMix can mix any axis.
A sequential structure is always preferred, as it is easier to follow
Intentionally there is no residual connection in TokenMixer, because honestly it's not the Mixer's job and should be done by the caller
Step8: You may also like an independent implementation of MLPMixer from Phil Wang. <br />
Phil solves the issue by repurposing nn.Conv1d to mix on the second dimension. Hacky, but does the job
MLPMixer's patch embeddings — original
The second interesting part of MLPMixer is derived from vision transformers.
In the very beginning an image is split into patches, and each patch is linearly projected into an embedding.
I've taken the part of Jake's code responsible for embedding patches
Step9: MLPMixer's patch embeddings — reimplemented
EinMix does this in a single operation. This may require some training at first to understand.
Let's go step-by-step
Step10: Vision Permutator
As a third example we consider pytorch-like code from the ViP paper.
Vision permutator is only slightly more nuanced than previous models, because
1. it operates on spatial dimensions separately, while MLPMixer and its friends just pack all spatial info into one axis.
2. it splits channels into groups called 'segments'
The paper provides pseudo-code, so I reworked it into a complete module with minimal changes. Enjoy
Step11: That didn't look readable, right?
This code is also very inflexible
Step12: Great, now let's confirm that performance did not deteriorate. | Python Code:
from einops.layers.torch import EinMix as Mix
Explanation: EinMix: universal toolkit for advanced MLP architectures
Recent progress in MLP-based architectures demonstrated that very specific MLPs can compete with convnets and transformers (and even outperform them).
EinMix allows writing such architectures in a more uniform and readable way.
EinMix — building block of MLPs
End of explanation
# other stuff we use
import torch
from torch import nn
from einops.layers.torch import Rearrange, Reduce
Explanation: Logic of EinMix is very close to the one of einsum.
If you're not familiar with einsum, follow these guides first:
https://rockt.github.io/2018/04/30/einsum
https://towardsdatascience.com/einsum-an-underestimated-function-99ca96e2942e
https://theaisummer.com/einsum-attention/
Einsum uniformly describes a number of operations, however EinMix is defined slightly differently.
Here is a linear layer, a common block in sequence modelling (e.g. in NLP/speech), written with einsum
python
weight = <...create tensor...>
result = torch.einsum('tbc,cd->tbd', embeddings, weight)
EinMix counter-part is:
python
mix_channels = Mix('t b c -> t b c_out', weight_shape='c c_out', ...)
result = mix_channels(embeddings)
Main differences compared to plain einsum are:
layer takes care of the weight initialization & management hassle
weight is not in the comprehension
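To make the correspondence concrete, here is the einsum above reproduced with numpy (toy shapes invented for illustration; EinMix additionally manages the weight tensor for you):

```python
import numpy as np

t, b, c, d = 4, 2, 8, 16
embeddings = np.random.rand(t, b, c)
weight = np.random.rand(c, d)

# 'tbc,cd->tbd': only the channel axis is mixed, t and b pass through untouched
result = np.einsum('tbc,cd->tbd', embeddings, weight)

assert result.shape == (t, b, d)
assert np.allclose(result, embeddings @ weight)  # same as a matmul over the last axis
```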
We'll discuss other changes a bit later, now let's implement ResMLP
End of explanation
# No norm layer
class Affine(nn.Module):
def __init__(self, dim):
super().__init__()
self.alpha = nn.Parameter(torch.ones(dim))
self.beta = nn.Parameter(torch.zeros(dim))
def forward(self, x):
return self.alpha * x + self.beta
class Mlp(nn.Module):
def __init__(self, dim):
super().__init__()
self.fc1 = nn.Linear(dim, 4 * dim)
self.act = nn.GELU()
self.fc2 = nn.Linear(4 * dim, dim)
def forward(self, x):
x = self.fc1(x)
x = self.act(x)
x = self.fc2(x)
return x
class ResMLP_Blocks(nn.Module):
def __init__(self, nb_patches, dim, layerscale_init):
super().__init__()
self.affine_1 = Affine(dim)
self.affine_2 = Affine(dim)
self.linear_patches = nn.Linear(nb_patches, nb_patches) #Linear layer on patches
self.mlp_channels = Mlp(dim) #MLP on channels
self.layerscale_1 = nn.Parameter(layerscale_init * torch.ones((dim))) # LayerScale
self.layerscale_2 = nn.Parameter(layerscale_init * torch.ones((dim))) # parameters
def forward(self, x):
res_1 = self.linear_patches(self.affine_1(x).transpose(1,2)).transpose(1,2)
x = x + self.layerscale_1 * res_1
res_2 = self.mlp_channels(self.affine_2(x))
x = x + self.layerscale_2 * res_2
return x
Explanation: ResMLP — original implementation
Building blocks of ResMLP consist only of linear/affine layers and one activation (GELU). <br />
Let's see how we can rewrite all of the components with Mix.
We start from a reference code for ResMLP block published in the paper:
End of explanation
def Mlp(dim):
return nn.Sequential(
nn.Linear(dim, 4 * dim),
nn.GELU(),
nn.Linear(4 * dim, dim),
)
def init(Mix_layer, scale=1.):
Mix_layer.weight.data[:] = scale
if Mix_layer.bias is not None:
Mix_layer.bias.data[:] = 0
return Mix_layer
class ResMLP_Blocks2(nn.Module):
def __init__(self, nb_patches, dim, layerscale_init):
super().__init__()
self.affine1 = init(Mix('b t c -> b t c', weight_shape='c', bias_shape='c', c=dim))
self.affine2 = init(Mix('b t c -> b t c', weight_shape='c', bias_shape='c', c=dim))
self.mix_patches = Mix('b t c -> b t0 c', weight_shape='t t0', bias_shape='t0', t=nb_patches, t0=nb_patches)
self.mlp_channels = Mlp(dim)
self.linear1 = init(Mix('b t c -> b t c', weight_shape='c', c=dim), scale=layerscale_init)
self.linear2 = init(Mix('b t c -> b t c', weight_shape='c', c=dim), scale=layerscale_init)
def forward(self, x):
res1 = self.mix_patches(self.affine1(x))
x = x + self.linear1(res1)
res2 = self.mlp_channels(self.affine2(x))
x = x + self.linear2(res2)
return x
Explanation: ResMLP — rewritten
The code below is the result of the first rewriting:
- combination [transpose -> linear -> transpose back] got nicely packed into a single EinMix (mix_patches) <br />
Mix('b t c -> b t0 c', weight_shape='t t0', bias_shape='t0', t=nb_patches, t0=nb_patches)
- the pattern 'b t c -> b t0 c' tells us that b and c are unperturbed, while tokens t->t0 were mixed
- explicit parameter shapes are also quite insightful
In the new implementation the affine layer is also handled by EinMix: <br />
Mix('b t c -> b t c', weight_shape='c', bias_shape='c', c=dim)
from the pattern you can see that there is no mixing at all, only multiplication and shift
multiplication and shift are defined by weight and bias - and those depend only on the channel
thus the affine transform is per-channel
The Linear layer is also handled by EinMix; the only difference compared to the affine layer is the absence of bias
We specified that the input is 3d and the order is btc, not tbc - this is not written explicitly in the original code
The only step back that we had to do was to change the initialization schema of the EinMix affine and linear layers
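A numpy sketch of what the affine pattern computes - 'b t c -> b t c' with weight_shape='c' is just a per-channel scale and shift (toy shapes invented for illustration):

```python
import numpy as np

b, t, c = 2, 3, 4
x = np.random.rand(b, t, c)
alpha, beta = np.random.rand(c), np.random.rand(c)

# weight_shape='c': no axes are mixed, each channel is scaled and shifted independently
out = np.einsum('btc,c->btc', x, alpha) + beta

assert out.shape == (b, t, c)
assert np.allclose(out, x * alpha + beta)
```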
End of explanation
def init(layer: Mix, scale=1.):
layer.weight.data[:] = scale
if layer.bias is not None:
layer.bias.data[:] = 0
return layer
class ResMLP_Blocks3(nn.Module):
def __init__(self, nb_patches, dim, layerscale_init):
super().__init__()
self.branch_patches = nn.Sequential(
init(Mix('b t c -> b t c', weight_shape='c', c=dim), scale=layerscale_init),
Mix('b t c -> b t0 c', weight_shape='t t0', bias_shape='t0', t=nb_patches, t0=nb_patches),
init(Mix('b t c -> b t c', weight_shape='c', bias_shape='c', c=dim)),
)
self.branch_channels = nn.Sequential(
init(Mix('b t c -> b t c', weight_shape='c', c=dim), scale=layerscale_init),
nn.Linear(dim, 4 * dim),
nn.GELU(),
nn.Linear(4 * dim, dim),
init(Mix('b t c -> b t c', weight_shape='c', bias_shape='c', c=dim)),
)
def forward(self, x):
x = x + self.branch_patches(x)
x = x + self.branch_channels(x)
return x
Explanation: ResMLP — rewritten more
Since here in einops-land we care about code being easy to follow, let's make one more transformation.
We group layers from both branches, and now the order of operations matches the order in which they are written in the code.
Could we go further? Actually, yes - nn.Linear layers can also be replaced by EinMix,
however they are very organic here, since the first and last operations in branch_channels already show the components.
The brevity of nn.Linear is beneficial when the context specifies the tensor shapes.
Other interesting observations:
- it is hard to notice in the original code that the first nn.Linear is preceded by a linear layer (thus the latter is redundant, or can be fused into the former)
- it is hard to notice in the original code that the second nn.Linear is followed by an affine layer (thus the latter is again redundant)
Take time to reorganize your code. This may be quite insightful.
End of explanation
x = torch.zeros([32, 128, 128])
for layer in [
ResMLP_Blocks(128, dim=128, layerscale_init=1.),
ResMLP_Blocks2(128, dim=128, layerscale_init=1.),
ResMLP_Blocks3(128, dim=128, layerscale_init=1.),
# scripted versions
torch.jit.script(ResMLP_Blocks(128, dim=128, layerscale_init=1.)),
torch.jit.script(ResMLP_Blocks2(128, dim=128, layerscale_init=1.)),
torch.jit.script(ResMLP_Blocks3(128, dim=128, layerscale_init=1.)),
]:
%timeit -n 10 y = layer(x)
Explanation: ResMLP — performance
There is some fear of using einsum because historically it lagged in performance.
Below we run a test and verify that performance didn't change after the transition to EinMix
End of explanation
from torch.nn import functional as F
class MLP(nn.Module):
def __init__(self, num_features, expansion_factor, dropout):
super().__init__()
num_hidden = num_features * expansion_factor
self.fc1 = nn.Linear(num_features, num_hidden)
self.dropout1 = nn.Dropout(dropout)
self.fc2 = nn.Linear(num_hidden, num_features)
self.dropout2 = nn.Dropout(dropout)
def forward(self, x):
x = self.dropout1(F.gelu(self.fc1(x)))
x = self.dropout2(self.fc2(x))
return x
class TokenMixer(nn.Module):
def __init__(self, num_features, num_patches, expansion_factor, dropout):
super().__init__()
self.norm = nn.LayerNorm(num_features)
self.mlp = MLP(num_patches, expansion_factor, dropout)
def forward(self, x):
# x.shape == (batch_size, num_patches, num_features)
residual = x
x = self.norm(x)
x = x.transpose(1, 2)
# x.shape == (batch_size, num_features, num_patches)
x = self.mlp(x)
x = x.transpose(1, 2)
# x.shape == (batch_size, num_patches, num_features)
out = x + residual
return out
Explanation: TokenMixer from MLPMixer — original code
Let's now delve into MLPMixer. We start from the pytorch implementation by Jake Tae.
We'll focus on two components of MLPMixer that don't exist in convnets. The first component is TokenMixer:
End of explanation
def TokenMixer(num_features: int, n_patches: int, expansion_factor: int, dropout: float):
n_hidden = n_patches * expansion_factor
return nn.Sequential(
nn.LayerNorm(num_features),
Mix('b hw c -> b hid c', weight_shape='hw hid', bias_shape='hid', hw=n_patches, hid=n_hidden),
nn.GELU(),
nn.Dropout(dropout),
Mix('b hid c -> b hw c', weight_shape='hid hw', bias_shape='hw', hw=n_patches, hid=n_hidden),
nn.Dropout(dropout),
)
Explanation: TokenMixer from MLPMixer — reimplemented
We can significantly reduce the amount of code by using EinMix.
The main caveat addressed by the original code is that nn.Linear mixes only the last axis. EinMix can mix any axis.
A sequential structure is always preferred, as it is easier to follow
Intentionally there is no residual connection in TokenMixer, because honestly it's not the Mixer's job and should be done by the caller
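What the first Mix computes, sketched with numpy: the token axis is mixed in place, with no transposes (toy shapes invented for illustration):

```python
import numpy as np

b, hw, c, hid = 2, 6, 3, 12
x = np.random.rand(b, hw, c)
w = np.random.rand(hw, hid)

# 'b hw c -> b hid c' with weight_shape='hw hid': mix the token axis directly
mixed = np.einsum('btc,th->bhc', x, w)

# the same result the nn.Linear way: transpose, matmul, transpose back
ref = np.swapaxes(np.swapaxes(x, 1, 2) @ w, 1, 2)
assert mixed.shape == (b, hid, c)
assert np.allclose(mixed, ref)
```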
End of explanation
def check_sizes(image_size, patch_size):
sqrt_num_patches, remainder = divmod(image_size, patch_size)
assert remainder == 0, "`image_size` must be divisibe by `patch_size`"
num_patches = sqrt_num_patches ** 2
return num_patches
class Patcher(nn.Module):
def __init__(
self,
image_size=256,
patch_size=16,
in_channels=3,
num_features=128,
):
num_patches = check_sizes(image_size, patch_size)
super().__init__()
# per-patch fully-connected is equivalent to strided conv2d
self.patcher = nn.Conv2d(
in_channels, num_features, kernel_size=patch_size, stride=patch_size
)
def forward(self, x):
patches = self.patcher(x)
batch_size, num_features, _, _ = patches.shape
patches = patches.permute(0, 2, 3, 1)
patches = patches.view(batch_size, -1, num_features)
return patches
Explanation: You may also like independent implementation of MLPMixer from Phil Wang. <br />
Phil solves the issue by repurposing nn.Conv1d to mix on the second dimension. Hacky, but does the job
MLPMixer's patch embeddings — original
The second interesting part of MLPMixer is derived from vision transformers.
In the very beginning an image is split into patches, and each patch is linearly projected into an embedding.
I've taken the part of Jake's code responsible for embedding patches:
End of explanation
def patcher(patch_size=16, in_channels=3, num_features=128):
return Mix('b c_in (h hp) (w wp) -> b (h w) c', weight_shape='c_in hp wp c', bias_shape='c',
c=num_features, hp=patch_size, wp=patch_size, c_in=in_channels)
Explanation: MLPMixer's patch embeddings — reimplemented
EinMix does this in a single operation. This may require some training at first to understand.
Let's go step-by-step:
b c_in (h hp) (w wp) -> - 4-dimensional input tensor (BCHW-ordered) is split into patches of shape hp x wp
weight_shape='c_in hp wp c'. Axes c_in, hp and wp are all absent in the output: three dimensional patch tensor was mixed to produce a vector of length c
-> b (h w) c - output is 3-dimensional. All patches were reorganized from h x w grid to one-dimensional sequence of vectors
We don't need to provide image_size beforehand; the new implementation handles images of different dimensions as long as they can be divided into patches
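A numpy sketch of what this single operation computes (toy shapes invented for illustration; EinMix performs the split, mix and regroup in one layer):

```python
import numpy as np

b, c_in, H, W = 2, 3, 8, 8
hp = wp = 4          # patch size
c = 5                # embedding dimension
x = np.random.rand(b, c_in, H, W)
weight = np.random.rand(c_in, hp, wp, c)

# split H and W into an (h x w) grid of hp x wp patches
h, w = H // hp, W // wp
patches = x.reshape(b, c_in, h, hp, w, wp)
# mix each c_in*hp*wp patch down to a c-vector, flatten the grid to a sequence
out = np.einsum('bchpwq,cpqk->bhwk', patches, weight).reshape(b, h * w, c)

assert out.shape == (b, h * w, c)
```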
End of explanation
class WeightedPermuteMLP(nn.Module):
def __init__(self, H, W, C, S):
super().__init__()
self.proj_h = nn.Linear(H * S, H * S)
self.proj_w = nn.Linear(W * S, W * S)
self.proj_c = nn.Linear(C, C)
self.proj = nn.Linear(C, C)
self.S = S
def forward(self, x):
B, H, W, C = x.shape
S = self.S
N = C // S
x_h = x.reshape(B, H, W, N, S).permute(0, 3, 2, 1, 4).reshape(B, N, W, H*S)
x_h = self.proj_h(x_h).reshape(B, N, W, H, S).permute(0, 3, 2, 1, 4).reshape(B, H, W, C)
x_w = x.reshape(B, H, W, N, S).permute(0, 1, 3, 2, 4).reshape(B, H, N, W*S)
x_w = self.proj_w(x_w).reshape(B, H, N, W, S).permute(0, 1, 3, 2, 4).reshape(B, H, W, C)
x_c = self.proj_c(x)
x = x_h + x_w + x_c
x = self.proj(x)
return x
Explanation: Vision Permutator
As a third example we consider pytorch-like code from the ViP paper.
Vision permutator is only slightly more nuanced than previous models, because
1. it operates on spatial dimensions separately, while MLPMixer and its friends just pack all spatial info into one axis.
2. it splits channels into groups called 'segments'
The paper provides pseudo-code, so I reworked it into a complete module with minimal changes. Enjoy:
End of explanation
class WeightedPermuteMLP_new(nn.Module):
def __init__(self, H, W, C, seg_len):
super().__init__()
assert C % seg_len == 0, f"can't divide {C} into segments of length {seg_len}"
self.mlp_c = Mix('b h w c -> b h w c0', weight_shape='c c0', bias_shape='c0', c=C, c0=C)
self.mlp_h = Mix('b h w (n c) -> b h0 w (n c0)', weight_shape='h c h0 c0', bias_shape='h0 c0',
h=H, h0=H, c=seg_len, c0=seg_len)
self.mlp_w = Mix('b h w (n c) -> b h w0 (n c0)', weight_shape='w c w0 c0', bias_shape='w0 c0',
w=W, w0=W, c=seg_len, c0=seg_len)
self.proj = nn.Linear(C, C)
def forward(self, x):
x = self.mlp_c(x) + self.mlp_h(x) + self.mlp_w(x)
return self.proj(x)
Explanation: That didn't look readable, right?
This code is also very inflexible: the code in the paper did not support a batch dimension, and multiple changes were necessary to allow batch processing. <br />
This process is fragile and can easily result in virtually uncatchable bugs.
Now the good news: each of these long method chains can be replaced with a single EinMix layer:
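A numpy sketch of the trickiest pattern, mlp_h's 'b h w (n c) -> b h0 w (n c0)': channels are split into n segments of length c, and (h, c) pairs are mixed jointly (toy shapes invented for illustration):

```python
import numpy as np

b, h, w, n, c = 2, 4, 4, 2, 3           # total channels C = n * c
x = np.random.rand(b, h, w, n * c)
weight = np.random.rand(h, c, h, c)      # weight_shape='h c h0 c0' with h0=h, c0=c

segments = x.reshape(b, h, w, n, c)      # '(n c)' -> n segments of length c
out = np.einsum('bhwnc,hcHC->bHwnC', segments, weight).reshape(b, h, w, n * c)

assert out.shape == x.shape
```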
End of explanation
x = torch.zeros([32, 32, 32, 128])
for layer in [
WeightedPermuteMLP(H=32, W=32, C=128, S=4),
WeightedPermuteMLP_new(H=32, W=32, C=128, seg_len=4),
# scripted versions
torch.jit.script(WeightedPermuteMLP(H=32, W=32, C=128, S=4)),
torch.jit.script(WeightedPermuteMLP_new(H=32, W=32, C=128, seg_len=4)),
]:
%timeit -n 10 y = layer(x)
Explanation: Great, now let's confirm that performance did not deteriorate.
End of explanation |
7,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repository AOIs
This notebook gathers all the aois used in this repo and saves them as a geojson file.
To grab a single aoi geojson string to add to your notebook, just uncomment the line that prints the aoi. Be sure to add your notebook to the list of notebooks used by that aoi!
To specify a new aoi or grab an aoi and indicate that the aoi is used by an additional notebook, jump to Specify AOIs.
NOTE
Step1: AOI management
Some classes that make it a little easier to manage aois and create compact geojson string representations of the aois.
Step2: AOIs
Setup
Step3: Specify AOIs
Each AOI is specified by instantiating the Aoi class with the aoi name, notebooks that use the aoi, and aoi geometry coordinates, in that order. Once an AOI is created, append it to the Aois object. Be sure to run this through once to generate the geojson file at the end of the notebook.
NOTE
Step4: Write to File | Python Code:
import json
import geojson
from geojson import Polygon, Feature, FeatureCollection
# # Local code for compact printing of geojson
from utils import CompactFeature, CompactFeatureCollection
Explanation: Repository AOIs
This notebook gathers all the aois used in this repo and saves them as a geojson file.
To grab a single aoi geojson string to add to your notebook, just uncomment the line that prints the aoi. Be sure to add your notebook to the list of notebooks used by that aoi!
To specify a new aoi or grab an aoi and indicate that the aoi is used by an additional notebook, jump to Specify AOIs.
NOTE: Be sure to run the entire notebook so the aois get printed to geojson at the end
Setup
End of explanation
class Aoi(CompactFeature):
def __init__(self, name, used_by, coordinates, id=None):
pg = Polygon(coordinates)
prop = {'name': name, 'used by': used_by}
if id:
prop['id'] = id
super(CompactFeature, self).__init__(geometry=pg, properties=prop)
self["type"] = 'Feature'
class Aois(CompactFeatureCollection):
def __init__(self, features=None, **extras):
if features is not None:
for f in features:
self._check_aoi(f)
else:
features = []
super(CompactFeatureCollection, self).__init__(features, **extras)
self.type = 'FeatureCollection'
@staticmethod
def _check_aoi(aoi):
if not isinstance(aoi, Aoi):
raise(Exception('expected instance of Aoi, got {}'.format(type(aoi).__name__)))
def append(self, aoi):
self._check_aoi(aoi)
self.features.append(aoi)
def write(self, filename):
with open(filename, 'w') as dest:
dest.write(self.__str__())
Explanation: AOI management
Some classes that make it a little easier to manage aois and create compact geojson string representations of the aois.
End of explanation
aoi_filename = 'aois.geojson'
aois = Aois()
Explanation: AOIs
Setup
End of explanation
iowa_crops = Aoi(
'iowa_crops',
['crop_classification/classify_cart_l8_ps',
'coverage/calculate_coverage_wgs84'],
[[
[-93.2991294841129, 42.6995987669915],
[-93.2996742314127, 42.8127566482941],
[-93.2884356831875, 42.8619208871588],
[-93.2653319466575, 42.9248165306276],
[-92.9938730936993, 42.9251238519476],
[-92.9938880477425, 42.7736373428868],
[-92.9983961055212, 42.7545290276869],
[-93.0191535706845, 42.6999877495273],
[-93.2991294841129, 42.6995987669915]
]])
aois.append(iowa_crops)
# iowa_crops
sacramento_crops = Aoi('sacramento_crops',
['crop_classification/datasets_identify-1'],
[[
[-121.58460974693298, 38.29170496647727],
[-121.58460974693298, 38.32726528409606],
[-121.5248715877533, 38.32726528409606],
[-121.5248715877533, 38.29170496647727],
[-121.58460974693298, 38.29170496647727]
]])
aois.append(sacramento_crops)
# sacramento_crops
sacramento_crops_2 = Aoi(
'sacramento_crops_2',
['coverage/calculate_coverage_wgs84',
'crossovers/ps_l8_crossovers',
'landsat-ps-comparison/landsat-ps-comparison',
'crop_classification/datasets_identify-2'],
[[
[-121.3113248348236, 38.28911976564886],
[-121.3113248348236, 38.34622533958],
[-121.2344205379486, 38.34622533958],
[-121.2344205379486, 38.28911976564886],
[-121.3113248348236, 38.28911976564886]
]])
aois.append(sacramento_crops_2)
# sacramento_crops_2
golden_gate_park = Aoi('golden_gate_park',
['data-api-tutorials/clip_and_ship_introduction'],
[[
[-122.51103401184083, 37.771596736802074],
[-122.51060485839844, 37.763997637045456],
[-122.45902061462401, 37.76603318676243],
[-122.45773315429689, 37.7654903789825],
[-122.45275497436523, 37.76637243960179],
[-122.45455741882324, 37.775124624817906],
[-122.46597290039062, 37.7738356083287],
[-122.51103401184083, 37.771596736802074]
]])
aois.append(golden_gate_park)
# golden_gate_park
san_francisco_city = Aoi('san_francisco_city',
['data-api-tutorials/planet_cli_introduction'],
[[
[-122.47455596923828, 37.810326435534755],
[-122.49172210693358, 37.795406713958236],
[-122.52056121826172, 37.784282779035216],
[-122.51953124999999, 37.6971326434885],
[-122.38941192626953, 37.69441603823106],
[-122.38872528076173, 37.705010235842614],
[-122.36228942871092, 37.70935613533687],
[-122.34992980957031, 37.727280276860036],
[-122.37773895263672, 37.76230130281876],
[-122.38494873046875, 37.794592824285104],
[-122.40554809570311, 37.813310018173155],
[-122.46150970458983, 37.805715207044685],
[-122.47455596923828, 37.810326435534755]
]])
aois.append(san_francisco_city)
# san_francisco_city
vancouver_island_s = Aoi(
'Vancouver Island South',
['data-api-tutorials/planet_data_api_introduction'],
[[
[-125.29632568359376, 48.37084770238366],
[-125.29632568359376, 49.335861591104106],
[-123.2391357421875, 49.335861591104106],
[-123.2391357421875, 48.37084770238366],
[-125.29632568359376, 48.37084770238366]
]])
aois.append(vancouver_island_s)
# vancouver_island_s
# also ndvi-from-sr/ndvi_planetscope_sr and ndvi/ndvi_planetscope
west_stockton = Aoi('West of Stockton',
['data-api-tutorials/search_and_download_quickstart',
'ndvi-from-sr/ndvi_planetscope_sr',
'ndvi/ndvi_planetscope'
],
[[
[-121.59290313720705, 37.93444993515032],
[-121.27017974853516, 37.93444993515032],
[-121.27017974853516, 38.065932950547484],
[-121.59290313720705, 38.065932950547484],
[-121.59290313720705, 37.93444993515032]
]])
aois.append(west_stockton)
# west_stockton
congo_forest = Aoi('congo_forest',
['forest-monitoring/drc_roads_download'],
[[
[25.42429478260258,1.0255377823058893],
[25.592960813580472,1.0255377823058893],
[25.592960813580472,1.1196578801254304],
[25.42429478260258,1.1196578801254304],
[25.42429478260258,1.0255377823058893]
]])
aois.append(congo_forest)
# congo_forest
# also used in mosaicing/basic_compositing_demo
mt_dana = Aoi('Mt Dana',
['in-class-exercises/mosaicing-and-masking/mosaicing-and-masking-key',
'mosaicing/basic_compositing_demo'
],
[[
[-119.16183471679688,37.82903964181452],
[-119.14947509765626,37.83663205340172],
[-119.13745880126953,37.846392577323286],
[-119.13574218750001,37.856422880849514],
[-119.13883209228514,37.86645181975611],
[-119.12406921386719,37.86916210952103],
[-119.12200927734376,37.875937397778614],
[-119.1212688230194,37.90572368618133],
[-119.13740499245301,37.930641295117404],
[-119.16595458984376,37.92659678938742],
[-119.18243408203126,37.9447389942697],
[-119.2088161252655,37.95257263611974],
[-119.25516469704283,37.92522514171301],
[-119.2630611203827,37.88215253011582],
[-119.25104482399598,37.84474832157969],
[-119.18203695046083,37.82576791597315],
[-119.16183471679688,37.82903964181452]
]])
aois.append(mt_dana)
# mt_dana
hanoi_s = Aoi('S Hanoi',
['label-data/label_maker_pl_geotiff'],
[[
[105.81775409169494, 20.84015810005586],
[105.9111433289945, 20.84015810005586],
[105.9111433289945, 20.925748489914824],
[105.81775409169494, 20.925748489914824],
[105.81775409169494, 20.84015810005586]
]])
aois.append(hanoi_s)
# hanoi_s
myanmar_s = Aoi('S Myanmar',
['orders/ordering_and_delivery',
'orders/tools_and_toolchains'
],
[[
[94.25142652167966,16.942922591218252],
[95.95431374929511,16.587048751480086],
[95.55802198999191,14.851751617790999],
[93.87002080638986,15.209870864141054],
[94.25142652167966,16.942922591218252]
]])
aois.append(myanmar_s)
# myanmar_s
merced_n = Aoi('North of Merced',
['toar/toar_planetscope'],
[[
[-120.53282682046516,37.387200839539496],
[-120.52354973008043,37.420706184624756],
[-120.23858050023456,37.37089230084231],
[-120.24140251133315,37.36077146074112],
[-120.240470649891,37.36060856263429],
[-120.253098881104,37.31418359933723],
[-120.25781268370172,37.29734056539194],
[-120.54356183694901,37.347297317827675],
[-120.53282682046516,37.387200839539496]
]])
aois.append(merced_n)
# merced_n
iowa_crops_2 = Aoi('iowa_crops_2',
['udm/udm', 'udm/udm2'],
[[
[-93.29905768168668,42.69958733505418],
[-93.29965849650593,42.81289914666694],
[-93.28841467631707,42.862022561801815],
[-93.2653691364643,42.924746756580326],
[-92.99388666885065,42.92512385194759],
[-92.99388666885065,42.77359750030287],
[-92.99839277999504,42.75450452618375],
[-93.01916380660347,42.699965805770056],
[-93.29905768168668,42.69958733505418]
]])
aois.append(iowa_crops_2)
# iowa_crops_2
Explanation: Specify AOIs
Each AOI is specified by instantiating the Aoi class with the aoi name, notebooks that use the aoi, and aoi geometry coordinates, in that order. Once an AOI is created, append it to the Aois object. Be sure to run this through once to generate the geojson file at the end of the notebook.
NOTE: Be careful about appending an Aoi to Aois multiple times. This will result in repeat Features in the resulting geojson.
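One way to guard against that is a small append helper that skips names already present. This is a sketch only, using a hypothetical `append_unique` helper and a stand-in for the notebook's `Aoi` class; it assumes the aoi name uniquely identifies a feature:

```python
from collections import namedtuple

# Stand-in for the notebook's Aoi class; only the name matters for this check.
StubAoi = namedtuple('StubAoi', ['name'])

def append_unique(features, aoi):
    """Append aoi to features unless a feature with the same name is already there."""
    if any(existing.name == aoi.name for existing in features):
        return False  # duplicate: skip it so the geojson has no repeat Features
    features.append(aoi)
    return True

features = []
append_unique(features, StubAoi('iowa_crops'))
append_unique(features, StubAoi('iowa_crops'))  # second call is ignored
```

The same check could just as well live inside `Aois.append` itself, next to the `_check_aoi` type guard.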
End of explanation
aois.write(aoi_filename)
Explanation: Write to File
End of explanation |
7,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Let's make our system so that the boosting effects will be quite noticeable.
Step3: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
Step4: Relevant Parameters
Step5: Influence on Light Curves (fluxes)
Step6: Influence on Radial Velocities
Step7: Influence on Meshes | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Beaming and Boosting
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b['rpole@primary'] = 1.8
b['rpole@secondary'] = 0.96
b['teff@primary'] = 10000
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5200
b['gravb_bol@secondary'] = 0.32
b['q@binary'] = 0.96/1.8
b['incl@binary'] = 88
b['period@binary'] = 1.0
b['sma@binary'] = 6.0
Explanation: Let's make our system so that the boosting effects will be quite noticeable.
End of explanation
times = np.linspace(0,1,101)
b.add_dataset('lc', times=times, dataset='lc01')
b.add_dataset('rv', times=times, dataset='rv01')
b.add_dataset('mesh', times=times[::10], dataset='mesh01')
Explanation: We'll add lc, rv, and mesh datasets so that we can see how they're each affected by beaming and boosting.
End of explanation
b.set_value('irrad_method', 'none')
print b['boosting_method@compute']
print b['boosting_method@compute'].choices
Explanation: Relevant Parameters
End of explanation
b.run_compute(boosting_method='none', model='boosting_none')
b.run_compute(boosting_method='linear', model='boosting_linear')
axs, artists = b['lc01'].plot()
leg = plt.legend()
axs, artists = b['lc01'].plot(ylim=(1.01,1.03))
leg = plt.legend()
Explanation: Influence on Light Curves (fluxes)
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['rv01@boosting_none'].plot(ax=ax1)
axs, artists = b['rv01@boosting_linear'].plot(ax=ax2)
Explanation: Influence on Radial Velocities
End of explanation
fig = plt.figure(figsize=(10,6))
ax1, ax2 = fig.add_subplot(121), fig.add_subplot(122)
axs, artists = b['mesh@boosting_none'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax1)
axs, artists = b['mesh@boosting_linear'].plot(time=0.6, facecolor='boost_factors@lc01', edgecolor=None, ax=ax2)
Explanation: Influence on Meshes
End of explanation |
7,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1><center>How to export 🤗 Transformers Models to ONNX ?<h1><center>
[ONNX](http
Step1: How to leverage runtime for inference over an ONNX graph
As mentioned in the introduction, ONNX is a serialization format and many side projects can load the saved graph and run the actual computations from it. Here, we'll focus on the official onnxruntime. The runtime is implemented in C++ for performance reasons and provides API/Bindings for C++, C, C#, Java and Python.
In the case of this notebook, we will use the Python API to highlight how to load a serialized ONNX graph and run inference workload on various backends through onnxruntime.
onnxruntime is available on pypi
Step2: Preparing for an Inference Session
Inference is done using a specific backend definition which turns on hardware specific optimizations of the graph.
Optimizations are basically of three kinds
Step3: Forwarding through our optimized ONNX model running on CPU
When the model is loaded for inference over a specific provider, for instance CPUExecutionProvider as above, an optimized graph can be saved. This graph might include various optimizations, and you might be able to see some higher-level operations in the graph (through Netron for instance) such as
Step4: Benchmarking PyTorch model
Note
Step5: Benchmarking PyTorch & ONNX on CPU
Disclaimer
Step6: Quantization support from transformers
Quantization enables the use of integer (instead of floating-point) arithmetic to run neural network models faster. From a high-level point of view, quantization works by mapping the float32 range of values onto int8 with the least possible loss in model performance.
Hugging Face provides a conversion tool as part of the transformers repository to easily export quantized models to ONNX Runtime. For more information, please refer to the following
Step7: Benchmarking ONNX quantized model
Step8: Show the inference performance of each providers | Python Code:
import sys
!{sys.executable} -m pip install --upgrade git+https://github.com/huggingface/transformers
!{sys.executable} -m pip install --upgrade torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.4.0
!{sys.executable} -m pip install -i https://test.pypi.org/simple/ ort-nightly
!{sys.executable} -m pip install --upgrade onnxruntime-tools
!rm -rf onnx/
from pathlib import Path
from transformers.convert_graph_to_onnx import convert
# Handles all the above steps for you
convert(framework="pt", model="bert-base-cased", output=Path("onnx/bert-base-cased.onnx"), opset=11)
# Tensorflow
# convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11)
Explanation: <h1><center>How to export 🤗 Transformers Models to ONNX ?<h1><center>
[ONNX](http://onnx.ai/) is an open format for machine learning models. It allows you to save your neural network's computation graph in a framework-agnostic way, which might be particularly helpful when deploying deep learning models.
Indeed, businesses might have other requirements _(languages, hardware, ...)_ for which the training framework might not be best suited in inference scenarios. In that context, having a representation of the actual computation graph that can be shared across various business units and logics across an organization might be a desirable component.
Along with the serialization format, ONNX also provides a runtime library which allows efficient and hardware-specific execution of the ONNX graph. This is done through the [onnxruntime](https://microsoft.github.io/onnxruntime/) project and already includes collaborations with many hardware vendors to seamlessly deploy models on various platforms.
Through this notebook we'll walk you through the process to convert a PyTorch or TensorFlow transformers model to the [ONNX](http://onnx.ai/) format and leverage [onnxruntime](https://microsoft.github.io/onnxruntime/) to run inference tasks on models from 🤗 __transformers__
## Exporting 🤗 transformers model to ONNX
---
Exporting models _(either PyTorch or TensorFlow)_ is easily achieved through the conversion tool provided as part of 🤗 __transformers__ repository.
Under the hood the process is sensibly the following:
1. Allocate the model from transformers (**PyTorch or TensorFlow**)
2. Forward dummy inputs through the model; this way **ONNX** can record the set of operations executed
3. Optionally define dynamic axes on input and output tensors
4. Save the graph along with the network parameters
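Step 3 is worth a closer look: the exporter marks the batch and sequence dimensions as symbolic so they are not frozen into the graph. A minimal sketch of what such a `dynamic_axes` mapping can look like (the tensor names here are illustrative, not necessarily the exact ones the conversion tool emits):

```python
# Tensors whose batch (axis 0) and sequence (axis 1) dims stay symbolic.
input_names = ['input_ids', 'attention_mask', 'token_type_ids']

dynamic_axes = {name: {0: 'batch', 1: 'sequence'} for name in input_names}
dynamic_axes['last_hidden_state'] = {0: 'batch', 1: 'sequence'}
dynamic_axes['pooler_output'] = {0: 'batch'}  # pooled output has no sequence axis

# A dict of this shape is what gets handed to the underlying ONNX export call.
```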
End of explanation
!pip install transformers onnxruntime-gpu onnx psutil matplotlib
Explanation: How to leverage runtime for inference over an ONNX graph
As mentioned in the introduction, ONNX is a serialization format and many side projects can load the saved graph and run the actual computations from it. Here, we'll focus on the official onnxruntime. The runtime is implemented in C++ for performance reasons and provides API/Bindings for C++, C, C#, Java and Python.
In the case of this notebook, we will use the Python API to highlight how to load a serialized ONNX graph and run inference workload on various backends through onnxruntime.
onnxruntime is available on pypi:
onnxruntime: ONNX + MLAS (Microsoft Linear Algebra Subprograms)
onnxruntime-gpu: ONNX + MLAS + CUDA
End of explanation
# # An optional step unless
# # you want to get a model with mixed precision for perf accelartion on newer GPU
# # or you are working with Tensorflow(tf.keras) models or pytorch models other than bert
# !pip install onnxruntime-tools
# from onnxruntime_tools import optimizer
# # Mixed precision conversion for bert-base-cased model converted from Pytorch
# optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert', num_heads=12, hidden_size=768)
# optimized_model.convert_model_float32_to_float16()
# optimized_model.save_model_to_file("bert-base-cased.onnx")
# # optimizations for bert-base-cased model converted from Tensorflow(tf.keras)
# optimized_model = optimizer.optimize_model("bert-base-cased.onnx", model_type='bert_keras', num_heads=12, hidden_size=768)
# optimized_model.save_model_to_file("bert-base-cased.onnx")
# optimize transformer-based models with onnxruntime-tools
from onnxruntime_tools import optimizer
from onnxruntime_tools.transformers.onnx_model_bert import BertOptimizationOptions
# disable embedding layer norm optimization for better model size reduction
opt_options = BertOptimizationOptions('bert')
opt_options.enable_embed_layer_norm = False
opt_model = optimizer.optimize_model(
'onnx/bert-base-cased.onnx',
'bert',
num_heads=12,
hidden_size=768,
optimization_options=opt_options)
opt_model.save_model_to_file('bert.opt.onnx')
from os import environ
from psutil import cpu_count
# Constants from the performance optimization available in onnxruntime
# It needs to be done before importing onnxruntime
environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True))
environ["OMP_WAIT_POLICY"] = 'ACTIVE'
from onnxruntime import GraphOptimizationLevel, InferenceSession, SessionOptions, get_all_providers
from contextlib import contextmanager
from dataclasses import dataclass
from time import time
from tqdm import trange
def create_model_for_provider(model_path: str, provider: str) -> InferenceSession:
assert provider in get_all_providers(), f"provider {provider} not found, {get_all_providers()}"
# Few properties that might have an impact on performances (provided by MS)
options = SessionOptions()
options.intra_op_num_threads = 1
options.graph_optimization_level = GraphOptimizationLevel.ORT_ENABLE_ALL
# Load the model as a graph and prepare the CPU backend
session = InferenceSession(model_path, options, providers=[provider])
session.disable_fallback()
return session
@contextmanager
def track_infer_time(buffer: [int]):
start = time()
yield
end = time()
buffer.append(end - start)
@dataclass
class OnnxInferenceResult:
model_inference_time: [int]
optimized_model_path: str
Explanation: Preparing for an Inference Session
Inference is done using a specific backend definition which turns on hardware specific optimizations of the graph.
Optimizations are basically of three kinds:
Constant Folding: Convert static variables to constants in the graph
Deadcode Elimination: Remove nodes never accessed in the graph
Operator Fusing: Merge multiple instruction into one (Linear -> ReLU can be fused to be LinearReLU)
ONNX Runtime automatically applies most optimizations by setting specific SessionOptions.
Note: Some of the latest optimizations that are not yet integrated into ONNX Runtime are available in an optimization script that tunes models for the best performance.
End of explanation
from transformers import BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
cpu_model = create_model_for_provider("onnx/bert-base-cased.onnx", "CPUExecutionProvider")
# Inputs are provided through numpy array
model_inputs = tokenizer("My name is Bert", return_tensors="pt")
inputs_onnx = {k: v.cpu().detach().numpy() for k, v in model_inputs.items()}
# Run the model (None = get all the outputs)
sequence, pooled = cpu_model.run(None, inputs_onnx)
# Print information about outputs
print(f"Sequence output: {sequence.shape}, Pooled output: {pooled.shape}")
Explanation: Forwarding through our optimized ONNX model running on CPU
When the model is loaded for inference over a specific provider, for instance CPUExecutionProvider as above, an optimized graph can be saved. This graph might include various optimizations, and you might be able to see some higher-level operations in the graph (through Netron for instance) such as:
- EmbedLayerNormalization
- Attention
- FastGeLU
These operations are an example of the kind of optimization onnxruntime performs, for instance here gathering multiple operations into a bigger one (Operator Fusing).
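A quick way to see those fused operators is to count the op types in the saved graph; with a real file that would be roughly `Counter(n.op_type for n in onnx.load(path).graph.node)`. The counting itself is shown below on stand-in node records, since no optimized graph ships with this text:

```python
from collections import Counter

# Stand-in node records; with a real model, iterate onnx.load(path).graph.node
# and read each node's op_type attribute instead of these dicts.
nodes = [
    {'op_type': 'EmbedLayerNormalization'},
    {'op_type': 'Attention'},
    {'op_type': 'Attention'},
    {'op_type': 'FastGelu'},
]
op_counts = Counter(n['op_type'] for n in nodes)
```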
End of explanation
from transformers import BertModel
PROVIDERS = {
("cpu", "PyTorch CPU"),
# Uncomment this line to enable GPU benchmarking
# ("cuda:0", "PyTorch GPU")
}
results = {}
for device, label in PROVIDERS:
# Move inputs to the correct device
model_inputs_on_device = {
arg_name: tensor.to(device)
for arg_name, tensor in model_inputs.items()
}
# Add PyTorch to the providers
model_pt = BertModel.from_pretrained("bert-base-cased").to(device)
for _ in trange(10, desc="Warming up"):
model_pt(**model_inputs_on_device)
# Compute
time_buffer = []
for _ in trange(100, desc=f"Tracking inference time on PyTorch"):
with track_infer_time(time_buffer):
model_pt(**model_inputs_on_device)
# Store the result
results[label] = OnnxInferenceResult(
time_buffer,
None
)
Explanation: Benchmarking PyTorch model
Note: PyTorch model benchmark is run on CPU
End of explanation
PROVIDERS = {
("CPUExecutionProvider", "ONNX CPU"),
# Uncomment this line to enable GPU benchmarking
# ("CUDAExecutionProvider", "ONNX GPU")
}
for provider, label in PROVIDERS:
# Create the model with the specified provider
model = create_model_for_provider("onnx/bert-base-cased.onnx", provider)
# Keep track of the inference time
time_buffer = []
# Warm up the model
model.run(None, inputs_onnx)
# Compute
for _ in trange(100, desc=f"Tracking inference time on {provider}"):
with track_infer_time(time_buffer):
model.run(None, inputs_onnx)
# Store the result
results[label] = OnnxInferenceResult(
time_buffer,
model.get_session_options().optimized_model_filepath
)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
# Compute average inference time + std
time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()}
time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(16, 12))
ax.set_ylabel("Avg Inference time (ms)")
ax.set_title("Average inference time (ms) for each provider")
ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std)
plt.show()
Explanation: Benchmarking PyTorch & ONNX on CPU
Disclaimer: results may vary depending on the actual hardware used to run the model
End of explanation
import torch
# Quantize
model_pt_quantized = torch.quantization.quantize_dynamic(
model_pt.to("cpu"), {torch.nn.Linear}, dtype=torch.qint8
)
# Warm up
model_pt_quantized(**model_inputs)
# Benchmark PyTorch quantized model
time_buffer = []
for _ in trange(100):
with track_infer_time(time_buffer):
model_pt_quantized(**model_inputs)
results["PyTorch CPU Quantized"] = OnnxInferenceResult(
time_buffer,
None
)
Explanation: Quantization support from transformers
Quantization enables the use of integer (instead of floating-point) arithmetic to run neural network models faster. From a high-level point of view, quantization works by mapping the float32 range of values onto int8 with the least possible loss in model performance.
Hugging Face provides a conversion tool as part of the transformers repository to easily export quantized models to ONNX Runtime. For more information, please refer to the following:
Hugging Face Documentation on ONNX Runtime quantization supports
Intel's Explanation of Quantization
With this method, the accuracy of the model remains at the same level as the full-precision model. If you want to see benchmarks on model performances, we recommend reading the ONNX Runtime notebook on the subject.
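To make the float32 to int8 mapping concrete, here is a tiny pure-Python sketch of symmetric quantization. This illustrates the idea only; it is not the exact scheme ONNX Runtime or PyTorch uses:

```python
def quantize_symmetric(values, num_bits=8):
    """Map floats to signed ints in [-127, 127] (for 8 bits), then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scale = max(abs(v) for v in values) / qmax  # one scale for the whole tensor
    quantized = [int(round(v / scale)) for v in values]
    dequantized = [q * scale for q in quantized]  # the approximation inference sees
    return quantized, dequantized, scale

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, dq, scale = quantize_symmetric(weights)
```

The quantization error is the gap between `weights` and `dq`; with 8 bits it stays small enough that accuracy is mostly preserved.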
Benchmarking PyTorch quantized model
End of explanation
from transformers.convert_graph_to_onnx import quantize
# Transformers allow you to easily convert float32 model to quantized int8 with ONNX Runtime
quantized_model_path = quantize(Path("bert.opt.onnx"))
# Then you just have to load through ONNX runtime as you would normally do
quantized_model = create_model_for_provider(quantized_model_path.as_posix(), "CPUExecutionProvider")
# Warm up the overall model to have a fair comparaison
outputs = quantized_model.run(None, inputs_onnx)
# Evaluate performances
time_buffer = []
for _ in trange(100, desc=f"Tracking inference time on CPUExecutionProvider with quantized model"):
with track_infer_time(time_buffer):
outputs = quantized_model.run(None, inputs_onnx)
# Store the result
results["ONNX CPU Quantized"] = OnnxInferenceResult(
time_buffer,
quantized_model_path
)
Explanation: Benchmarking ONNX quantized model
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import os
# Compute average inference time + std
time_results = {k: np.mean(v.model_inference_time) * 1e3 for k, v in results.items()}
time_results_std = np.std([v.model_inference_time for v in results.values()]) * 1000
plt.rcdefaults()
fig, ax = plt.subplots(figsize=(16, 12))
ax.set_ylabel("Avg Inference time (ms)")
ax.set_title("Average inference time (ms) for each provider")
ax.bar(time_results.keys(), time_results.values(), yerr=time_results_std)
plt.show()
Explanation: Show the inference performance of each provider
End of explanation |
7,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Metasploit Payload Size
Or
Step1: We'll start with displaying payload size against the fraction of exploits which will work (or not work) for that size. It looks like any payload over 2048 bytes will only work with about 20% of exploits - a little less, in fact! If any of your stagers are just barely above 1024 bytes, it's well worth the effort to trim those last few bytes. Almost a quarter of exploits available in Metasploit have a payload size cutoff at 1024 bytes.
This chart can also be read as a probability
Step2: The humble histogram finishes out our exploration today - note that it's on a log scale. This is significantly less useful than the above charts, since payload size is a cumulative number (i.e. smaller payloads still work in exploits with more than enough space for them), but this view is interesting in that it shows us where there are large clusters of exploits accepting a certain payload size.
But, what about platforms? Are our results being influenced by the category of the exploit? (Windows and Linux vs. Android and iOS, etc.) | Python Code:
%matplotlib inline
import os
import re
import sys
import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
# Set up a path to the Metasploit project's code.
basepath = os.path.join('/', 'home', 'dnelson', 'projects', 'msf-stats')
rootdir = os.path.join(basepath, 'metasploit-framework', 'modules', 'exploits')
rootdir
# Iterate through every exploit module, searching for the amount of space that exploit provides to fit a payload in.
all_sizes = []
for folder, subs, files in os.walk(rootdir):
for filename in files:
with open(os.path.join(folder, filename), 'r') as sploit:
#print("parsing " + filename + "...")
text = sploit.read()
# remove all whitespace
text = ''.join(text.split())
space = re.search("\'Space\'=>(\d+)\,", text)
# Note that if no payload size limit is specified, we simply ignore that module
if space:
all_sizes.append(int(space.group(1)))
print("Modules processed: " + str(len(all_sizes)))
sorted_sizes = np.sort(all_sizes)
cumulative = np.cumsum(sorted_sizes)
print(sorted_sizes)
# looks to me like we should exclude that last one, since it's waaaaaaay larger than the rest
sorted_sizes = sorted_sizes[:-1]
# Plot our sorted sizes against a fraction, 0 to 1, of all exploits.
plt.figure(figsize = (10,5))
plt.plot(sorted_sizes, np.linspace(1,0,len(sorted_sizes)), linewidth=3, color='black')
plt.xlim((0,10000))
plt.xlabel("Payload size (bytes)", size=16)
plt.ylabel("Fraction of exploits which\n accept that payload size", size=16)
# Plot some vertical lines for emphasis; a red line at 512 bytes, green at 1024, and blue at 2048.
plt.axvline(512, color='red', linewidth=2)
plt.axvline(1024, color='green', linewidth=2)
plt.axvline(2048, color='blue', linewidth=2)
plt.show()
Explanation: Metasploit Payload Size
Or: "How big can my stagers be, really?"
The idea here is to parse through the Metasploit Project's available exploits to determine what the distribution of payload sizes is.
This can help make decisions for stager size optimization - if I have a great idea for a stager (or other exploit payload), but can't make it any smaller than 1k, is it worth it? What if it's 2k? And so on.
As it turns out, payloads over 2kb work with less than 20% of available exploits, and payloads over 1kb only work with about 60% - if you can't make your stager under 2k, you shouldn't expect to be able to use it very often at all.
End of explanation
plt.figure(figsize = (10,5))
plt.hist(sorted_sizes, 500)
plt.xlim((0,35000))
plt.yscale('log')
plt.xlabel("Payload Size (bytes)", size=16)
plt.ylabel("Number of Exploits, Log Scale", size=16)
plt.show()
Explanation: We'll start with displaying payload size against the fraction of exploits which will work (or not work) for that size. It looks like any payload over 2048 bytes will only work with about 20% of exploits - a little less, in fact! If any of your stagers are just barely above 1024 bytes, it's well worth the effort to trim those last few bytes. Almost a quarter of exploits available in Metasploit have a payload size cutoff at 1024 bytes.
This chart can also be read as a probability: if I want to send a 1000 byte payload, and I pick an exploit at random (or, I find a vulnerable host at random), I have about a 60% chance that the exploit I end up with will be able to accommodate that payload. If my payload is 500 bytes, that probability becomes more than 90%.
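That reading is just the empirical survival function of the size limits; with the arrays above it is one line, `np.mean(sorted_sizes >= payload_len)`. A self-contained sketch with made-up size limits (the real numbers come from the parsed modules, so substitute `sorted_sizes` in practice):

```python
def fraction_accepting(sizes, payload_len):
    """Fraction of exploits whose payload-space limit fits payload_len bytes."""
    return sum(1 for s in sizes if s >= payload_len) / float(len(sizes))

# Illustrative limits only -- use the notebook's sorted_sizes array in practice.
sizes = [300, 500, 800, 1024, 1024, 2048, 4096, 10000, 20000, 40000]
p_1000 = fraction_accepting(sizes, 1000)  # chance a random exploit fits 1000 bytes
p_500 = fraction_accepting(sizes, 500)
```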
End of explanation
sploitsByCategory = {}
for folder, subs, files in os.walk(rootdir):
for filename in files:
with open(os.path.join(folder, filename), 'r') as sploit:
#print("parsing " + filename + "...")
text = sploit.read()
# remove all whitespace
text = ''.join(text.split())
space = re.search("\'Space\'=>(\d+)\,", text)
# Note that if no payload size limit is specified, we simply ignore that module
if space:
# get the first folder in the exploits directory by
# 1. Stripping off the rootdir using [len(rootdir):]
# 2. Splitting on the folder separator (not platform-independent)
# 3. Taking the first one (the zeroth one is '' since there's always a leading /)
sploitType = folder[len(rootdir):].split('/')[1]
try:
sploitsByCategory[sploitType].append(int(space.group(1)))
except KeyError:
sploitsByCategory[sploitType] = []
sploitsByCategory[sploitType].append(int(space.group(1)))
# Yeah, that's right, a nested list comprehension. To print.
[str(sploits) + ": " + ",".join([str(num) for num in sploitsByCategory[sploits]]) for sploits in sploitsByCategory]
sortedSploits = {}
platforms = ['linux', 'windows', 'unix', 'multi']
for platform in platforms:
sortedSploits[platform] = np.sort(sploitsByCategory[platform])
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k']
plt.figure(figsize = (10,5))
for i, platform in enumerate(platforms):
plt.plot(sortedSploits[platform], np.linspace(1,0,len(sortedSploits[platform])), linewidth=3, color=colors[i])
plt.xlim((0,10000))
plt.xlabel("Payload size (bytes)", size=16)
plt.ylabel("Fraction of exploits which\n accept that payload size", size=16)
plt.legend([platform for platform in platforms])
plt.show()
Explanation: The humble histogram finishes out our exploration today - note that it's on a log scale. This is significantly less useful than the above charts, since payload size is a cumulative number (i.e. smaller payloads still work in exploits with more than enough space for them), but this view is interesting in that it shows us where there are large clusters of exploits accepting a certain payload size.
But, what about platforms? Are our results being influenced by the category of the exploit? (Windows and Linux vs. Android and iOS, etc.)
End of explanation |
7,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step 0 - hyperparams
vocab_size is all the potential words you could have (classification for translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually same size as encoder RNN hidden units in translation but for our case it does not seem really to be a relationship there but we can experiment and find out later, not a priority thing right now
Step1: Once generate data
Step2: Step 1 - collect data
Step3: (1, 682, 7)
(1, 682, 6)
Step4: Step 2 - Build model
Step5: Tensor("data/strided_slice
Step6: Quick test run
Step7: Conclusion
For one instance that is the most easy case it seems to be trainable, let's get the predicted values to observe how it actually looks like
Step 3 training the network | Python Code:
batch_size = 1
#full_train_size = 55820
#train_size = 55800
#small_train_size = 6000 #just because of performance reasons, no statistics behind this decision
#test_size = 6200
data_path = '../../../../Dropbox/data'
phae_path = data_path + '/price_hist_autoencoder'
csv_in = '../price_history_03_seq_start_suddens_trimmed.csv'
from os import path  # needed for the isfile check below
assert path.isfile(csv_in)
npz_one_instance = phae_path + '/price_history_seqs_dates_normed_one_instance_train.npz'
npz_one_instance_prefix = npz_one_instance[:-len('_train.npz')]
Explanation: Step 0 - hyperparams
vocab_size is all the potential words you could have (classification for translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, not a priority thing right now
End of explanation
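Since the decoder's initial state does not have to equal the encoder's final state directly, a learned projection can bridge different sizes (the shape printout later in this notebook shows such encoder/decoder state-process layers around a small bottleneck). A library-free sketch of just the dimension bookkeeping, with made-up sizes and a fixed stand-in weight instead of learned parameters:

```python
# Hypothetical sizes for illustration; the actual runs below use 400 encoder
# and 400 decoder units, with a small processed state in between.
enc_units, bottleneck, dec_units = 10, 2, 10

def project(vec, out_dim):
    # Stand-in for a learned dense layer: a fixed linear map with weight 0.1.
    return [0.1 * sum(vec) for _ in range(out_dim)]

enc_final_state = [1.0] * enc_units          # encoder's last hidden state
code = project(enc_final_state, bottleneck)  # compress to a small code
dec_init_state = project(code, dec_units)    # expand to the decoder size

print(len(enc_final_state), len(code), len(dec_init_state))  # 10 2 10
```

With a bridge like this, encoder and decoder unit counts can be tuned independently.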
df = pd.read_csv(csv_in, index_col=0, quoting=csv.QUOTE_ALL, encoding='utf-8')
df.shape
npz_path = phae_path + '/price_history_full_seqs.npz'
%%time
train_pack, test_pack = PriceHistoryAutoEncoderDatasetGenerator(random_state=random_state).createAndSaveDataset(
csv_in = csv_in,
do_global_norm_scale = True,
save_files_dic = {
'train': npz_path,
'test': None,
}
)
#sku_ids
#inputs
#sequence lengths
#masks
#dates
for bb in train_pack.get_data():
print bb.shape
%%time
merged_dic = PriceHistoryAutoEncoderDatasetGenerator.merge_date_info(npz_path=npz_path)
max_seq_len = merged_dic['inputs'].shape[1]
max_seq_len
assert len(np.argwhere(merged_dic['extra_inputs'][0][:, 0] == -1).flatten()) + merged_dic['sequence_lengths'][0] \
== max_seq_len, "just checking if our conversion occured as it should have"
npz_dates = phae_path + '/price_history_full_seqs_dates.npz'
#np.savez(npz_dates, **merged_dic)
dic = np.load(npz_dates)
for key, val in dic.iteritems():
print key, ",", val.shape
years = dic['extra_inputs'][:, :, 3]
years.shape
years_flat = years.flatten()
years_flat.shape
years_filtered = years_flat[years_flat >= 0]
years_filtered.shape
np.mean(years_filtered)
np.std(years_filtered)
%%time
norm_dates_dic = PriceHistoryAutoEncoderDatasetGenerator.normalize_date_info(npz_dates)
for key, val in norm_dates_dic.iteritems():
print key, ",", val.shape
norm_dates_dic['extra_inputs'][:, :, 0]
npz_dates = phae_path + '/price_history_seqs_dates_normed_train.npz'
np.savez(npz_dates, **norm_dates_dic)
final_dic = np.load(npz_dates)
for key, val in final_dic.iteritems():
print key, ",", val.shape
small_dic = {}
for key, val in final_dic.iteritems():
small_dic[key] = val[:1]
print key, ",", small_dic[key].shape
np.savez(npz_one_instance, **small_dic)
# PriceHistoryDatasetGenerator.create_subsampled(inpath=npz_train_full, target_size=55800,
# outpath=npz_train_trimmed, random_state=random_state)
Explanation: Generate the data (run once)
End of explanation
dp = PriceHistoryAutoEncDataProvider(npz_path=npz_one_instance_prefix, batch_size=1, with_EOS=False)
for data in dp.datalist:
print data.shape
dp = PriceHistoryAutoEncDataProvider(npz_path=npz_one_instance_prefix, batch_size=1, with_EOS=False, ts_max_len=210)
for data in dp.datalist:
print data.shape
Explanation: Step 1 - collect data
End of explanation
# for item in dp.next():
# print item.shape
Explanation: (1, 682, 7)
(1, 682, 6)
End of explanation
model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
# graph = model.getGraph(batch_size=batch_size,
# enc_num_units = 10,
# dec_num_units = 10,
# ts_len=210)
Explanation: Step 2 - Build model
End of explanation
#show_graph(graph)
Explanation: Tensor("data/strided_slice:0", shape=(50, 210), dtype=float32)
210
Tensor("inputs/unstack:0", shape=(50, 7), dtype=float32)
Tensor("encoder_rnn_layer/rnn/gru_cell_209/add:0", shape=(50, 10), dtype=float32)
Tensor("encoder_state_out_process/Elu:0", shape=(50, 2), dtype=float32)
Tensor("decoder_state_in_process/Elu:0", shape=(50, 10), dtype=float32)
210
Tensor("dec_extra_ins/unstack:0", shape=(50, 6), dtype=float32)
decoder_outputs len: 210
Tensor("decoder_rnn_layer/rnn/gru_cell/add:0", shape=(50, 10), dtype=float32)
Tensor("decoder_outs/stack:0", shape=(50, 210, 10), dtype=float32)
Tensor("decoder_outs/Reshape:0", shape=(10500, 10), dtype=float32)
Tensor("readout_affine/Identity:0", shape=(10500, 1), dtype=float32)
Tensor("readout_affine/Reshape:0", shape=(50, 210), dtype=float32)
Tensor("error/Select:0", shape=(50, 210), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
Tensor("error/Mean:0", shape=(), dtype=float32)
End of explanation
def experiment():
return model.run(npz_path=npz_one_instance_prefix,
epochs=100,
batch_size = batch_size,
enc_num_units = 400,
dec_num_units = 400,
ts_len=210,
learning_rate = 1e-4,
preds_gather_enabled = False,
)
dyn_stats = experiment()
dyn_stats.plotStats()
Explanation: Quick test run
End of explanation
model = PriceHistoryAutoencoder(rng=random_state, dtype=dtype, config=config)
batch_size
npz_test = npz_one_instance_prefix + '_test.npz'
assert path.isfile(npz_test)
path.abspath(npz_test)
def experiment():
return model.run(npz_path=npz_one_instance_prefix,
epochs=100,
batch_size = batch_size,
enc_num_units = 400,
dec_num_units = 400,
ts_len=210,
learning_rate = 1e-4,
preds_gather_enabled = True,
)
#%%time
dyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='032_autoencoder_000',
nn_runs_folder = data_path + "/nn_runs")
dyn_stats.plotStats()
plt.show()
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b', label='reals')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: Conclusion
For one instance, which is the easiest case, the model seems to be trainable; let's get the predicted values to see what the fit actually looks like
Step 3 training the network
End of explanation |
7,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling Time
When performing feature engineering with temporal data, carefully selecting the data that is used for any calculation is paramount. By annotating dataframes with a Woodwork time index column and providing a cutoff time during feature calculation, Featuretools will automatically filter out any data after the cutoff time before running any calculations.
What is the Time Index?
The time index is the column in the data that specifies when the data in each row became known. For example, let's examine a table of customer transactions
Step1: In this table, there is one row for every transaction and a transaction_time column that specifies when the transaction took place. This means that transaction_time is the time index because it indicates when the information in each row became known and available for feature calculations. For now, ignore the _ft_last_time column. That is a featuretools-generated column that will be discussed later on.
However, not every datetime column is a time index. Consider the customers dataframe
Step2: Here, we have two time columns, join_date and birthday. While either column might be useful for making features, the join_date should be used as the time index because it indicates when that customer first became available in the dataset.
What is the Cutoff Time?
The cutoff_time specifies the last point in time that a row’s data can be used for a feature calculation. Any data after this point in time will be filtered out before calculating features.
For example, let's consider a dataset of timestamped customer transactions, where we want to predict whether customers 1, 2 and 3 will spend $500 between 04:00 on January 1 and the end of the day.
Step3: Even though the entityset contains the complete transaction history for each customer, only data with a time index up to and including the cutoff time was used to calculate the features above.
Using a Cutoff Time DataFrame
Oftentimes, the training examples for machine learning will come from different points in time. To specify a unique cutoff time for each row of the resulting feature matrix, we can pass a dataframe which includes one column for the instance id and another column for the corresponding cutoff time. These columns can be in any order, but they must be named properly. The column with the instance ids must either be named instance_id or have the same name as the target dataframe index. The column with the cutoff time values must either be named time or have the same name as the target dataframe time_index.
The column names for the instance ids and the cutoff time values should be unambiguous. Passing a dataframe that contains both a column with the same name as the target dataframe index and a column named instance_id will result in an error. Similarly, if the cutoff time dataframe contains both a column with the same name as the target dataframe time_index and a column named time an error will be raised.
Step4: We can now see that every row of the feature matrix is calculated at the corresponding time in the cutoff time dataframe. Because we calculate each row at a different time, it is possible to have a repeat customer. In this case, we calculated the feature vector for customer 1 at both 04:00 and 08:00.
Step5: We can see that the counts for the same feature are lower after we shorten the training window
Step6: Setting a Last Time Index
The training window in Featuretools limits the amount of past data that can be used while calculating a particular feature vector. A row in the dataframe is filtered out if the value of its time index is either before or after the training window. This works for dataframes where a row occurs at a single point in time. However, a row can sometimes exist for a duration.
For example, a customer's session has multiple transactions which can happen at different points in time. If we are trying to count the number of sessions a user has in a given time period, we often want to count all the sessions that had any transaction during the training window. To accomplish this, we need to not only know when a session starts, but also when it ends. The last time that an instance appears in the data is stored in the _ft_last_time column on the dataframe. We can compare the time index and the last time index of the sessions dataframe above
Step7: Featuretools can automatically add last time indexes to every DataFrame in an Entityset by running EntitySet.add_last_time_indexes(). When using a training window, if a last_time_index has been set, Featuretools will check to see if the last_time_index is after the start of the training window. That, combined with the cutoff time, allows DFS to discover which data is relevant for a given training window.
Excluding data at cutoff times
Setting include_cutoff_time to False also impacts how data at the edges
of training windows are included or excluded. Take this slice of data as an example
Step8: Looking at the data, transactions occur every 65 seconds. To check how include_cutoff_time
affects training windows, we can calculate features at the time of a transaction
while using a 65 second training window. This creates a training window with a
transaction at both endpoints of the window. For this example, we'll find the sum
of all transactions for session id 1 that are in the training window.
Step9: With include_cutoff_time=True, the oldest point in the training window
(2014-01-01 00:03:15) is excluded and the cutoff time point is included. This means only transaction 371 is in the training window, so the sum of all transaction amounts is 31.54
Step10: Whereas with include_cutoff_time=False, the oldest point in the window is
included and the cutoff time point is excluded. So in this case transaction 116
is included and transaction 371 is excluded, and the sum is 78.92
Step11: Approximating Features by Rounding Cutoff Times
For each unique cutoff time, Featuretools must perform operations to select the data that’s valid for computations. If there are a large number of unique cutoff times relative to the number of instances for which we are calculating features, the time spent filtering data can add up. By reducing the number of unique cutoff times, we minimize the overhead from searching for and extracting data for feature calculations.
One way to decrease the number of unique cutoff times is to round cutoff times to an earlier point in time. An earlier cutoff time is always valid for predictive modeling — it just means we’re not using some of the data we could potentially use while calculating that feature. So, we gain computational speed by losing a small amount of information.
To understand when an approximation is useful, consider calculating features for a model to predict fraudulent credit card transactions. In this case, an important feature might be, "the average transaction amount for this card in the past". While this value can change every time there is a new transaction, updating it less frequently might not impact accuracy.
fm = ft.calculate_feature_matrix(features=features,
entityset=es_transactions,
cutoff_time=ct_transactions,
approximate="1 day")
In this computation, features that can be approximated will be calculated at 1 day intervals, while features that cannot be approximated (e.g "where did this transaction occur?") will be calculated at the exact cutoff time.
Secondary Time Index
It is sometimes the case that information in a dataset is updated or added after a row has been created. This means that certain columns may actually become known after the time index for a row. Rather than drop those columns to avoid leaking information, we can create a secondary time index to indicate when those columns become known.
Step12: For every trip log, the time index is date_scheduled, which is when the airline decided on the scheduled departure and arrival times, as well as what route will be flown. We don't know the rest of the information about the actual departure/arrival times and the details of any delay at this time. However, it is possible to know everything about how a trip went after it has arrived, so we can use that information at any time after the flight lands.
Using a secondary time index, we can indicate to Featuretools which columns in our flight logs are known at the time the flight is scheduled, plus which are known at the time the flight lands.
<img src="../_static/images/flight_ti_2.png" width="400" align="center" alt="flight secondary time index diagram">
In Featuretools, when adding the dataframe to the EntitySet, we set the secondary time index to be the arrival time like this
Step13: Now, let's calculate the feature matrix
Step14: Let's understand the output
Step15: Then passing in window_size='1h' and num_windows=2 makes one row an hour over the last two hours to produce the following new dataframe. The result can be directly passed into DFS to make features at the different time points. | Python Code:
import pandas as pd
pd.options.display.max_columns = 200
import featuretools as ft
es = ft.demo.load_mock_customer(return_entityset=True, random_seed=0)
es['transactions'].head()
Explanation: Handling Time
When performing feature engineering with temporal data, carefully selecting the data that is used for any calculation is paramount. By annotating dataframes with a Woodwork time index column and providing a cutoff time during feature calculation, Featuretools will automatically filter out any data after the cutoff time before running any calculations.
What is the Time Index?
The time index is the column in the data that specifies when the data in each row became known. For example, let's examine a table of customer transactions:
End of explanation
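The core rule — a row may only be used once its time index has passed, and only rows at or before the cutoff are visible — can be sketched with the standard library alone (hypothetical transactions, no pandas):

```python
from datetime import datetime

# Hypothetical transactions: (transaction_id, time index).
transactions = [
    (1, datetime(2014, 1, 1, 3, 0)),
    (2, datetime(2014, 1, 1, 3, 45)),
    (3, datetime(2014, 1, 1, 5, 30)),
]

def visible_rows(rows, cutoff):
    # A row may be used only if its time index is at or before the cutoff.
    return [tid for tid, t in rows if t <= cutoff]

usable = visible_rows(transactions, datetime(2014, 1, 1, 4, 0))
print(usable)  # -> [1, 2]
```

Transaction 3 became known after the 04:00 cutoff, so it is filtered out before any feature calculation.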
es['customers']
Explanation: In this table, there is one row for every transaction and a transaction_time column that specifies when the transaction took place. This means that transaction_time is the time index because it indicates when the information in each row became known and available for feature calculations. For now, ignore the _ft_last_time column. That is a featuretools-generated column that will be discussed later on.
However, not every datetime column is a time index. Consider the customers dataframe:
End of explanation
fm, features = ft.dfs(entityset=es,
target_dataframe_name='customers',
cutoff_time=pd.Timestamp("2014-1-1 04:00"),
instance_ids=[1,2,3],
cutoff_time_in_index=True)
fm
Explanation: Here, we have two time columns, join_date and birthday. While either column might be useful for making features, the join_date should be used as the time index because it indicates when that customer first became available in the dataset.
What is the Cutoff Time?
The cutoff_time specifies the last point in time that a row’s data can be used for a feature calculation. Any data after this point in time will be filtered out before calculating features.
For example, let's consider a dataset of timestamped customer transactions, where we want to predict whether customers 1, 2 and 3 will spend $500 between 04:00 on January 1 and the end of the day. When building features for this prediction problem, we need to ensure that no data after 04:00 is used in our calculations.
<img src="../_static/images/retail_ct.png" width="400" align="center" alt="retail cutoff time diagram">
End of explanation
cutoff_times = pd.DataFrame()
cutoff_times['customer_id'] = [1, 2, 3, 1]
cutoff_times['time'] = pd.to_datetime(['2014-1-1 04:00',
'2014-1-1 05:00',
'2014-1-1 06:00',
'2014-1-1 08:00'])
cutoff_times['label'] = [True, True, False, True]
cutoff_times
fm, features = ft.dfs(entityset=es,
target_dataframe_name='customers',
cutoff_time=cutoff_times,
cutoff_time_in_index=True)
fm
Explanation: Even though the entityset contains the complete transaction history for each customer, only data with a time index up to and including the cutoff time was used to calculate the features above.
Using a Cutoff Time DataFrame
Oftentimes, the training examples for machine learning will come from different points in time. To specify a unique cutoff time for each row of the resulting feature matrix, we can pass a dataframe which includes one column for the instance id and another column for the corresponding cutoff time. These columns can be in any order, but they must be named properly. The column with the instance ids must either be named instance_id or have the same name as the target dataframe index. The column with the cutoff time values must either be named time or have the same name as the target dataframe time_index.
The column names for the instance ids and the cutoff time values should be unambiguous. Passing a dataframe that contains both a column with the same name as the target dataframe index and a column named instance_id will result in an error. Similarly, if the cutoff time dataframe contains both a column with the same name as the target dataframe time_index and a column named time an error will be raised.
End of explanation
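The naming rules above can be sketched as a small resolver — a hypothetical helper for illustration, not Featuretools' actual code:

```python
def resolve_column(columns, generic_name, dataframe_name):
    # A cutoff-time column may use either the generic name
    # ('instance_id' / 'time') or the target dataframe's own column name;
    # having both present is ambiguous and should raise an error.
    candidates = [c for c in columns if c in (generic_name, dataframe_name)]
    if len(candidates) > 1:
        raise ValueError("ambiguous cutoff time columns: %s" % candidates)
    if not candidates:
        raise ValueError("no column found for %r" % generic_name)
    return candidates[0]

cols = ["customer_id", "time", "label"]
print(resolve_column(cols, "instance_id", "customer_id"))  # -> customer_id
print(resolve_column(cols, "time", "transaction_time"))    # -> time
```

Extra columns such as `label` are simply carried along and never used for feature calculation.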
window_fm, window_features = ft.dfs(entityset=es,
target_dataframe_name="customers",
cutoff_time=cutoff_times,
cutoff_time_in_index=True,
training_window="2 hour")
window_fm
Explanation: We can now see that every row of the feature matrix is calculated at the corresponding time in the cutoff time dataframe. Because we calculate each row at a different time, it is possible to have a repeat customer. In this case, we calculated the feature vector for customer 1 at both 04:00 and 08:00.
Training Window
By default, all data up to and including the cutoff time is used. We can restrict the amount of historical data that is selected for calculations using a "training window."
Here's an example of using a two hour training window:
End of explanation
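A stdlib-only sketch of the combined cutoff plus training-window filter (hypothetical timestamps; boundary handling here matches the default include_cutoff_time=True behaviour discussed later in this section):

```python
from datetime import datetime, timedelta

transactions = [  # hypothetical (id, time index) pairs
    (1, datetime(2014, 1, 1, 1, 30)),
    (2, datetime(2014, 1, 1, 3, 15)),
    (3, datetime(2014, 1, 1, 3, 55)),
]

def in_training_window(t, cutoff, window):
    # Keep data strictly after (cutoff - window) and up to the cutoff itself.
    return cutoff - window < t <= cutoff

cutoff = datetime(2014, 1, 1, 4, 0)
kept = [tid for tid, t in transactions
        if in_training_window(t, cutoff, timedelta(hours=2))]
print(kept)  # -> [2, 3]
```

Transaction 1 is valid for the cutoff but falls outside the two-hour window, which is why the windowed counts come out lower.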
fm[["COUNT(transactions)"]]
window_fm[["COUNT(transactions)"]]
Explanation: We can see that the counts for the same feature are lower after we shorten the training window:
End of explanation
last_time_index_col = es['sessions'].ww.metadata.get('last_time_index')
es['sessions'][['session_start', last_time_index_col]].head()
Explanation: Setting a Last Time Index
The training window in Featuretools limits the amount of past data that can be used while calculating a particular feature vector. A row in the dataframe is filtered out if the value of its time index is either before or after the training window. This works for dataframes where a row occurs at a single point in time. However, a row can sometimes exist for a duration.
For example, a customer's session has multiple transactions which can happen at different points in time. If we are trying to count the number of sessions a user has in a given time period, we often want to count all the sessions that had any transaction during the training window. To accomplish this, we need to not only know when a session starts, but also when it ends. The last time that an instance appears in the data is stored in the _ft_last_time column on the dataframe. We can compare the time index and the last time index of the sessions dataframe above:
End of explanation
df = es['transactions']
df[df["session_id"] == 1].head()
Explanation: Featuretools can automatically add last time indexes to every DataFrame in an Entityset by running EntitySet.add_last_time_indexes(). When using a training window, if a last_time_index has been set, Featuretools will check to see if the last_time_index is after the start of the training window. That, combined with the cutoff time, allows DFS to discover which data is relevant for a given training window.
Excluding data at cutoff times
Setting include_cutoff_time to False also impacts how data at the edges
of training windows are included or excluded. Take this slice of data as an example:
End of explanation
from featuretools.primitives import Sum
sum_log = ft.Feature(
es['transactions'].ww['amount'],
parent_dataframe_name='sessions',
primitive=Sum,
)
cutoff_time = pd.DataFrame({
'session_id': [1],
'time': ['2014-01-01 00:04:20'],
}).astype({'time': 'datetime64[ns]'})
Explanation: Looking at the data, transactions occur every 65 seconds. To check how include_cutoff_time
affects training windows, we can calculate features at the time of a transaction
while using a 65 second training window. This creates a training window with a
transaction at both endpoints of the window. For this example, we'll find the sum
of all transactions for session id 1 that are in the training window.
End of explanation
# Case1. include_cutoff_time = True
actual = ft.calculate_feature_matrix(
features=[sum_log],
entityset=es,
cutoff_time=cutoff_time,
cutoff_time_in_index=True,
training_window='65 seconds',
include_cutoff_time=True,
)
actual
Explanation: With include_cutoff_time=True, the oldest point in the training window
(2014-01-01 00:03:15) is excluded and the cutoff time point is included. This
means only transaction 371 is in the training window, so the sum of all transaction
amounts is 31.54
End of explanation
# Case2. include_cutoff_time = False
actual = ft.calculate_feature_matrix(
features=[sum_log],
entityset=es,
cutoff_time=cutoff_time,
cutoff_time_in_index=True,
training_window='65 seconds',
include_cutoff_time=False,
)
actual
Explanation: Whereas with include_cutoff_time=False, the oldest point in the window is
included and the cutoff time point is excluded. So in this case transaction 116
is included and transaction 371 is excluded, and the sum is 78.92
End of explanation
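The two boundary conventions from this example can be sketched directly (timestamps reduced to plain seconds for brevity):

```python
def window_filter(times, cutoff, window, include_cutoff_time):
    if include_cutoff_time:
        # Exclude the oldest endpoint, include the cutoff: (cutoff-window, cutoff]
        return [t for t in times if cutoff - window < t <= cutoff]
    # Include the oldest endpoint, exclude the cutoff: [cutoff-window, cutoff)
    return [t for t in times if cutoff - window <= t < cutoff]

# Transactions every 65 seconds, cutoff on a transaction, 65-second window.
times = [0, 65, 130, 195, 260]
print(window_filter(times, cutoff=260, window=65, include_cutoff_time=True))   # [260]
print(window_filter(times, cutoff=260, window=65, include_cutoff_time=False))  # [195]
```

With a window exactly as long as the transaction spacing, each mode keeps exactly one of the two endpoint transactions, mirroring the 31.54 vs 78.92 sums above.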
import urllib.request as urllib2
opener = urllib2.build_opener()
opener.addheaders = [('Testing', 'True')]
urllib2.install_opener(opener)
es_flight = ft.demo.load_flight(nrows=100)
es_flight
es_flight['trip_logs'].head(3)
Explanation: Approximating Features by Rounding Cutoff Times
For each unique cutoff time, Featuretools must perform operations to select the data that’s valid for computations. If there are a large number of unique cutoff times relative to the number of instances for which we are calculating features, the time spent filtering data can add up. By reducing the number of unique cutoff times, we minimize the overhead from searching for and extracting data for feature calculations.
One way to decrease the number of unique cutoff times is to round cutoff times to an earlier point in time. An earlier cutoff time is always valid for predictive modeling — it just means we’re not using some of the data we could potentially use while calculating that feature. So, we gain computational speed by losing a small amount of information.
To understand when an approximation is useful, consider calculating features for a model to predict fraudulent credit card transactions. In this case, an important feature might be, "the average transaction amount for this card in the past". While this value can change every time there is a new transaction, updating it less frequently might not impact accuracy.
fm = ft.calculate_feature_matrix(features=features,
entityset=es_transactions,
cutoff_time=ct_transactions,
approximate="1 day")
In this computation, features that can be approximated will be calculated at 1 day intervals, while features that cannot be approximated (e.g "where did this transaction occur?") will be calculated at the exact cutoff time.
Secondary Time Index
It is sometimes the case that information in a dataset is updated or added after a row has been created. This means that certain columns may actually become known after the time index for a row. Rather than drop those columns to avoid leaking information, we can create a secondary time index to indicate when those columns become known.
End of explanation
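Rounding cutoff times down is what reduces the number of unique times the data must be filtered at. Featuretools takes a frequency string for approximate; this stdlib sketch only shows the flooring idea for daily intervals, with hypothetical cutoff times:

```python
from datetime import datetime

cutoffs = [
    datetime(2014, 1, 1, 4, 17),
    datetime(2014, 1, 1, 9, 2),
    datetime(2014, 1, 2, 13, 40),
]

def floor_to_day(t):
    # An earlier cutoff is always valid; we only give up some recency.
    return t.replace(hour=0, minute=0, second=0, microsecond=0)

approx = [floor_to_day(t) for t in cutoffs]
print(len(set(cutoffs)), "->", len(set(approx)))  # 3 -> 2 unique cutoff times
```

Fewer unique cutoff times means fewer passes over the data when selecting valid rows.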
ct_flight = pd.DataFrame()
ct_flight['trip_log_id'] = [14, 14, 92]
ct_flight['time'] = pd.to_datetime(['2016-12-28',
'2017-1-25',
'2016-12-28'])
ct_flight['label'] = [True, True, False]
ct_flight
Explanation: For every trip log, the time index is date_scheduled, which is when the airline decided on the scheduled departure and arrival times, as well as what route will be flown. We don't know the rest of the information about the actual departure/arrival times and the details of any delay at this time. However, it is possible to know everything about how a trip went after it has arrived, so we can use that information at any time after the flight lands.
Using a secondary time index, we can indicate to Featuretools which columns in our flight logs are known at the time the flight is scheduled, plus which are known at the time the flight lands.
<img src="../_static/images/flight_ti_2.png" width="400" align="center" alt="flight secondary time index diagram">
In Featuretools, when adding the dataframe to the EntitySet, we set the secondary time index to be the arrival time like this:
es = ft.EntitySet('Flight Data')
arr_time_columns = ['arr_delay', 'dep_delay', 'carrier_delay', 'weather_delay',
'national_airspace_delay', 'security_delay',
'late_aircraft_delay', 'canceled', 'diverted',
'taxi_in', 'taxi_out', 'air_time', 'dep_time']
es.add_dataframe(
dataframe_name='trip_logs',
dataframe=data,
index='trip_log_id',
make_index=True,
time_index='date_scheduled',
secondary_time_index={'arr_time': arr_time_columns})
By setting a secondary time index, we can still use the delay information from a row, but only when it becomes known.
Flight Predictions
Let's make some features at varying times using the flight example described above. Trip 14 is a flight from CLT to PHX on January 31, 2017 and trip 92 is a flight from PIT to DFW on January 1. We can set any cutoff time before the flight is scheduled to depart, emulating how we would make the prediction at that point in time.
We set two cutoff times for trip 14 at two different times: one which is more than a month before the flight and another which is only 5 days before. For trip 92, we'll only set one cutoff time, three days before it is scheduled to leave.
<img src="../_static/images/flight_ct.png" width="500" align="center" alt="flight cutoff time diagram">
Our cutoff time dataframe looks like this:
End of explanation
fm, features = ft.dfs(entityset=es_flight,
target_dataframe_name='trip_logs',
cutoff_time=ct_flight,
cutoff_time_in_index=True,
agg_primitives=["max"],
trans_primitives=["month"],)
fm[['flights.origin', 'flights.dest', 'label', 'flights.MAX(trip_logs.arr_delay)', 'MONTH(scheduled_dep_time)']]
Explanation: Now, let's calculate the feature matrix:
End of explanation
cutoff_times
Explanation: Let's understand the output:
A row was made for every id-time pair in ct_flight, which is returned as the index of the feature matrix.
The output was sorted by cutoff time. Because of the sorting, it's often helpful to pass in a label with the cutoff time dataframe so that it will remain sorted in the same fashion as the feature matrix. Any additional columns beyond id and cutoff_time will not be used for making features.
The column flights.MAX(trip_logs.arr_delay) is not always defined. It can only have any real values when there are historical flights to aggregate. Notice that, for trip 14, there wasn't any historical data when we made the feature a month in advance, but there were flights to aggregate when we shortened it to 5 days. These are powerful features that are often excluded in manual processes because of how hard they are to make.
Creating and Flattening a Feature Tensor
This function can be paired with DFS to create and flatten a feature tensor rather than making multiple feature matrices at different delays.
The function
takes in the the following parameters:
instance_ids (list, pd.Series, or np.ndarray): A list of instances.
cutoffs (list, pd.Series, or np.ndarray): An associated list of cutoff times.
window_size (str or pandas.DateOffset): The amount of time between each cutoff time in the created time series.
start (datetime.datetime or pd.Timestamp): The first cutoff time in the created time series.
num_windows (int): The number of cutoff times to create in the created time series.
Only two of the three options window_size, start, and num_windows need to be specified to uniquely determine an equally-spaced set of cutoff times at which to compute each instance.
If your cutoff times are the ones used above:
End of explanation
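Given any two of window_size, start, and num_windows, the equally spaced series is fully determined. A hedged stdlib sketch of the window_size + num_windows case (not the actual make_temporal_cutoffs implementation):

```python
from datetime import datetime, timedelta

def temporal_cutoffs(end_time, window_size, num_windows):
    # Work backwards from the final cutoff in equal steps,
    # returning the series oldest-first.
    return [end_time - i * window_size for i in reversed(range(num_windows))]

series = temporal_cutoffs(datetime(2014, 1, 1, 4, 0),
                          window_size=timedelta(hours=1), num_windows=2)
print(series)  # [2014-01-01 03:00, 2014-01-01 04:00]
```

For customer 1's 04:00 cutoff this yields the same 03:00 and 04:00 pair that appears in the temporal_cutoffs dataframe below.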
temporal_cutoffs = ft.make_temporal_cutoffs(cutoff_times['customer_id'],
cutoff_times['time'],
window_size='1h',
num_windows=2)
temporal_cutoffs
fm, features = ft.dfs(entityset=es,
target_dataframe_name='customers',
cutoff_time=temporal_cutoffs,
cutoff_time_in_index=True)
fm
Explanation: Then passing in window_size='1h' and num_windows=2 makes one row an hour over the last two hours to produce the following new dataframe. The result can be directly passed into DFS to make features at the different time points.
End of explanation |
7,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model hooks
Callback and helper function to add hooks in models
Step1: What are hooks?
Hooks are functions you can attach to a particular layer in your model and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction around hooks, but you should jump to HookCallback if you quickly want to implement one (and read the following example ActivationStats).
Forward hooks are functions that take three arguments
Step2: Backward hooks are functions that take three arguments
Step3: Hooks can change the input/output of a layer, or the gradients, print values or shapes. If you want to store something related to theses inputs/outputs, it's best to have your hook associated to a class so that it can put it in the state of an instance of that class.
Hook -
Step4: This will be called during the forward pass if is_forward=True, the backward pass otherwise, and will optionally detach, gather and put on the cpu the (gradient of the) input/output of the model before passing them to hook_func. The result of hook_func will be stored in the stored attribute of the Hook.
Step5: Note
Step6: Context Manager
Since it's very important to remove your Hook even if your code is interrupted by some bug, Hook can be used as context managers.
Step7: The activations stored are the gradients if grad=True, otherwise the output of module. If detach=True they are detached from their history, and if cpu=True, they're put on the CPU.
Step8: Hooks -
Step9: Context Manager
Like Hook , you can use Hooks as context managers.
Step10: The activations stored are the gradients if grad=True, otherwise the output of modules. If detach=True they are detached from their history, and if cpu=True, they're put on the CPU.
Step11: HookCallback -
To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a hook function (plus any element you might need).
Step12: You can either subclass and implement a hook function (along with any event you want) or pass a hook function when initializing. Such a function needs to take three arguments
Step13: Model summary
Step14: The output of _track is expected to be a tuple of the module name, the number of parameters, whether it is trainable, the shape of the layer, and whether or not the size changed. There are three potential groups that can show
Step15: Activation graphs
Step16: The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
Step17: Export - | Python Code:
from fastai.test_utils import *
Explanation: Model hooks
Callback and helper function to add hooks in models
End of explanation
tst_model = nn.Linear(5,3)
def example_forward_hook(m,i,o): print(m,i,o)
x = torch.randn(4,5)
hook = tst_model.register_forward_hook(example_forward_hook)
y = tst_model(x)
hook.remove()
Explanation: What are hooks?
Hooks are functions you can attach to a particular layer in your model and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks). Here we begin with an introduction to hooks, but you should jump to HookCallback if you want to quickly implement one (and read the following ActivationStats example).
Forward hooks are functions that take three arguments: the layer it's applied to, the input of that layer and the output of that layer.
End of explanation
def example_backward_hook(m,gi,go): print(m,gi,go)
hook = tst_model.register_backward_hook(example_backward_hook)
x = torch.randn(4,5)
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
hook.remove()
Explanation: Backward hooks are functions that take three arguments: the layer it's applied to, the gradients of the loss with respect to the input, and the gradients with respect to the output.
End of explanation
#|export
@docs
class Hook():
"Create a hook on `m` with `hook_func`."
def __init__(self, m, hook_func, is_forward=True, detach=True, cpu=False, gather=False):
store_attr('hook_func,detach,cpu,gather')
f = m.register_forward_hook if is_forward else m.register_backward_hook
self.hook = f(self.hook_fn)
self.stored,self.removed = None,False
def hook_fn(self, module, input, output):
"Applies `hook_func` to `module`, `input`, `output`."
if self.detach:
input,output = to_detach(input, cpu=self.cpu, gather=self.gather),to_detach(output, cpu=self.cpu, gather=self.gather)
self.stored = self.hook_func(module, input, output)
def remove(self):
"Remove the hook from the model."
if not self.removed:
self.hook.remove()
self.removed=True
def __enter__(self, *args): return self
def __exit__(self, *args): self.remove()
_docs = dict(__enter__="Register the hook",
__exit__="Remove the hook")
Explanation: Hooks can change the input/output of a layer, or the gradients, print values or shapes. If you want to store something related to these inputs/outputs, it's best to have your hook associated with a class so that it can put it in the state of an instance of that class.
Hook -
End of explanation
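As a toy illustration of the class-based pattern (a pure-Python stand-in — `ToyLayer` and `StoreOutput` are made up here and are not part of PyTorch or fastai):

```python
class ToyLayer:
    "Minimal stand-in for a layer that supports a single forward hook."
    def __init__(self):
        self.hook = None
    def __call__(self, x):
        out = 2 * x
        if self.hook is not None:
            self.hook(self, x, out)  # called as (module, input, output)
        return out

class StoreOutput:
    "Class-based hook: keeps the captured output in instance state."
    def __call__(self, module, inp, out):
        self.stored = out

layer, hook = ToyLayer(), StoreOutput()
layer.hook = hook
layer(3)
print(hook.stored)  # 6
```

The Hook class below follows the same idea, with the registration and removal handled for you.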
tst_model = nn.Linear(5,3)
hook = Hook(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hook.stored, y)
show_doc(Hook.hook_fn)
show_doc(Hook.remove)
Explanation: This will be called during the forward pass if is_forward=True, the backward pass otherwise, and will optionally detach, gather and put on the cpu the (gradient of the) input/output of the model before passing them to hook_func. The result of hook_func will be stored in the stored attribute of the Hook.
End of explanation
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
hook = Hook(tst_model, example_forward_hook)
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
hook.remove()
test_stdout(lambda: tst_model(x), "")
Explanation: Note: It's important to properly remove your hooks from your model when you're done, to avoid them being called again the next time your model is applied to some inputs, and to free the memory that goes with their state.
End of explanation
show_doc(Hook.__enter__)
show_doc(Hook.__exit__)
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
y = tst_model(x)
with Hook(tst_model, example_forward_hook) as h:
test_stdout(lambda: tst_model(x), f"{tst_model} ({x},) {y.detach()}")
test_stdout(lambda: tst_model(x), "")
#|export
def _hook_inner(m,i,o): return o if isinstance(o,Tensor) or is_listy(o) else list(o)
def hook_output(module, detach=True, cpu=False, grad=False):
"Return a `Hook` that stores activations of `module` in `self.stored`"
return Hook(module, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
Explanation: Context Manager
Since it's very important to remove your Hook even if your code is interrupted by some bug, Hook can be used as a context manager.
End of explanation
tst_model = nn.Linear(5,10)
x = torch.randn(4,5)
with hook_output(tst_model) as h:
y = tst_model(x)
test_eq(y, h.stored)
assert not h.stored.requires_grad
with hook_output(tst_model, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
test_close(2*y / y.numel(), h.stored[0])
#cuda
with hook_output(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
test_eq(h.stored.device, torch.device('cpu'))
Explanation: The activations stored are the gradients if grad=True, otherwise the output of module. If detach=True they are detached from their history, and if cpu=True, they're put on the CPU.
End of explanation
#|export
@docs
class Hooks():
"Create several hooks on the modules in `ms` with `hook_func`."
def __init__(self, ms, hook_func, is_forward=True, detach=True, cpu=False):
self.hooks = [Hook(m, hook_func, is_forward, detach, cpu) for m in ms]
def __getitem__(self,i): return self.hooks[i]
def __len__(self): return len(self.hooks)
def __iter__(self): return iter(self.hooks)
@property
def stored(self): return L(o.stored for o in self)
def remove(self):
"Remove the hooks from the model."
for h in self.hooks: h.remove()
def __enter__(self, *args): return self
def __exit__ (self, *args): self.remove()
_docs = dict(stored = "The states saved in each hook.",
__enter__="Register the hooks",
__exit__="Remove the hooks")
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
hooks = Hooks(tst_model, lambda m,i,o: o)
y = tst_model(x)
test_eq(hooks.stored[0], layers[0](x))
test_eq(hooks.stored[1], F.relu(layers[0](x)))
test_eq(hooks.stored[2], y)
hooks.remove()
show_doc(Hooks.stored, name='Hooks.stored')
show_doc(Hooks.remove)
Explanation: Hooks -
End of explanation
show_doc(Hooks.__enter__)
show_doc(Hooks.__exit__)
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
with Hooks(layers, lambda m,i,o: o) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
#|export
def hook_outputs(modules, detach=True, cpu=False, grad=False):
"Return `Hooks` that store activations of all `modules` in `self.stored`"
return Hooks(modules, _hook_inner, detach=detach, cpu=cpu, is_forward=not grad)
Explanation: Context Manager
Like Hook, you can use Hooks as context managers.
End of explanation
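That guarantee comes from the context-manager protocol itself; a generic sketch (unrelated to fastai, with a made-up `Probe` class) shows cleanup running even when the body raises:

```python
class Probe:
    "Generic context manager: __exit__ runs even if the body raises."
    def __enter__(self):
        self.active = True  # analogous to registering a hook
        return self
    def __exit__(self, *exc):
        self.active = False  # analogous to Hook.remove()

p = Probe()
try:
    with p:
        raise RuntimeError("interrupted mid-loop")
except RuntimeError:
    pass
print(p.active)  # False: cleanup ran despite the error
```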
layers = [nn.Linear(5,10), nn.ReLU(), nn.Linear(10,3)]
tst_model = nn.Sequential(*layers)
x = torch.randn(4,5)
with hook_outputs(layers) as h:
y = tst_model(x)
test_eq(h.stored[0], layers[0](x))
test_eq(h.stored[1], F.relu(layers[0](x)))
test_eq(h.stored[2], y)
for s in h.stored: assert not s.requires_grad
with hook_outputs(layers, grad=True) as h:
y = tst_model(x)
loss = y.pow(2).mean()
loss.backward()
g = 2*y / y.numel()
test_close(g, h.stored[2][0])
g = g @ layers[2].weight.data
test_close(g, h.stored[1][0])
g = g * (layers[0](x) > 0).float()
test_close(g, h.stored[0][0])
#cuda
with hook_outputs(tst_model, cpu=True) as h:
y = tst_model.cuda()(x.cuda())
for s in h.stored: test_eq(s.device, torch.device('cpu'))
#|export
def dummy_eval(m, size=(64,64)):
"Evaluate `m` on a dummy input of a certain `size`"
ch_in = in_channels(m)
x = one_param(m).new(1, ch_in, *size).requires_grad_(False).uniform_(-1.,1.)
with torch.no_grad(): return m.eval()(x)
#|export
def model_sizes(m, size=(64,64)):
"Pass a dummy input through the model `m` to get the various sizes of activations."
with hook_outputs(m) as hooks:
_ = dummy_eval(m, size=size)
return [o.stored.shape for o in hooks]
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(model_sizes(m), [[1, 16, 64, 64], [1, 32, 32, 32], [1, 32, 32, 32]])
#|export
def num_features_model(m):
"Return the number of output features for `m`."
sz,ch_in = 32,in_channels(m)
while True:
#Trying for a few sizes in case the model requires a big input size.
try:
return model_sizes(m, (sz,sz))[-1][1]
except Exception as e:
sz *= 2
if sz > 2048: raise e
m = nn.Sequential(nn.Conv2d(5,4,3), nn.Conv2d(4,3,3))
test_eq(num_features_model(m), 3)
m = nn.Sequential(ConvLayer(3, 16), ConvLayer(16, 32, stride=2), ConvLayer(32, 32))
test_eq(num_features_model(m), 32)
Explanation: The activations stored are the gradients if grad=True, otherwise the output of modules. If detach=True they are detached from their history, and if cpu=True, they're put on the CPU.
End of explanation
#|export
def has_params(m):
"Check if `m` has at least one parameter"
return len(list(m.parameters())) > 0
assert has_params(nn.Linear(3,4))
assert has_params(nn.LSTM(4,5,2))
assert not has_params(nn.ReLU())
#|export
@funcs_kwargs
class HookCallback(Callback):
"`Callback` that can be used to register hooks on `modules`"
_methods = ["hook"]
hook = noops
def __init__(self, modules=None, every=None, remove_end=True, is_forward=True, detach=True, cpu=True, include_paramless=False, **kwargs):
store_attr('modules,every,remove_end,is_forward,detach,cpu,include_paramless')
assert not kwargs
def before_fit(self):
"Register the `Hooks` on `self.modules`."
if self.modules is None: self.modules = [m for m in flatten_model(self.model) if self.include_paramless or has_params(m)]
if self.every is None: self._register()
def before_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._register()
def after_batch(self):
if self.every is None: return
if self.training and self.train_iter%self.every==0: self._remove()
def after_fit(self):
"Remove the `Hooks`."
if self.remove_end: self._remove()
def _register(self): self.hooks = Hooks(self.modules, self.hook, self.is_forward, self.detach, self.cpu)
def _remove(self):
if getattr(self, 'hooks', None): self.hooks.remove()
def __del__(self): self._remove()
Explanation: HookCallback -
To make hooks easy to use, we wrapped a version in a Callback where you just have to implement a hook function (plus any element you might need).
End of explanation
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def after_batch(self): test_eq(self.hooks.stored[0], self.pred)
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
class TstCallback(HookCallback):
def __init__(self, modules=None, remove_end=True, detach=True, cpu=False):
super().__init__(modules, None, remove_end, False, detach, cpu)
def hook(self, m, i, o): return o
def after_batch(self):
if self.training:
test_eq(self.hooks.stored[0][0], 2*(self.pred-self.y)/self.pred.shape[0])
learn = synth_learner(n_trn=5, cbs = TstCallback())
learn.fit(1)
show_doc(HookCallback.before_fit)
show_doc(HookCallback.after_fit)
Explanation: You can either subclass and implement a hook function (along with any event you want) or pass a hook function when initializing. Such a function needs to take three arguments: a layer, input and output (for a backward hook, input means the gradient with respect to the inputs, and output the gradient with respect to the output) and can either modify them or update the state according to them.
If not provided, modules will default to the layers of self.model that have a weight attribute. (to include layers of self.model that do not have a weight attribute e.g ReLU, Flatten etc., set include_paramless=True)
Depending on remove_end, the hooks will be properly removed at the end of training (or in case of error). is_forward, detach and cpu are passed to Hooks.
The function called at each forward (or backward) pass is self.hook and must be implemented when subclassing this callback.
End of explanation
#|export
def total_params(m):
"Give the number of parameters of a module and if it's trainable or not"
params = sum([p.numel() for p in m.parameters()])
trains = [p.requires_grad for p in m.parameters()]
return params, (False if len(trains)==0 else trains[0])
test_eq(total_params(nn.Linear(10,32)), (32*10+32,True))
test_eq(total_params(nn.Linear(10,32, bias=False)), (32*10,True))
test_eq(total_params(nn.BatchNorm2d(20)), (20*2, True))
test_eq(total_params(nn.BatchNorm2d(20, affine=False)), (0,False))
test_eq(total_params(nn.Conv2d(16, 32, 3)), (16*32*3*3 + 32, True))
test_eq(total_params(nn.Conv2d(16, 32, 3, bias=False)), (16*32*3*3, True))
#First ih layer 20--10, all else 10--10. *4 for the four gates
test_eq(total_params(nn.LSTM(20, 10, 2)), (4 * (20*10 + 10) + 3 * 4 * (10*10 + 10), True))
#|export
def layer_info(learn, *xb):
"Return layer infos of `model` on `xb` (only support batch first inputs)"
def _track(m, i, o):
params, trainable, shape = '', '', ''
same = any((isinstance(x[0], torch.Tensor) and x[0].shape[1:] == x[1].shape for x in zip(i, o)))
shape = apply(lambda x: x.shape, o)
if hasattr(m, 'weight'): # non activation layer
params, trainable = total_params(m)
return (type(m).__name__, params, trainable, shape, same)
with Hooks(flatten_model(learn.model), _track) as h:
batch = apply(lambda o:o[:1], xb)
train_only_cbs = [cb for cb in learn.cbs if hasattr(cb, '_only_train_loop')]
with learn.removed_cbs(train_only_cbs), learn.no_logging(), learn as l:
r = l.get_preds(dl=[batch], inner=True, reorder=False)
return h.stored
Explanation: Model summary
End of explanation
def _m(): return nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
sample_input = torch.randn((16, 1))
test_eq(layer_info(synth_learner(model=_m()), sample_input), [
('Linear', 100, True, [1, 50], False),
('ReLU', '', '', [1,50], True),
('BatchNorm1d', 100, True, [1, 50], True),
('Linear', 51, True, [1, 1], False)
])
#|hide
# Test for Flatten
def _tst_m(): return nn.Sequential(
nn.Conv2d(1, 2, kernel_size=3, padding=1, stride=2),
nn.ReLU(),
nn.Flatten(),
nn.Linear(8,50),
nn.ReLU(),
nn.BatchNorm1d(50),
nn.Linear(50, 1)
)
sample_input = torch.randn((1,1,4,4))
test_eq(layer_info(synth_learner(model=_tst_m()), sample_input), [
('Conv2d', 20, True, [1, 2, 2, 2], False),
('ReLU', '', '', [1, 2, 2, 2], True),
('Flatten', '', '', [1, 8], False),
('Linear', 450, True, [1, 50], False),
('ReLU', '', '', [1,50], True),
('BatchNorm1d', 100, True, [1, 50], True),
('Linear', 51, True, [1, 1], False)
])
#|hide
# Test for multiple inputs model
class _2InpModel(Module):
def __init__(self):
super().__init__()
self.seq = nn.Sequential(nn.Linear(2,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1))
def forward(self, *inps):
outputs = torch.cat(inps, dim=-1)
return self.seq(outputs)
sample_inputs = (torch.randn(16, 1), torch.randn(16, 1))
learn = synth_learner(model=_2InpModel())
learn.dls.n_inp = 2
test_eq(layer_info(learn, *sample_inputs), [
('Linear', 150, True, [1, 50], False),
('ReLU', '', '', [1,50], True),
('BatchNorm1d', 100, True, [1, 50], True),
('Linear', 51, True, [1, 1], False)
])
#|export
def _get_shapes(o, bs):
inp = o[first(o)] if (isinstance(o, dict)) else o
return ' x '.join([str(bs)] + [str(t) for t in inp[1:]])
def _print_shapes(o, bs):
if isinstance(o, torch.Size): return _get_shapes(o, bs)
elif isinstance(o, tuple): return _get_shapes(o[0], bs)
else: return str([_print_shapes(x, bs) for x in o])
#|export
def module_summary(learn, *xb):
"Print a summary of `model` using `xb`"
#Individual parameters wrapped in ParameterModule aren't called through the hooks in `layer_info`,
# thus are not counted inside the summary
#TODO: find a way to have them counted in param number somehow
infos = layer_info(learn, *xb)
n,bs = 76,find_bs(xb)
inp_sz = _print_shapes(apply(lambda x:x.shape, xb), bs)
res = f"{type(learn.model).__name__} (Input shape: {inp_sz})\n"
res += "=" * n + "\n"
res += f"{'Layer (type)':<20} {'Output Shape':<20} {'Param #':<10} {'Trainable':<10}\n"
res += "=" * n
ps,trn_ps,j = 0,0,0
infos = [o for o in infos if o is not None] #see comment in previous cell
prev_sz = None
for typ,np,trn,sz,chnged in infos:
if sz is None: continue
if j == 0:
res += f'\n{"":<20} {_print_shapes(sz, bs)[:19]:<20}' # to avoid a double line at the top
if not chnged and not prev_sz == sz and j > 0: res += "\n" + "_" * n + "\n" + f'{"":<20} {_print_shapes(sz, bs)[:19]:<20}'
j = 1
res += f"\n{typ:<20} {'':<20} {np:<10} {str(trn):<10}"
if np != '':
ps += np
if trn: trn_ps += np
prev_sz = sz
res += "\n" + "_" * n + "\n"
res += f"\nTotal params: {ps:,}\n"
res += f"Total trainable params: {trn_ps:,}\n"
res += f"Total non-trainable params: {ps - trn_ps:,}\n\n"
return PrettyString(res)
#|export
@patch
def summary(self:Learner):
"Print a summary of the model, optimizer and loss function."
xb = self.dls.train.one_batch()[:getattr(self.dls.train, "n_inp", 1)]
res = module_summary(self, *xb)
res += f"Optimizer used: {self.opt_func}\nLoss function: {self.loss_func}\n\n"
if self.opt is not None:
res += f"Model " + ("unfrozen\n\n" if self.opt.frozen_idx==0 else f"frozen up to parameter group #{self.opt.frozen_idx}\n\n")
res += "Callbacks:\n" + '\n'.join(f" - {cb}" for cb in self.cbs.sorted('order'))
return PrettyString(res)
learn = synth_learner(model=_m())
learn.summary()
#|hide
#cuda
learn = synth_learner(model=_m(), cuda=True)
learn.summary()
#|hide
# Test for multiple output
class _NOutModel(Module):
def __init__(self): self.lin = nn.Linear(5, 6)
def forward(self, x1):
x = torch.randn((10, 5))
return x,self.lin(x)
learn = synth_learner(model = _NOutModel())
learn.summary() # Output Shape should be (50, 16, 256), (1, 16, 256)
#|hide
# Test for the case (as in Book) when learn.dls.train_ds is a list not fastai.data.core.Datasets
train_x = torch.rand((100, 4))
train_y = torch.rand((100, 1))
valid_x = torch.rand((100, 4))
valid_y = torch.rand((100,1))
dset = list(zip(train_x,train_y))
valid_dset = list(zip(valid_x,valid_y))
dl = DataLoader(dset, batch_size=16)
val_dl = DataLoader(valid_dset, batch_size=16)
dls = DataLoaders(dl, val_dl)
simple_net = nn.Sequential(
nn.Linear(4, 2),
nn.ReLU(),
nn.Linear(2,1)
)
learn = Learner(dls, simple_net, loss_func=F.l1_loss)
learn.summary()
Explanation: The output of _track is expected to be a tuple of the module name, the number of parameters, whether it is trainable, the shape of the layer, and whether or not the size changed. There are three potential groups that can show:
* A non-activation layer (Linear, Conv, etc)
* An activation layer
* A pooling layer
Depending on the group, only part of the output is really returned; otherwise it is ''. For non-activation layers everything is returned. Activation layers only return the name, the shape and True for same. Pooling layers will return the name, the new shape, and False for same
End of explanation
#|export
@delegates()
class ActivationStats(HookCallback):
"Callback that record the mean and std of activations."
order=-20
def __init__(self, with_hist=False, **kwargs):
super().__init__(**kwargs)
self.with_hist = with_hist
def before_fit(self):
"Initialize stats."
super().before_fit()
self.stats = L()
def hook(self, m, i, o):
if isinstance(o, tuple): return self.hook_multi_ouput(o)
o = o.float()
res = {'mean': o.mean().item(), 'std': o.std().item(),
'near_zero': (o<=0.05).long().sum().item()/o.numel()}
if self.with_hist: res['hist'] = o.histc(40,0,10)
return res
def hook_multi_ouput(self,o_tuple):
"For outputs of RNN which are [nested] tuples of tensors"
res = []
for o in self._flatten_tuple(o_tuple):
if not(isinstance(o, Tensor)): continue
res.append(self.hook(None, None, o))
return res
def _flatten_tuple(self, o_tuple):
"Recursively flatten a [nested] tuple"
res = []
for it in o_tuple:
if isinstance(it, tuple): res += self._flatten_tuple(it)
else: res += [it]
return tuple(res)
def after_batch(self):
"Take the stored results and puts it in `self.stats`"
if self.training and (self.every is None or self.train_iter%self.every == 0): self.stats.append(self.hooks.stored)
super().after_batch()
def layer_stats(self, idx):
lstats = self.stats.itemgot(idx)
return L(lstats.itemgot(o) for o in ('mean','std','near_zero'))
def hist(self, idx):
res = self.stats.itemgot(idx).itemgot('hist')
return torch.stack(tuple(res)).t().float().log1p()
def color_dim(self, idx, figsize=(10,5), ax=None):
"The 'colorful dimension' plot"
res = self.hist(idx)
if ax is None: ax = subplots(figsize=figsize)[1][0]
ax.imshow(res, origin='lower')
ax.axis('off')
def plot_layer_stats(self, idx):
_,axs = subplots(1, 3, figsize=(12,3))
for o,ax,title in zip(self.layer_stats(idx),axs,('mean','std','% near zero')):
ax.plot(o)
ax.set_title(title)
learn = synth_learner(n_trn=5, cbs = ActivationStats(every=4))
learn.fit(1)
learn.activation_stats.stats
Explanation: Activation graphs
End of explanation
#|hide
def test_activation_stats_include_paramless(include_paramless=False):
"create a learner, fit, then check number of layers"
modl = nn.Sequential(nn.Linear(1,50), nn.ReLU(), nn.BatchNorm1d(50), nn.Linear(50, 1), nn.Flatten())
learn = synth_learner(model=modl, cbs=ActivationStats(every=4, include_paramless=include_paramless))
learn.fit(1)
expected_stats_len = 3
if include_paramless: expected_stats_len = 5 # includes Relu & Flatten
test_eq(expected_stats_len, len(learn.activation_stats.modules))
test_activation_stats_include_paramless(include_paramless=True)
test_activation_stats_include_paramless(include_paramless=False)
def test_every(n_tr, every):
"create a learner, fit, then check number of stats collected"
learn = synth_learner(n_trn=n_tr, cbs=ActivationStats(every=every))
learn.fit(1)
expected_stats_len = math.ceil(n_tr / every)
test_eq(expected_stats_len, len(learn.activation_stats.stats))
for n_tr in [11, 12, 13]:
test_every(n_tr, 4)
test_every(n_tr, 1)
#|hide
class TstCallback(HookCallback):
def hook(self, m, i, o): return o
def before_fit(self):
super().before_fit()
self.means,self.stds = [],[]
def after_batch(self):
if self.training:
self.means.append(self.hooks.stored[0].mean().item())
self.stds.append (self.hooks.stored[0].std() .item())
learn = synth_learner(n_trn=5, cbs = [TstCallback(), ActivationStats()])
learn.fit(1)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("mean"), learn.tst.means)
test_eq(learn.activation_stats.stats.itemgot(0).itemgot("std"), learn.tst.stds)
Explanation: The first line contains the means of the outputs of the model for each batch in the training set, the second line their standard deviations.
End of explanation
#|hide
from nbdev.export import notebook2script
notebook2script()
Explanation: Export -
End of explanation |
7,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TITANIC
Step1: Here's a sqlite database for you to store the data once it's ready
Step2: =>YOUR TURN!
Use pandas to open up the csv.
Read the documentation to find out how
Step3: Exploring the Tabular Data
The file we'll be exploring today, train.csv, is the training set -- it represents
a subset of the full passenger manifest dataset. The rest of the data is in another
file called test.csv - we'll use that later (when we get to Machine Learning).
Let's take a look...
=>YOUR TURN!
Use pandas to view the "head" of the file with the first 10 rows.
Read the documentation to find out how
Step4: What do you see?
- Are there any missing values?
- What kinds of values/numbers/text are there?
- Are the values continuous or categorical?
- Are some variables more sparse than others?
- Are there multiple values in a single column?
=>YOUR TURN!
Use pandas to run summary statistics on the data.
Read the documentation to find out how
Step5: What can we infer from the summary statistics?
- How many missing values does the 'Age' column have?
- What's the age distribution?
- What percent of the passengers survived?
- How many passengers belonged to Class 3?
- Are there any outliers in the 'Fare' column?
=>YOUR TURN!
Use pandas to get the median for the Age column.
Read the documentation to find out how
Step6: =>YOUR TURN!
Use pandas to find the number of unique values in the Ticket column.
Read the documentation to find out how
Step7: Visually Exploring the Data
Let's look at a histogram of the age distribution.
What can you tell from the graph?
Step8: Now let's look at a histogram of the fares.
What does it tell you?
Step9: Dealing with Missing Values
Part of data wrangling is figuring out how to deal with missing values.
But before you decide, think about which variables are likely to be predictive
of survival. Which ones do you think will be the best predictors?
Age
Age is likely to play a role, so we'll probably want to estimate or 'impute'
the missing values in some way.
Fare
There are a lot of extremes on the high end and low end for ticket fares.
How should we handle them?
Other Variables
What do YOU think??
=>YOUR TURN!
Use pandas to get the sum of all the null values in the Cabin column.
Read the documentation to find out how
Step10: =>YOUR TURN!
Use pandas to drop the Ticket column.
Read the documentation to find out how
Step11: =>YOUR TURN!
Use pandas to calculate the mean age and fill all the null values in the Age column with that number.
Read the documentation to find out how
Step12: Save Your Work
...you will need it in a few weeks!
=>YOUR TURN!
Use pandas to write your dataframe to our sqlite database.
Read the documentation to find out how | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pandas.io.sql as pd_sql
import sqlite3 as sql
%matplotlib inline
Explanation: TITANIC: Wrangling the Passenger Manifest
Exploratory Analysis with Pandas
This tutorial is based on the Kaggle Competition,
"Predicting Survival Aboard the Titanic"
https://www.kaggle.com/c/titanic
Be sure to read the README before you begin!
See also:
http://www.analyticsvidhya.com/blog/2014/08/baby-steps-python-performing-exploratory-analysis-python/
http://www.analyticsvidhya.com/blog/2014/09/data-munging-python-using-pandas-baby-steps-python/
End of explanation
con = sql.connect("titanic.db")
Explanation: Here's a sqlite database for you to store the data once it's ready:
End of explanation
# Use pandas to open the csv.
# You'll have to put in the filepath
# It should look something like "../titanic/data/train.csv"
df =
Explanation: =>YOUR TURN!
Use pandas to open up the csv.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
End of explanation
# Use pandas to view the first 10 rows.
Explanation: Exploring the Tabular Data
The file we'll be exploring today, train.csv, is the training set -- it represents
a subset of the full passenger manifest dataset. The rest of the data is in another
file called test.csv - we'll use that later (when we get to Machine Learning).
Let's take a look...
=>YOUR TURN!
Use pandas to view the "head" of the file with the first 10 rows.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html
End of explanation
# Use pandas to get the summary statistics.
Explanation: What do you see?
- Are there any missing values?
- What kinds of values/numbers/text are there?
- Are the values continuous or categorical?
- Are some variables more sparse than others?
- Are there multiple values in a single column?
=>YOUR TURN!
Use pandas to run summary statistics on the data.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html
End of explanation
# Use pandas to get the median age.
Explanation: What can we infer from the summary statistics?
- How many missing values does the 'Age' column have?
- What's the age distribution?
- What percent of the passengers survived?
- How many passengers belonged to Class 3?
- Are there any outliers in the 'Fare' column?
=>YOUR TURN!
Use pandas to get the median for the Age column.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.median.html
End of explanation
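As a sanity check on what a median is, the standard library's statistics.median can be run on a toy sample (this is just an illustration, not a substitute for the pandas call on the real Age column):

```python
from statistics import median

toy_ages = [22.0, 38.0, 26.0, 35.0, 54.0]
print(median(toy_ages))  # 35.0 (middle value of the sorted sample)
```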
# Use pandas to count the number of unique Ticket values.
Explanation: =>YOUR TURN!
Use pandas to find the number of unique values in the Ticket column.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nunique.html
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(df['Age'], bins = 10, range = (df['Age'].min(),df['Age'].max()))
plt.title('Age distribution')
plt.xlabel('Age')
plt.ylabel('Count of Passengers')
plt.show()
Explanation: Visually Exploring the Data
Let's look at a histogram of the age distribution.
What can you tell from the graph?
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(df['Fare'], bins = 10, range = (df['Fare'].min(),df['Fare'].max()))
plt.title('Fare distribution')
plt.xlabel('Fare')
plt.ylabel('Count of Passengers')
plt.show()
Explanation: Now let's look at a histogram of the fares.
What does it tell you?
End of explanation
# Use pandas to sum the null Cabin values.
Explanation: Dealing with Missing Values
Part of data wrangling is figuring out how to deal with missing values.
But before you decide, think about which variables are likely to be predictive
of survival. Which ones do you think will be the best predictors?
Age
Age is likely to play a role, so we'll probably want to estimate or 'impute'
the missing values in some way.
Fare
There are a lot of extremes on the high end and low end for ticket fares.
How should we handle them?
Other Variables
What do YOU think??
=>YOUR TURN!
Use pandas to get the sum of all the null values in the Cabin column.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html
End of explanation
# Use pandas to drop the Ticket column.
Explanation: =>YOUR TURN!
Use pandas to drop the Ticket column.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html
End of explanation
# Use pandas to get the mean Age.
# Use pandas to fill in the null Age values with the mean.
Explanation: =>YOUR TURN!
Use pandas to calculate the mean age and fill all the null values in the Age column with that number.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mean.html
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html
End of explanation
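Conceptually, mean imputation works like this on a plain Python list (a sketch of the idea only — the pandas mean/fillna one-liner is still your exercise):

```python
ages = [22.0, None, 38.0, None, 26.0]  # toy column with missing values
known = [a for a in ages if a is not None]
mean_age = sum(known) / len(known)
imputed = [mean_age if a is None else a for a in ages]
print(imputed)  # the two Nones are replaced by the mean (~28.67)
```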
# Use pandas to save your dataframe to a sqlite database.
Explanation: Save Your Work
...you will need it in a few weeks!
=>YOUR TURN!
Use pandas to write your dataframe to our sqlite database.
Read the documentation to find out how:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html
End of explanation |
7,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_text = source_text.split('\n')
source_id_text = []
for s in source_text:
source_sentence = [source_vocab_to_int[w] for w in s.split() if w != '']
source_id_text.append(source_sentence)
target_text = target_text.split('\n')
target_id_text = []
for s in target_text:
target_sentence = [target_vocab_to_int[w] for w in s.split() if w != '']
target_sentence.append(target_vocab_to_int['<EOS>'])
target_id_text.append(target_sentence)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
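A quick sanity check of the conversion logic above, using tiny made-up vocabularies (these dictionaries are illustrative, not the project's real ones built by the helper):

```python
source_vocab = {"new": 0, "jersey": 1, "is": 2}
target_vocab = {"<EOS>": 0, "new": 1, "jersey": 2, "est": 3}

source_ids = [[source_vocab[w] for w in s.split()] for s in ["new jersey is"]]
# Target sentences get <EOS> appended so the decoder can learn where to stop.
target_ids = [[target_vocab[w] for w in s.split()] + [target_vocab["<EOS>"]]
              for s in ["new jersey est"]]
print(source_ids)  # [[0, 1, 2]]
print(target_ids)  # [[1, 2, 3, 0]]
```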
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input_ = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_sequence_length')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_, targets, learning_rate, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
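What the strided slice plus concat above computes, shown on a toy batch with plain lists (the ids, GO = 1 and EOS = 2, are made up for the sketch):

```python
GO = 1
target_batch = [[4, 5, 6, 2],   # each row ends with <EOS> (= 2 here)
                [7, 8, 2, 0]]   # a shorter, padded row

# Drop the last id of every row, then prepend <GO>: the decoder is fed
# its own targets shifted one step to the right.
dec_input = [[GO] + row[:-1] for row in target_batch]
print(dec_input)  # [[1, 4, 5, 6], [1, 7, 8, 2]]
```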
from imp import reload
reload(tests)
def make_drop_cell(rnn_size, keep_prob):
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
cell_stack = tf.contrib.rnn.MultiRNNCell([make_drop_cell(rnn_size, keep_prob) for _ in range(num_layers)])
output, state = tf.nn.dynamic_rnn(cell_stack, embed, sequence_length=source_sequence_length, dtype=tf.float32)
return output, state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
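The embedding step can be pictured as a plain table lookup. Here is a numpy sketch, where the small random matrix stands in for the learned weights that `embed_sequence` would train:

```python
import numpy as np

vocab_size, embed_dim = 4, 3
rng = np.random.default_rng(0)
embedding = rng.normal(size=(vocab_size, embed_dim))  # one row per word id

word_ids = np.array([[0, 2, 3]])     # a batch of one three-word sentence
embedded = embedding[word_ids]       # fancy indexing is the lookup
print(embedded.shape)  # (1, 3, 3): batch, time steps, embedding size
```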
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, output_keep_prob=keep_prob)
helper = tf.contrib.seq2seq.TrainingHelper(
inputs=dec_embed_input,
sequence_length=target_sequence_length,
time_major=False)
decoder = tf.contrib.seq2seq.BasicDecoder(
cell=dec_cell,
helper=helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_summary_length)
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings,
start_tokens,
end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(
cell=dec_cell,
helper=helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_outputs, decoder_state = tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
return decoder_outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
# 1. Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
return cell
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's output at each time
# step into a choice from the target vocabulary
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
# 4. Training decoder
with tf.variable_scope("decode"):
logits_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)
# 5. Inference decoder
with tf.variable_scope("decode", reuse=True):
logits_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return logits_train, logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
enc_output, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
logits_train, logits_infer = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)
return logits_train, logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 15
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 25
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
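A quick check of the padding helper above on a toy batch, with 0 standing in for the `<PAD>` id (the function body is restated here so the sketch runs on its own):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad every sentence out to the length of the longest one in the batch.
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

padded = pad_sentence_batch([[5, 6, 7], [8], [9, 10]], 0)
print(padded)  # [[5, 6, 7], [8, 0, 0], [9, 10, 0]]
```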
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
sentence = sentence.lower()
sentence_id = []
for s in sentence.split():
try:
sentence_id.append(vocab_to_int[s])
except KeyError:
sentence_id.append(vocab_to_int['<UNK>'])
return sentence_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
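The lowercase-then-fall-back-to-`<UNK>` behavior can be checked on a toy vocabulary (the dictionary and function name below are illustrative, not the project's):

```python
vocab_to_int = {"he": 0, "saw": 1, "a": 2, "truck": 3, "<UNK>": 4}

def to_seq(sentence, vocab_to_int):
    # Lowercase first, then map each word, defaulting to <UNK> when absent.
    return [vocab_to_int.get(w, vocab_to_int["<UNK>"])
            for w in sentence.lower().split()]

seq = to_seq("He saw a YELLOW truck", vocab_to_int)
print(seq)  # [0, 1, 2, 4, 3]  ("yellow" is out of vocabulary)
```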
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
7,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What subsets of scientific questions tend to be answered correctly by the same subjects?
Mining
Step1: Search for interesting rules
Interesting rules are more likely to be the ones with the highest confidence, the highest lift, or a bigger consequent set. Pairs can also be especially interesting. | Python Code:
from orangecontrib.associate.fpgrowth import *
import pandas as pd
from numpy import *
questions = correctedScientific.columns
correctedScientificText = [[] for _ in range(correctedScientific.shape[0])]
for q in questions:
for index in range(correctedScientific.shape[0]):
r = correctedScientific.index[index]
if correctedScientific.loc[r, q]:
correctedScientificText[index].append(q)
#correctedScientificText
len(correctedScientificText)
# Get frequent itemsets with support > 25%
# run time < 1 min
support = 0.20
itemsets = frequent_itemsets(correctedScientificText, math.floor(len(correctedScientificText) * support))
#dict(itemsets)
# Generate rules according to confidence, confidence > 85 %
# run time < 5 min
confidence = 0.80
rules = association_rules(dict(itemsets), confidence)
#list(rules)
# Transform rules generator into a Dataframe
rulesDataframe = pd.DataFrame([(ant, cons, supp, conf) for ant, cons, supp, conf in rules])
rulesDataframe.rename(columns = {0:"antecedants", 1:"consequents", 2:"support", 3:"confidence"}, inplace=True)
rulesDataframe.head()
# Save the mined rules to file
rulesDataframe.to_csv("results/associationRulesMiningSupport"+str(support)+"percentsConfidence"+str(confidence)+"percents.csv")
Explanation: What subsets of scientific questions tend to be answered correctly by the same subjects?
Mining
End of explanation
# Sort rules by confidence
confidenceSortedRules = rulesDataframe.sort_values(by = ["confidence", "support"], ascending=[False, False])
confidenceSortedRules.head(50)
# Sort rules by size of consequent set
rulesDataframe["consequentSize"] = rulesDataframe["consequents"].apply(lambda x: len(x))
consequentSortedRules = rulesDataframe.sort_values(by = ["consequentSize", "confidence", "support"], ascending=[False, False, False])
consequentSortedRules.head(50)
# Select only pairs (rules with antecedent and consequent of size one)
# Sort pairs according to confidence
rulesDataframe["fusedRule"] = rulesDataframe[["antecedants", "consequents"]].apply(lambda x: frozenset().union(*x), axis=1)
rulesDataframe["ruleSize"] = rulesDataframe["fusedRule"].apply(lambda x: len(x))
pairRules = rulesDataframe.sort_values(by=["ruleSize", "confidence", "support"], ascending=[True, False, False])
pairRules.head(30)
correctedScientific.columns
# Sort questions by number of appearances in consequents
for q in scientificQuestions:
rulesDataframe[q+"c"] = rulesDataframe["consequents"].apply(lambda x: 1 if q in x else 0)
occurenceInConsequents = rulesDataframe.loc[:,scientificQuestions[0]+"c":scientificQuestions[-1]+"c"].sum(axis=0)
occurenceInConsequents.sort_values(inplace=True, ascending=False)
occurenceInConsequents
# Sort questions by number of appearances in antecedants
for q in scientificQuestions:
rulesDataframe[q+"a"] = rulesDataframe["antecedants"].apply(lambda x: 1 if q in x else 0)
occurenceInAntecedants = rulesDataframe.loc[:,scientificQuestions[0]+"a":scientificQuestions[-1]+"a"].sum(axis=0)
occurenceInAntecedants.sort_values(inplace=True, ascending=False)
occurenceInAntecedants
sortedPrePostProgression = pd.read_csv("../../data/sortedPrePostProgression.csv")
sortedPrePostProgression.index = sortedPrePostProgression.iloc[:,0]
sortedPrePostProgression = sortedPrePostProgression.drop(sortedPrePostProgression.columns[0], axis = 1)
sortedPrePostProgression.index.name = None  # clear the index name
sortedPrePostProgression.loc['occ_ant',:] = 0
sortedPrePostProgression.loc['occ_csq',:] = 0
sortedPrePostProgression
for questionA, occsA in enumerate(occurenceInAntecedants):
questionVariableName = occurenceInAntecedants.index[questionA][:-1]
question = globals()[questionVariableName]
questionC = questionVariableName + "c"
sortedPrePostProgression.loc['occ_ant',question] = occsA
occsC = occurenceInConsequents.loc[questionC]
sortedPrePostProgression.loc['occ_csq',question] = occsC
#print(questionVariableName+"='"+question+"'")
#print("\t"+questionVariableName+"a="+str(occsA)+","+questionC+"="+str(occsC))
#print()
sortedPrePostProgression.T
Explanation: Search for interesting rules
Interesting rules tend to be those with the highest confidence, the highest lift, or a larger consequent set. Pairs can also be especially interesting
End of explanation |
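For reference, the measures mentioned above can all be computed directly from transaction counts. A small sketch with made-up counts (not taken from this dataset):

```python
# Made-up counts for a rule A -> B over n transactions.
n = 200
count_a, count_b, count_ab = 80, 100, 60

support_ab = count_ab / n            # 0.3: fraction of transactions with both A and B
confidence = count_ab / count_a      # 0.75: P(B | A)
lift = confidence / (count_b / n)    # 0.75 / 0.5 = 1.5; > 1 means A and B co-occur
                                     # more often than if they were independent
print(support_ab, confidence, lift)
```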
7,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
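The shift-by-one relationship between `x` and `y` can be sanity-checked on a toy array (the numbers here are arbitrary, not from the book data):

```python
import numpy as np

x = np.array([[0, 1, 2, 3, 4]])
y = np.zeros_like(x)
# Targets are the inputs shifted one step left, with the first input
# character wrapped around into the last target position.
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
print(y)  # [[1 2 3 4 0]]
```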
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
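The reshape described above is easy to check in isolation. A toy numpy sketch (the sizes are arbitrary):

```python
import numpy as np

N, M, L = 2, 3, 4                        # batch size, steps, LSTM units
outputs = np.arange(N * M * L).reshape(N, M, L)

# Flatten (N, M, L) -> (N*M, L): one row per step of each sequence,
# ready for the shared fully connected layer.
flat = outputs.reshape(-1, L)
print(flat.shape)  # (6, 4)
```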
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
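The one-hot-and-reshape step can be sketched in plain numpy (toy class count and targets, not the notebook's vocabulary):

```python
import numpy as np

C = 4                                  # toy number of character classes
targets = np.array([[0, 2], [3, 1]])   # shape (N, M) of encoded characters

# Index an identity matrix to one-hot encode, then flatten to (N*M, C)
# so the rows line up with the reshaped logits.
one_hot = np.eye(C)[targets]           # shape (2, 2, 4)
reshaped = one_hot.reshape(-1, C)
print(reshaped.shape)  # (4, 4)
```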
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
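Global-norm clipping itself is a small computation. A numpy sketch of the idea behind tf.clip_by_global_norm (toy gradients, not the network's):

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Scale every gradient by clip_norm / max(global_norm, clip_norm),
    # so nothing changes unless the global norm exceeds the threshold.
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([0.0, 12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, 5.0)
print(norm)  # 13.0
```

After clipping, the combined norm of the gradients is exactly the threshold.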
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
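That monitoring advice can be phrased as a tiny helper. A sketch only — the gap threshold here is made up, purely illustrative:

```python
def diagnose(train_loss, valid_loss, gap=0.2):
    # Rough rules of thumb from the advice above.
    if valid_loss - train_loss > gap:
        return "possible overfitting: shrink the network or increase dropout"
    return "losses are close: consider a larger model if both are still high"

print(diagnose(1.1, 1.6))  # possible overfitting: ...
```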
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
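One common way to implement that top-N trick is to zero out everything but the N largest probabilities and renormalize before sampling. A sketch with a toy distribution (the real code would use the network's softmax output):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Zero out all but the top_n probabilities, renormalize, then sample.
    p = np.squeeze(preds).copy()
    p[np.argsort(p)[:-top_n]] = 0.0
    p = p / np.sum(p)
    return np.random.choice(vocab_size, 1, p=p)[0]

preds = np.array([[0.05, 0.4, 0.3, 0.2, 0.05]])
c = pick_top_n(preds, 5, top_n=2)
print(c)  # always 1 or 2: only the two largest probabilities survive
```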
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
encoded[:100]
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
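To make the parameters-vs-data comparison concrete, here is a rough back-of-the-envelope count. This is a sketch, not TensorFlow's exact bookkeeping, and the lstm_size=512, num_layers=2, vocab_size=83 values are only illustrative:

```python
# Rough LSTM parameter count: each layer has 4 gates, and each gate has a
# weight matrix over [input, hidden] plus a bias vector.
def lstm_param_count(lstm_size, num_layers, vocab_size):
    total = 0
    input_size = vocab_size  # one-hot characters feed the first layer
    for _ in range(num_layers):
        total += 4 * ((input_size + lstm_size) * lstm_size + lstm_size)
        input_size = lstm_size  # deeper layers read the previous hidden state
    # final softmax projection back onto the vocabulary
    total += lstm_size * vocab_size + vocab_size
    return total

params = lstm_param_count(lstm_size=512, num_layers=2, vocab_size=83)
print(params)  # ~3.4M parameters, the same order as a 3-4MB text corpus
```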
End of explanation
# batch_size, num_steps, lstm_size, num_layers, learning_rate and keep_prob
# are set in an earlier hyperparameter cell of the notebook
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
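For example, the format string expands as follows (iteration 200, 512 hidden units):

```python
# The checkpoint name encodes the training iteration and the LSTM size.
name = "checkpoints/i{}_l{}.ckpt".format(200, 512)
print(name)  # checkpoints/i200_l512.ckpt
```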
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
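Numerically, the trick zeroes everything outside the N most likely characters and renormalizes. A standalone illustration with made-up probabilities (N=2):

```python
import numpy as np

preds = np.array([[0.5, 0.3, 0.1, 0.06, 0.04]])  # fake softmax output
p = np.squeeze(preds).copy()   # copy so the original preds are untouched
p[np.argsort(p)[:-2]] = 0      # keep only the top 2 characters
p = p / np.sum(p)              # renormalize
print(p)                       # [0.625 0.375 0.    0.    0.   ]
```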
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
7,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Challenge 1
1. Understand the Twitter Streaming API
See slides 14 through 21.
Just as we did with the Twitter REST API, we have to store the access keys and define the OAuthHandler object to handle authentication and access validation.
Step1: 2. Create a class inheriting the attributes of the StreamListener class
Step2: 3. Instantiate this class and use it to connect to the API through a Stream object.
Step3: 4. Start a stream (Stream)
The track parameter accepts a list as an argument, and each element of the list is a term that will be filtered.
To stop the code execution, you will need to click Kernel -> Interrupt or the stop button, represented by a square.
<span style="color | Python Code:
import tweepy
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
autorizar = tweepy.OAuthHandler(consumer_key, consumer_secret)
autorizar.set_access_token(access_token, access_token_secret)
Explanation: Challenge 1
1. Understand the Twitter Streaming API
See slides 14 through 21.
Just as we did with the Twitter REST API, we have to store the access keys and define the OAuthHandler object to handle authentication and access validation.
End of explanation
class DadosPublicosTwitter(tweepy.StreamListener):
def on_status(self, dados):
print("---> ", dados.text)
Explanation: 2. Create a class inheriting the attributes of the StreamListener class
End of explanation
dados_twitter = DadosPublicosTwitter()
fluxo = tweepy.Stream(autorizar, dados_twitter)
Explanation: 3. Instantiate this class and use it to connect to the API through a Stream object.
End of explanation
fluxo.filter(track=['Big Data'])
Explanation: 4. Start a stream (Stream)
The track parameter accepts a list as an argument, and each element of the list is a term that will be filtered.
To stop the code execution, you will need to click Kernel -> Interrupt or the stop button, represented by a square.
<span style="color:red;">An error will be raised:</span> Probably KeyboardInterrupt.
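As an alternative to interrupting the kernel by hand, a listener can stop itself. Assumption: with the pre-4.0 tweepy StreamListener API used here, returning False from on_status disconnects the stream. A stand-in base class keeps this sketch runnable without credentials:

```python
class StreamListenerStub(object):  # stands in for tweepy.StreamListener
    pass

class LimitedListener(StreamListenerStub):
    def __init__(self, limit=3):
        self.count = 0
        self.limit = limit

    def on_status(self, status):
        self.count += 1
        return self.count < self.limit  # False tells tweepy to disconnect

listener = LimitedListener(limit=3)
print([listener.on_status(t) for t in ["t1", "t2", "t3"]])  # [True, True, False]
```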
End of explanation |
7,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas II - Working with DataFrames
Step1: We'll be using the MovieLens dataset in many examples going forward. The dataset contains 100,000 ratings made by 943 users on 1,682 movies.
Step2: Summary
Inspect<br>
a) .dtype<br>
b) .describe()<br>
c) .head(), .tail(), [i
Step3: b) .describe()
Use the .describe() method to see the basic statistics about the DataFrame's numeric columns. Be careful though, since this will return information on all columns of a numeric datatype.
Step4: Notice user_id was included since it's numeric. Since this is an ID value, the stats for it don't really matter.
We can quickly see the average age of our users is just above 34 years old, with the youngest being 7 and the oldest being 73. The median age is 31, with the youngest quartile of users being 25 or younger, and the oldest quartile being at least 43.
c) .head(), tail(), [i
Step5: 2. Select
a) Column Selection
You can think of a DataFrame as a group of Series (ie
Step6: Multiple columns selection<br>
To select multiple columns, simply pass a list of column names to the DataFrame, the output of which will be a DataFrame.
Step7: b) Row Selection
Row selection can be done multiple ways, but using boolean indexing or individual index .ix() are typically easiest.
Boolean Indexing
Step8: .ix() method
Until you change the indexing of a DataFrame to a specific column, pandas uses its default 0-based index.<br>
Use .ix() method for row selection based on the new index.
Let's set the index to the user_id using the .set_index() method.<br>
NB
Step9: Use the .reset_index() method to reset the default index (the same rule applies for inplace).
Step10: 3. Sort
a) .sort() for DataFrames
Use .sort() method to sort DataFrames. Returns a new instance of a Dataframe. (See DOC)
column
Step11: b) .order() for Series
Use .order() method to sort Series. Returns a new instance of a Series.
ascending (True)
Step12: 4. Operations
a) Descriptive Stats
Pandas provides a large number of methods for computing descriptive statistics and other related operations on Series, DataFrame, and Panel. For DataFrames these methods take an axis argument
Step13: d) Histograms
Use .value_counts() Series method to return the counts of unique values (ie frequency). (See DOC)
Step14: 5. Split-Apply-Combine
Use .groupby() method to execute the split-apply-combine strategy for data analysis
Step15: Since the data contains a '$' sign for each salary, python will treat the field as a series of strings. We can use the converters parameter to change this when reading in the file.
converters = Dict of functions for converting values in certain columns. Keys can either be integers or column labels
Step16: What departments have the most number of distinct title positions ?
Split DataFrame into groups by department, keep only the title column => SeriesGroupBy
Apply .nunique() method
(Combine into a Series)
Order the resulting Series (NB
Step17: What department pays best on average ?
Split DataFrame into groups by department => DataFrameGroupBy
Apply .mean() method
(Combine into DataFrame)
Sort resulting DataFrame according to the salary (NB
Step18: Who is the highest paid employee of each department ?
Split DataFrame into groups by department, keep only the salary column => SeriesGroupBy
Apply .rank() method
(Combine into a Series)
Assign the resulting Series to a new column of the DataFrame
Sort DataFrame according to salary (NB | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('max_columns', 50)
Explanation: Pandas II - Working with DataFrames
End of explanation
# pass in column names for each CSV
u_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']
df_users = pd.read_csv('data/MovieLens-100k/u.user', sep='|', names=u_cols)
r_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']
df_ratings = pd.read_csv('data/MovieLens-100k/u.data', sep='\t', names=r_cols)
m_cols = ['movie_id', 'title', 'release_date', 'video_release_date', 'imdb_url']
df_movies = pd.read_csv('data/MovieLens-100k/u.item', sep='|', names=m_cols, usecols=range(5))# only load the first five columns
Explanation: We'll be using the MovieLens dataset in many examples going forward. The dataset contains 100,000 ratings made by 943 users on 1,682 movies.
End of explanation
print df_movies.dtypes,'\n'
print df_users.dtypes,'\n'
print df_ratings.dtypes,'\n'
Explanation: Summary
Inspect<br>
a) .dtype<br>
b) .describe()<br>
c) .head(), .tail(), [i:j]
Select<br>
a) Column Selection<br>
b) Row Selection<br>
Sort<br>
a) .sort() for DataFrames<br>
b) .order() for Series<br>
Operations<br>
a) Descriptive Stats<br>
b) Apply<br>
c) Bins<br>
d) Histograms<br>
Split-Apply-Combine
Other<br>
a) Rename columns<br>
b) Missing values<br>
1. Inspect
Pandas has a variety of functions for getting basic information about your DataFrame.<br>
The most basic of which is calling your DataFrame by name. The output tells a few things about our DataFrame.
It's an instance of a DataFrame.
Each row is assigned an index of 0 to N-1, where N is the number of rows in the DataFrame. (The index can also be set arbitrarily.)
There are 1,682 rows (every row must have an index).
Our dataset has five total columns, one of which isn't populated at all (video_release_date) and two that are missing some values (release_date and imdb_url).
a) .dtypes
Use the .dtypes attribute to get the datatype for each column.
End of explanation
df_users.describe()
Explanation: b) .describe()
Use the .describe() method to see the basic statistics about the DataFrame's numeric columns. Be careful though, since this will return information on all columns of a numeric datatype.
End of explanation
print df_users.head()
print df_users.tail(3)
print df_users[20:22]
Explanation: Notice user_id was included since it's numeric. Since this is an ID value, the stats for it don't really matter.
We can quickly see the average age of our users is just above 34 years old, with the youngest being 7 and the oldest being 73. The median age is 31, with the youngest quartile of users being 25 or younger, and the oldest quartile being at least 43.
c) .head(), tail(), [i:j]
By default, .head() displays the first five records of the DataFrame, while .tail() displays the last five.<br>
Alternatively, Python's regular slicing [i:j] syntax works as well.
End of explanation
df_users['occupation'].head()
Explanation: 2. Select
a) Column Selection
You can think of a DataFrame as a group of Series (ie: rows) that share an index (ie: column headers). This makes it easy to select specific columns.
Single column selection<br>
Selecting a single column from the DataFrame will return a Series object.
End of explanation
list_of_cols = ['occupation', 'sex']
print df_users[list_of_cols].head()
Explanation: Multiple columns selection<br>
To select multiple columns, simply pass a list of column names to the DataFrame, the output of which will be a DataFrame.
End of explanation
# users older than 25
print df_users[df_users.age > 25].head(3), '\n'
# users aged 40 AND male
print df_users[(df_users.age == 40) & (df_users.sex == 'M')].head(3), '\n'
# users younger than 30 OR female
print df_users[(df_users.sex == 'F') | (df_users.age < 30)].head(3)
Explanation: b) Row Selection
Row selection can be done multiple ways, but using boolean indexing or individual index .ix() are typically easiest.
Boolean Indexing
End of explanation
# Change index column (new DataFrame)
new_df_users = df_users.set_index('user_id')
print new_df_users.head(3)
# Change index column (inplace)
df_users.set_index('user_id', inplace=True)
print df_users.head(3)
# Select users using their respective user_id
print df_users.ix[99], '\n'
print df_users.ix[[1, 50, 300]]
Explanation: .ix() method
Until you change the indexing of a DataFrame to a specific column, pandas uses its default 0-based index.<br>
Use .ix() method for row selection based on the new index.
Let's set the index to the user_id using the .set_index() method.<br>
NB: By default, .set_index() returns a new DataFrame, so you'll have to specify if you'd like the changes to occur in place.
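A side note, assuming a newer pandas where .ix has since been deprecated in favour of label-based .loc: the same selections look like this on a tiny stand-in frame:

```python
import pandas as pd

# tiny stand-in for df_users, indexed by user_id
users = pd.DataFrame({"age": [24, 53, 23]},
                     index=pd.Index([1, 2, 3], name="user_id"))
row = users.loc[2]          # same idea as df_users.ix[2] above
rows = users.loc[[1, 3]]    # multiple labels, like df_users.ix[[1, 3]]
print(row["age"], rows["age"].tolist())
```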
End of explanation
df_users.reset_index(inplace=True)
print df_users.head()
Explanation: Use the .reset_index() method to reset the default index (the same rule applies for inplace).
End of explanation
# Oldest technicians
df_users.sort('age', ascending=False, inplace=True)
print df_users[df_users.occupation == "technician"][:5]
Explanation: 3. Sort
a) .sort() for DataFrames
Use .sort() method to sort DataFrames. Returns a new instance of a Dataframe. (See DOC)
column : column name to base the sorting on (list for nested sorting / tuple for multi-index sorting)
ascending (True) : sort ascending vs. descending (specify list for multiple sort orders)
inplace (False): result is a new instance of DataFrame
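A runnable nested-sort illustration on a toy frame (note: newer pandas renames .sort() to .sort_values(), with the same column/ascending/inplace arguments):

```python
import pandas as pd

df = pd.DataFrame({"occupation": ["tech", "tech", "doctor"],
                   "age": [30, 50, 40]})
# nested sort: occupation ascending, then age descending within occupation
out = df.sort_values(["occupation", "age"], ascending=[True, False])
print(out["age"].tolist())  # [40, 50, 30]
```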
End of explanation
print df_users.zip_code.order()[:3]
Explanation: b) .order() for Series
Use .order() method to sort Series. Returns a new instance of a Series.
ascending (True) : sort ascending vs. descending (specify list for multiple sort orders)
inplace (False): result is a new instance of Series
End of explanation
labels = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79']
bins = range(0, 81, 10) # [0, 10, 20, 30, 40, 50, 60, 70, 80]
df_users['age_group'] = pd.cut(df_users.age, bins, right=False, labels=labels)
print df_users[27:31] # preview of age bin
Explanation: 4. Operations
a) Descriptive Stats
Pandas provides a large number of methods for computing descriptive statistics and other related operations on Series, DataFrame, and Panel. For DataFrames these methods take an axis argument:
axis=0 : compute over indexes
axis=1 : compute over columns
Most methods produce a lower-dimensional result (aka aggregate functions) :
- .count(): number of NOT NULL values
- .nunique(): number of unique NOT NULL values
- .size() : number of values
- .min(): minimum
- .max(): maximum
- .sum(): sum of values
- .prod(): product of values
- .median(): arithmetic median of values
- .quantile(): sample quantile (value at %)
- .mean(): mean of values
- .std(): unbiased standard deviation
- .var(): unbiased variance
- .mad(): mean absolute deviation
- .sem(): unbiased standard error of the mean
- .skew(): unbiased skewness (3rd moment)
- .kurt(): unbiased kurtosis (4th moment)
Some methods produce an object of the same size :
- .rank(): compute data rank (1 through n)
- .mode(): mode
- .abs(): absolute value
- .cumsum(): cumulative sum
- .cumprod(): cumulative product
- .cummax(): cumulative maximum
- .cummin(): cumulative minimum
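A two-line illustration of the axis argument on a toy frame:

```python
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [10, 20]})
col_sums = df.sum(axis=0)   # over the index: x -> 3, y -> 30
row_sums = df.sum(axis=1)   # over the columns: 11, 22
print(col_sums.tolist(), row_sums.tolist())
```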
b) Apply
To apply your own or another library’s functions to pandas objects, you should be aware of the three methods below. The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or Series, row- or column-wise, or elementwise.
Tablewise Function Application: .pipe()
Row or Column-wise Function Application: .apply()
Elementwise function application: .applymap() or .map()
.pipe()
Use .pipe() for method chaining over a DataFrame. (See DOC)<br>
The following two are equivalent :
- f(g(h(df), arg1=1), arg2=2, arg3=3)
- df.pipe(h).pipe(g, arg1=1).pipe(f, arg2=2, arg3=3)
The pipe method is inspired by unix pipes and more recently dplyr and magrittr, which have introduced the popular (%>%) (read pipe) operator for R.
.apply()
Use .apply() to apply a function along the axes of a DataFrame, like the descriptive statistics methods. (See DOC)<br>
- df.apply(np.mean, axis=1)
- df.apply(lambda x: x.max() - x.min())
.applymap() / .map()
Use .applymap() on DataFrame or .map() on Series to operate elementwise.<br>
The vectorized function must take a single value and return a single value.(See DOC)<br>
- df.applymap(lambda x: len(str(x)))
- df['colA'].map(lambda x: len(str(x)))
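The difference between column-wise .apply() and elementwise application, on a toy frame (newer pandas renames DataFrame.applymap to DataFrame.map, so this sketch picks whichever exists):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 4], "b": [9, 16]})
col_range = df.apply(lambda col: col.max() - col.min())  # per column: 3, 7
elementwise = df.applymap if hasattr(df, "applymap") else df.map
lengths = elementwise(lambda x: len(str(x)))             # digits per element
print(col_range.tolist(), lengths.values.tolist())
```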
c) Bins
Use pandas.cut() static method to bin numeric values into groups. Useful for discretization. (DOC)
pandas.cut(x, bins) returns an array of the indices (or labels) of the half-open bins to which each value of x belongs.
x : array of values to be binned
bins : sequence defining the bin edges
right (True): boolean indicating whether the bins include the rightmost edge or not ([a,b] or [a,b[)
labels (None): array used as labels for the resulting bins
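The age_group example above in miniature, showing the right=False half-open bins:

```python
import pandas as pd

ages = pd.Series([7, 25, 30, 73])
labels = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79']
groups = pd.cut(ages, bins=list(range(0, 81, 10)), right=False, labels=labels)
print(list(groups.astype(str)))  # ['0-9', '20-29', '30-39', '70-79']
```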
End of explanation
df_users['occupation'].value_counts().head()
Explanation: d) Histograms
Use .value_counts() Series method to return the counts of unique values (ie frequency). (See DOC)
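For instance:

```python
import pandas as pd

s = pd.Series(["student", "student", "engineer"])
print(s.value_counts().to_dict())  # {'student': 2, 'engineer': 1}
```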
End of explanation
!head -n 3 data/city-of-chicago-salaries.csv
Explanation: 5. Split-Apply-Combine
Use .groupby() method to execute the split-apply-combine strategy for data analysis :
1. Split the DataFrame into groups based on some criteria (DataFrameGroupBy or SeriesGroupBy)
2. Apply a function to each group independently
3. Combine the results into a data structure (DataFrame or Series)
DataFrameGroupBy/SeriesGroupBy Methods (See Doc)
- .apply(): apply your own or another library's function or list of functions
- .agg(): aggregate using input function or dict of {column: function}
- .transform(): transform
- .filter(): return a copy of a DataFrame excluding elements from groups
<br>
In the apply step, we might wish to do one of the following:
- Aggregation: computing a summary statistic (or statistics) about each group. Some examples:
- Compute group columns sums and means :
- gby.agg([np.sum, np.mean])
- Compute group sizes and counts :
- gby.agg([np.size, np.mean])
- Transformation: perform some group-specific computations on every data point. Some examples:
- Standardizing data (zscore) within group :
- gby.transform(lambda x: (x - x.mean()) / x.std())
- Filling NAs within groups with a value derived from each group
- gby.fillna(x.mean())
- Filtration: discard some groups, according to a group-wise computation that evaluates True or False. Some examples:
- Discarding data that belongs to groups with only a few members :
- gby.filter(lambda x: x.size() > 100)
- Discarding data based on the group sum or mean
- gby.filter(lambda x: x['A'].sum() + x['B'].sum() > 0)
- Discarding data for missing data
- gby.dropna(axis=0)
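All three steps in a self-contained miniature (made-up data):

```python
import pandas as pd

df = pd.DataFrame({"dept": ["A", "A", "B"],
                   "salary": [10.0, 20.0, 40.0]})
means = df.groupby("dept")["salary"].mean()  # split by dept, apply mean, combine
well_paid = df.groupby("dept").filter(lambda g: g["salary"].mean() > 20)
print(means.to_dict(), well_paid["dept"].tolist())
```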
City of Chicago salaries
The City of Chicago is kind enough to publish all city employee salaries to its open data portal. Let's go through some basic groupby examples using this data.
End of explanation
headers = ['name', 'title', 'department', 'salary']
df_chicago = pd.read_csv('data/city-of-chicago-salaries.csv',
header=False,
names=headers,
converters={'salary': lambda x: float(x.replace('$', ''))})
print df_chicago.head()
print df_chicago.groupby('department').count().head(3), '\n' # NOT NULL records within each column
print df_chicago.groupby('department').size().head(3) # total records for each department
print df_chicago.groupby('department').agg({'salary': [np.size, np.mean]}).head()
Explanation: Since the data contains a '$' sign for each salary, python will treat the field as a series of strings. We can use the converters parameter to change this when reading in the file.
converters = Dict of functions for converting values in certain columns. Keys can either be integers or column labels
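The converter runs on every cell of that column while parsing. A minimal sketch with an in-memory CSV:

```python
import io

import pandas as pd

csv = io.StringIO("name,salary\nA,$100.50\nB,$80.00\n")
df = pd.read_csv(csv, converters={"salary": lambda x: float(x.replace("$", ""))})
print(df["salary"].tolist())  # [100.5, 80.0]
```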
End of explanation
print df_chicago.groupby('department').title.nunique().order(ascending=False)[:3]
Explanation: What departments have the most number of distinct title positions ?
Split DataFrame into groups by department, keep only the title column => SeriesGroupBy
Apply .nunique() method
(Combine into a Series)
Order the resulting Series (NB: .order() is for Series, .sort() is for DataFrames)
End of explanation
print df_chicago.groupby('department').mean().sort('salary', ascending=False).head()
print df_chicago.groupby('department').agg({'salary': [np.size, np.mean]}).sort(('salary', 'mean'), ascending=False).head()
Explanation: What department pays best on average ?
Split DataFrame into groups by department => DataFrameGroupBy
Apply .mean() method
(Combine into DataFrame)
Sort resulting DataFrame according to the salary (NB: .order() is for Series, .sort() is for DataFrames)
End of explanation
df_chicago['dept_rank'] = df_chicago.groupby('department')['salary'].rank(method='first', ascending=False)
df_chicago.sort('salary', ascending=False, inplace=True)
print df_chicago[df_chicago['dept_rank'] == 1].head()
print df_chicago[df_chicago['department'] == 'MAYOR\'S OFFICE'].tail(10)
Explanation: Who is the highest paid employee of each department ?
Split DataFrame into groups by department, keep only the salary column => SeriesGroupBy
Apply .rank() method
(Combine into a Series)
Assign the resulting Series to a new column of the DataFrame
Sort DataFrame according to salary (NB: .order() is for Series, .sort() is for DataFrames)
Display only first rankers
For the .rank() method, use attributes:
- ascending=False : to rank high (1) to low (N)
- method='first' : so that equally high paid people within a department don't get the same rank.
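How method='first' breaks ties, on a toy salary Series:

```python
import pandas as pd

sal = pd.Series([100.0, 90.0, 100.0])
ranks = sal.rank(method="first", ascending=False)
print(ranks.tolist())  # [1.0, 3.0, 2.0] -- the first 100.0 wins the tie
```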
End of explanation |
7,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from
Step1: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the
Step2: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
Step3: The results of these mass univariate analyses can be visualised by plotting | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import mne
from mne.channels import find_ch_adjacency, make_1020_channel_selections
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long_words = epochs[name + " > " + median_value]
short_words = epochs[name + " < " + median_value]
Explanation: Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from :footcite:DufauEtAl2015; we contrast long vs.
short words. TFCE is described in :footcite:SmithNichols2009.
End of explanation
time_windows = ((.2, .25), (.35, .45))
elecs = ["Fz", "Cz", "Pz"]
index = ['condition', 'epoch', 'time']
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame(index=index)[elecs].head())
report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index)
short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index)
for elec in elecs:
# extract data
A = long_df[elec].groupby("condition").mean()
B = short_df[elec].groupby("condition").mean()
# conduct t test
t, p = ttest_ind(A, B)
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
Explanation: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the :class:mne.Epochs object has a convenient
:meth:mne.Epochs.to_data_frame method, which returns a dataframe.
This dataframe can then be queried for specific time windows and sensors.
The extracted data can be submitted to standard statistical tests. Here,
we conduct t-tests on the difference between long and short words.
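The same two-sample t-test on synthetic amplitudes, just to show the scipy call used above in isolation:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.RandomState(0)
a = rng.normal(1.0, 1.0, 200)   # e.g. mean amplitudes, condition A
b = rng.normal(0.0, 1.0, 200)   # condition B
t, p = ttest_ind(a, b)
print(t, p)  # a large |t| with a tiny p rejects equal means
```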
End of explanation
# Calculate adjacency matrix between sensors from their locations
adjacency, _ = find_ch_adjacency(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long_words.get_data().transpose(0, 2, 1),
short_words.get_data().transpose(0, 2, 1)]
tfce = dict(start=.2, step=.2)
# Calculate statistical thresholds
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, adjacency=adjacency,
n_permutations=100) # a more standard number would be 1000+
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
Explanation: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
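The permutation idea underneath, hugely simplified relative to MNE's spatio-temporal clustering: shuffle the condition labels many times to build a null distribution for the observed statistic (synthetic data):

```python
import numpy as np

rng = np.random.RandomState(42)
a = rng.normal(0.5, 1.0, 100)      # synthetic condition A
b = rng.normal(0.0, 1.0, 100)      # synthetic condition B
obs = a.mean() - b.mean()
pooled = np.concatenate([a, b])
null = np.empty(1000)
for i in range(1000):
    perm = rng.permutation(pooled)              # shuffle condition labels
    null[i] = perm[:100].mean() - perm[100:].mean()
p = (np.sum(np.abs(null) >= abs(obs)) + 1.0) / (1000 + 1)
print(obs, p)
```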
End of explanation
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), short_words.average()],
weights=[1, -1]) # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
mask=significant_points, show_names="all", titles=None,
**time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
label="µV")
plt.show()
Explanation: The results of these mass univariate analyses can be visualised by plotting
:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)
and masking points for significance.
Here, we group channels by Regions of Interest to facilitate localising
effects on the head.
End of explanation |
7,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Gathering system data - Python for System Administrators
Goals
Step6: Parsing /proc
Linux /proc filesystem is a cool place to get data
In the next example we'll see how to get
Step8: zip on py3 is a generator | Python Code:
import psutil
import glob
import sys
import subprocess
#
# Our code is p3-ready
#
from __future__ import print_function, unicode_literals
def grep(needle, fpath):
    """A simple grep implementation.

    goal: open() is iterable and doesn't need splitlines()
    goal: comprehension can filter lists
    """
return [x for x in open(fpath) if needle in x]
# Do we have localhost?
grep("localhost", "/etc/hosts")
#The psutil module is very nice
import psutil
#Works on Windows, Linux and MacOS
psutil.cpu_percent()
#And its output is very easy to manage
ret = psutil.disk_io_counters()
print(ret)
# Exercise: Which other information
# does psutil provide?
# Use this cell and the tab-completion jupyter functionalities.
# Exercise
def multiplatform_vmstat(count):
# Write a vmstat-like function printing every second:
# - cpu usage%
# - bytes read and written in the given interval
# Hint: use psutil and time.sleep(1)
# Hint: use this cell or try on ipython and *then* write the function
# using %edit vmstat.py
for i in range(count):
raise NotImplementedError
print(cpu_usage, bytes_rw)
multiplatform_vmstat(5)
%load course/multiplatform_vmstat.py
# Run your vmstat implementation.
multiplatform_vmstat(5)
#
# subprocess
#
# The check_output function returns the command stdout
from subprocess import check_output
# It takes a *list* as an argument!
out = check_output("ping -w1 -c1 www.google.com".split())
# and returns a string
print(out)
# If you want to stream command output, use subprocess.Popen
# and check carefully subprocess documentation!
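A minimal Popen streaming sketch (assumes a POSIX echo binary on the PATH):

```python
from subprocess import Popen, PIPE

proc = Popen(["echo", "hello"], stdout=PIPE)
lines = [line for line in proc.stdout]   # iterate stdout line by line
proc.wait()
print(lines)  # [b'hello\n']
```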
def sh(cmd, shell=False, timeout=0):
"Returns an iterable output of a command string
checking...
from sys import version_info as python_version
if python_version < (3, 3): # ..before using..
if timeout:
raise ValueError("Timeout not supported until Python 3.3")
output = check_output(cmd.split(), shell=shell)
else:
output = check_output(cmd.split(), shell=shell, timeout=timeout)
return output.splitlines()
# Exercise:
# implement a multiplatform pgrep-like function.
def ppgrep(program):
    """A multiplatform pgrep-like function.

    Prints a list of processes executing 'program'
    @param program - eg firefox, explorer.exe

    Hint: use subprocess, os and list-comprehension
    eg. items = [x for x in a_list if 'firefox' in x]
    """
raise NotImplementedError
%load course/pgrep.py
Explanation: Gathering system data - Python for System Administrators
Goals:
- Gathering System Data with multiplatform and platform-dependent tools
- Get infos from files, /proc, /sys
- Capture command output
- Use psutil to get IO, CPU and memory data
- Parse files with a strategy
Non-goals for this lesson:
- use with, yield or pipes
Modules
End of explanation
# Parsing /proc - 1
def linux_threads(pid):
    """Retrieving data from /proc."""
from glob import glob
# glob emulates shell expansion of * and ?
path = "/proc/{}/task/*/status".format(pid)
# pick a set of fields to gather
t_info = ('Pid', 'Tgid', 'voluntary') # this is a tuple!
for t in glob(path):
        # ... and use comprehension to get
        # interesting data.
t_info = [x
for x in open(t)
if x.startswith(t_info)] # startswith accepts tuples!
print(t_info)
# If you're on linux try linux_threads
pid_of_init = 1 # or systemd ?
linux_threads(pid_of_init)
# On linux /proc/diskstats is the source of I/O infos
disk_l = grep("sda", "/proc/diskstats")
print(''.join(disk_l))
# To gather that data we put the header in a multiline string
from course import diskstats_headers as headers
print(*headers, sep='\n')
#Take the 1st entry (sda), split the data...
disk_info = disk_l[0].split()
# ... and tie them with the header
ret = zip(headers, disk_info)
# On py3 we need to iterate over the generators
print(list(ret))
# Try to mangle ret
print('\n'.join(str(x) for x in ret))
# Exercise: transform ret in a dict.
# We can create a reusable commodity class with
from collections import namedtuple
# using the imported `headers` as attributes
# like the one provided by psutil
DiskStats = namedtuple('DiskStat', headers)
# ... and disk_info as values
dstat = DiskStats(*disk_info)
print(dstat.device, dstat.writes_ms)
# Homework: check further features with
# help(collections)
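One of those further features, sketched briefly: a namedtuple converts back to a mapping via _asdict().

```python
from collections import namedtuple

Point = namedtuple("Point", "x y")
p = Point(1, 2)
d = p._asdict()          # mapping of field names to values
print(d["x"], d["y"])    # -> 1 2
```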
# Exercise
# Write the following function
def linux_diskstats(partition):
    """Print I/O information from /proc/diskstats every second.

    @param: partition - eg sda1 or vdx1
    Hint: use the above `grep` function
    Hint: use zip, time.sleep, print() and *magic
    """
diskstats_headers = ('reads reads_merged reads_sectors reads_ms'
' writes writes_merged writes_sectors writes_ms'
' io_in_progress io_ms_weight').split()
while True:
raise NotImplementedError
print(values, sep="\t")
%load course/linux_diskstats.py
# Using check_output with split() doesn't always work
from os import makedirs
makedirs('/tmp/course/b l a n k s') # , exist_ok=True) this on py3
check_output('ls "/tmp/course/b l a n k s"'.split())
# You can use
from shlex import split
# and
cmd = split('dir -a "/tmp/course/b l a n k s"')
check_output(cmd)
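To see the difference without touching the filesystem, compare naive str.split with shlex.split on a quoted argument:

```python
from shlex import split

cmd_line = 'dir -a "/tmp/course/b l a n k s"'
print(cmd_line.split())   # naive split breaks the quoted path apart
print(split(cmd_line))    # shlex keeps it as a single argument
```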
Explanation: Parsing /proc
The Linux /proc filesystem is a cool place to get data.
In the next example we'll see how to get:
- thread information;
- disk statistics.
End of explanation
# zip_iterables():
"""The zip method joins list elements pairwise,
like a zip fastener.
"""
from sys import version_info as python_version
a_list = [0, 1, 2, 3]
b_list = ["a", "b", "c", "d"]
zipper = zip(a_list, b_list)
print(zipper)
if python_version >= (3,):
zipper = list(zipper)
assert zipper == [(0, "a"), (1, "b"), (2, "c"), (3, "d")]
Explanation: zip on py3 is a generator
End of explanation |
7,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 4
Imports
Step1: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$
Step3: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
Step5: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
Step6: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Numpy Exercise 4
Imports
End of explanation
import networkx as nx
K_5=nx.complete_graph(5)
nx.draw(K_5)
Explanation: Complete graph Laplacian
In discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.
A Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.
Here is $K_5$:
End of explanation
def complete_deg(n):
    """Return the integer valued degree matrix D for the complete graph K_n."""
x=np.empty((n,n),dtype=int) #makes an integer array nxn
x.fill(n-1) #fills array with n-1
deg_mat=np.diag(np.diag(x)) #takes the diag of a 2D array x which is all n-1 then creates a diag with zeros nxn
return deg_mat
D = complete_deg(5)
assert D.shape==(5,5)
assert D.dtype==np.dtype(int)
assert np.all(D.diagonal()==4*np.ones(5))
assert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))
Explanation: The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$, where $D$ is the degree matrix and $A$ is the adjacency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.
The degree matrix for $K_n$ is an $n \times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.
End of explanation
def complete_adj(n):
    """Return the integer valued adjacency matrix A for the complete graph K_n."""
a=np.ones((n,n),dtype=int) #makes an interger array of ones nxn
np.fill_diagonal(a,0) #googled "how to fill diagonal array using numpy" found the fill_diagonal function
return(a)
A = complete_adj(5)
assert A.shape==(5,5)
assert A.dtype==np.dtype(int)
assert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))
Explanation: The adjacency matrix for $K_n$ is an $n \times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.
End of explanation
def Lspectrum(n):
    """Plot the eigenvalue spectrum of the Laplacian L of K_n."""
    L = complete_deg(n) - complete_adj(n)
    plt.plot(np.sort(np.linalg.eigvalsh(L)), 'o')
Lspectrum(9)
Explanation: Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.
End of explanation |
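For reference, the pattern the exercise is hinting at can be checked numerically; the sketch below rebuilds $D$ and $A$ inline (so it does not depend on the functions above) and confirms that the Laplacian spectrum of $K_n$ is 0 once and $n$ with multiplicity $n-1$:

```python
import numpy as np

def laplacian_spectrum_Kn(n):
    """Eigenvalues (ascending) of L = D - A for the complete graph K_n."""
    D = (n - 1) * np.eye(n)             # degree matrix
    A = np.ones((n, n)) - np.eye(n)     # adjacency matrix
    return np.linalg.eigvalsh(D - A)    # eigvalsh returns sorted eigenvalues

for n in (3, 5, 9):
    spec = laplacian_spectrum_Kn(n)
    assert np.allclose(spec, [0] + [n] * (n - 1))
```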
7,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 17 - Navier Stokes equations
Keywords
Step1: 3. Affine Decomposition
Step2: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
Step3: 4.2. Create Finite Element Space (Taylor-Hood P2-P1)
Step4: 4.3. Allocate an object of the NavierStokes class
Step5: 4.4. Prepare reduction with a POD-Galerkin method
Step6: 4.5. Perform the offline phase
Step7: 4.6. Perform an online solve
Step8: 4.7. Perform an error analysis
Step9: 4.8. Perform a speedup analysis | Python Code:
from ufl import transpose
from dolfin import *
from rbnics import *
Explanation: Tutorial 17 - Navier Stokes equations
Keywords: DEIM, supremizer operator
1. Introduction
In this tutorial, we will study the Navier-Stokes equations over the two-dimensional backward-facing step domain $\Omega$ shown below:
<img src="data/backward_facing_step.png" width="80%"/>
A Poiseuille flow profile is imposed on the inlet boundary, and a no-flow (zero velocity) condition is imposed on the walls. A homogeneous Neumann condition of the Cauchy stress tensor is applied at the outflow boundary.
The inflow velocity boundary condition is characterized by $$\boldsymbol{u}(\boldsymbol{x};\mu)=\mu\bigg {\frac{1}{2.25}(x_1-2)(5-x_1),0\bigg } \quad \forall \boldsymbol{x}=(x_0,x_1) \in \Omega$$
This problem is characterized by one parameter $\mu$, which controls the inlet velocity. The range of $\mu$ is the following: $$\mu \in [1.0, 80.0].$$
Thus, the parameter domain is $$\mathbb{P}=[1.0,80.0].$$
In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method.
2. Parametrized formulation
Let $\boldsymbol{u}(\mu)$ be the velocity vector and $p(\mu)$ be the pressure in the domain $\Omega$.
We will directly provide a weak formulation for this problem: <center>for a given parameter $\mu \in \mathbb{P},$ find $u(\mu) \in \mathbb{V}(\mu), \; p \in\mathbb{M}$ such that </center>
<center>
$
\begin{cases}
\nu \int_{\Omega} \nabla \boldsymbol{u} : \nabla \boldsymbol{v} \ d\Omega + \int_{\Omega} [(\boldsymbol{u} \cdot \nabla) \boldsymbol{u}] \cdot \boldsymbol{v} \ d\Omega - \int_{\Omega} p \nabla \cdot \boldsymbol{v} \ d\Omega = \int_{\Omega} \boldsymbol{f} \cdot \boldsymbol{v} \ d\Omega, \quad \forall \boldsymbol{v} \in\mathbb{V}, \
\int_{\Omega} q \nabla \cdot \boldsymbol{u} \ d\Omega = 0, \quad \forall q \in\mathbb{M}
\end{cases}
$
</center>
where
$\nu$ represents kinematic viscosity
the functional space $\mathbb{V}(\mu)$ is defined as $\mathbb{V}=[H^1_{\Gamma_{wall}}(\Omega)]^2$
the functional space $\mathbb{M}(\mu)$ is defined as $\mathbb{M}=L^2(\Omega)$
Since this problem utilizes a mixed finite element discretization with velocity and pressure as solution variables, the inf-sup condition is necessary for well-posedness. Thus, the supremizer operator $T^{\mu}: \mathbb{M}_h \rightarrow \mathbb{V}_h$ will be used.
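For reference, the supremizer is defined in the standard reduced-basis way (the tutorial does not spell this definition out): for each $q \in \mathbb{M}_h$, the element $T^{\mu} q \in \mathbb{V}_h$ solves
$$\left(T^{\mu} q, \boldsymbol{v}\right)_{\mathbb{V}} = b(\boldsymbol{v}, q; \mu) \quad \forall \boldsymbol{v} \in \mathbb{V}_h,$$
where $b(\boldsymbol{v}, q; \mu) = -\int_{\Omega} q \, \nabla \cdot \boldsymbol{v} \, d\Omega$ is the pressure-velocity coupling form; enriching the reduced velocity space with these supremizers restores the inf-sup stability of the reduced problem.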
End of explanation
@DEIM("online", basis_generation="Greedy")
@ExactParametrizedFunctions("offline")
class NavierStokes(NavierStokesProblem):
# Default initialization of members
def __init__(self, V, **kwargs):
# Call the standard initialization
NavierStokesProblem.__init__(self, V, **kwargs)
# ... and also store FEniCS data structures for assembly
assert "subdomains" in kwargs
assert "boundaries" in kwargs
self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
dup = TrialFunction(V)
(self.du, self.dp) = split(dup)
(self.u, _) = split(self._solution)
vq = TestFunction(V)
(self.v, self.q) = split(vq)
self.dx = Measure("dx")(subdomain_data=self.subdomains)
self.ds = Measure("ds")(subdomain_data=self.boundaries)
#
self.inlet = Expression(("1. / 2.25 * (x[1] - 2) * (5 - x[1])", "0."), degree=2)
self.f = Constant((0.0, 0.0))
self.g = Constant(0.0)
# Customize nonlinear solver parameters
self._nonlinear_solver_parameters.update({
"linear_solver": "mumps",
"maximum_iterations": 20,
"report": True
})
# Return custom problem name
def name(self):
return "NavierStokesDEIM1"
# Return theta multiplicative terms of the affine expansion of the problem.
@compute_theta_for_derivatives
@compute_theta_for_supremizers
def compute_theta(self, term):
mu = self.mu
if term == "a":
theta_a0 = 1.
return (theta_a0,)
elif term in ("b", "bt"):
theta_b0 = 1.
return (theta_b0,)
elif term == "c":
theta_c0 = 1.
return (theta_c0,)
elif term == "f":
theta_f0 = 1.
return (theta_f0,)
elif term == "g":
theta_g0 = 1.
return (theta_g0,)
elif term == "dirichlet_bc_u":
theta_bc00 = mu[0]
return (theta_bc00,)
else:
raise ValueError("Invalid term for compute_theta().")
# Return forms resulting from the discretization of the affine expansion of the problem operators.
@assemble_operator_for_derivatives
@assemble_operator_for_supremizers
def assemble_operator(self, term):
dx = self.dx
if term == "a":
u = self.du
v = self.v
a0 = inner(grad(u) + transpose(grad(u)), grad(v)) * dx
return (a0,)
elif term == "b":
u = self.du
q = self.q
b0 = - q * div(u) * dx
return (b0,)
elif term == "bt":
p = self.dp
v = self.v
bt0 = - p * div(v) * dx
return (bt0,)
elif term == "c":
u = self.u
v = self.v
c0 = inner(grad(u) * u, v) * dx
return (c0,)
elif term == "f":
v = self.v
f0 = inner(self.f, v) * dx
return (f0,)
elif term == "g":
q = self.q
g0 = self.g * q * dx
return (g0,)
elif term == "dirichlet_bc_u":
bc0 = [DirichletBC(self.V.sub(0), self.inlet, self.boundaries, 1),
DirichletBC(self.V.sub(0), Constant((0.0, 0.0)), self.boundaries, 2)]
return (bc0,)
elif term == "inner_product_u":
u = self.du
v = self.v
x0 = inner(grad(u), grad(v)) * dx
return (x0,)
elif term == "inner_product_p":
p = self.dp
q = self.q
x0 = inner(p, q) * dx
return (x0,)
else:
raise ValueError("Invalid term for assemble_operator().")
# Customize the resulting reduced problem
@CustomizeReducedProblemFor(NavierStokesProblem)
def CustomizeReducedNavierStokes(ReducedNavierStokes_Base):
class ReducedNavierStokes(ReducedNavierStokes_Base):
def __init__(self, truth_problem, **kwargs):
ReducedNavierStokes_Base.__init__(self, truth_problem, **kwargs)
self._nonlinear_solver_parameters.update({
"report": True,
"line_search": "wolfe"
})
return ReducedNavierStokes
Explanation: 3. Affine Decomposition
End of explanation
mesh = Mesh("data/backward_facing_step.xml")
subdomains = MeshFunction("size_t", mesh, "data/backward_facing_step_physical_region.xml")
boundaries = MeshFunction("size_t", mesh, "data/backward_facing_step_facet_region.xml")
Explanation: 4. Main program
4.1. Read the mesh for this problem
The mesh was generated by the data/generate_mesh.ipynb notebook.
End of explanation
element_u = VectorElement("Lagrange", mesh.ufl_cell(), 2)
element_p = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
element = MixedElement(element_u, element_p)
V = FunctionSpace(mesh, element, components=[["u", "s"], "p"])
Explanation: 4.2. Create Finite Element Space (Taylor-Hood P2-P1)
End of explanation
problem = NavierStokes(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(1.0, 80.0)]
problem.set_mu_range(mu_range)
Explanation: 4.3. Allocate an object of the NavierStokes class
End of explanation
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(10, DEIM=20)
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
lifting_mu = (1.0,)
problem.set_mu(lifting_mu)
reduction_method.initialize_training_set(100, DEIM=144, sampling=EquispacedDistribution())
reduced_problem = reduction_method.offline()
Explanation: 4.5. Perform the offline phase
End of explanation
online_mu = (10.0,)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem, component="u")
plot(reduced_solution, reduced_problem=reduced_problem, component="p")
Explanation: 4.6. Perform an online solve
End of explanation
reduction_method.initialize_testing_set(16, DEIM=25, sampling=EquispacedDistribution())
reduction_method.error_analysis()
Explanation: 4.7. Perform an error analysis
End of explanation
reduction_method.speedup_analysis()
Explanation: 4.8. Perform a speedup analysis
End of explanation |
7,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sensitivity Analysis
Test here
Step1: Setting up the base model
For a first test
Step2: Define parameter uncertainties
We will start with a sensitivity analysis for the parameters of the fault events.
Step5: Calculate total stratigraphic distance
Step6: Function to modify parameters
Multiple event parameters can be changed directly with the function change_event_params, which takes a dictionary of events and parameters with corresponding changes relative to the defined parameters. Here is a brief example
Step8: Full sensitivity analysis
Perform now a full sensitivity analysis for all defined parameters and analyse the output matrix. For a better overview, we first create a function to perform the sensitivity analysis
Step9: As a next step, we define the parameter ranges for the local sensitivity analysis (i.e. the $\delta p_j$ from the theoretical description above)
Step10: And now, we perform the local sensitivity analysis
Step11: The function passes back a list of the changed parameters and the calculated distances according to this change. Let's have a look at the results
Step12: Results of this local sensitivity analysis suggest that the model is most sensitive to the X-position of the fault, when we evaluate distances as simple stratigraphic id differences. Here just a bar plot for better visualisation (feel free to add proper labels) | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
Explanation: Sensitivity Analysis
Test here: (local) sensitivity analysis of kinematic parameters with respect to a defined objective function. Aim: test how sensitive the resulting model is to uncertainties in the kinematic parameters, in order to:
Evaluate which parameters are the most important, and to
Determine which parameters could, in principle, be inverted with suitable information.
Theory: local sensitivity analysis
Basic considerations:
parameter vector $\vec{p}$
residual vector $\vec{r}$
calculated values at observation points $\vec{z}$
Jacobian matrix $J_{ij} = \frac{\partial \vec{z}}{\partial \vec{p}}$
Numerical estimation of Jacobian matrix with central difference scheme (see Finsterle):
$$J_{ij} = \frac{\partial z_i}{\partial p_j} \approx \frac{z_i(\vec{p}; p_j + \delta p_j) - z_i(\vec{p};p_j - \delta p_j)}{2 \delta p_j}$$
where $\delta p_j$ is a small perturbation of parameter $j$, often as a fraction of the value.
Defining the responses
A meaningful sensitivity analysis obviously depends on the definition of a suitable response vector $\vec{z}$. Ideally, these responses are related to actual observations. In our case, we first want to determine how sensitive a kinematic structural geological model is with respect to uncertainties in the kinematic parameters. We therefore need calculable measures that describe variations of the model.
As a first-order assumption, we will use a notion of stratigraphic distance for discrete subsections of the model, for example in single voxets of the calculated model. We define the distance $d$ of a subset $\omega$ as the (discrete) difference between the (discrete) stratigraphic value of an ideal model, $\hat{s}$, and the value of a model realisation $s_i$:
$$d(\omega) = \hat{s} - s_i$$
In the first example, we will consider only one response: the overall sum of stratigraphic distances for a model realisation $r$ of all subsets (= voxets, in the practical sense), scaled by the number of subsets (for a subsequent comparison of model discretisations):
$$D_r = \frac{1}{n} \sum_{i=1}^n d(\omega_i)$$
Note: a mistake before was not considering distances at single nodes but only the sum - this led to "zero-difference" for a simple translation! Now: consider a more realistic objective function, the squared distance:
$$r = \sqrt{\sum_i \left(z_i^{\mathrm{calc}} - z_i^{\mathrm{ref}}\right)^2}$$
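A toy illustration of that point, with made-up stratigraphic ids: a fault-like change that moves half the nodes up and half down cancels in the plain sum, but not in the squared-distance measure:

```python
import numpy as np

s_ref   = np.array([1, 2, 3, 1, 2, 3])   # reference stratigraphy ids
s_moved = np.array([2, 3, 4, 0, 1, 2])   # half shifted up, half shifted down

d = s_moved - s_ref                      # [1, 1, 1, -1, -1, -1]
assert d.sum() == 0                      # the naive signed sum sees no change

r = np.sqrt((d ** 2).sum()) / len(d)     # squared-distance objective
assert r > 0                             # ... which does register the change
```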
End of explanation
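As a standalone sketch in plain NumPy (independent of the pynoddy machinery below), the central-difference estimate of the Jacobian can be written as:

```python
import numpy as np

def central_difference_jacobian(f, p, delta):
    """Estimate J_ij = dz_i/dp_j with the central difference scheme.

    f     : callable mapping a parameter vector to a response vector z
    p     : parameter vector
    delta : perturbation delta_p_j (a scalar, applied to every parameter here)
    """
    p = np.asarray(p, dtype=float)
    z0 = np.asarray(f(p))
    J = np.empty((z0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = delta
        J[:, j] = (np.asarray(f(p + dp)) - np.asarray(f(p - dp))) / (2 * delta)
    return J

# f(p) = [p0**2, p0*p1] has the exact Jacobian [[2*p0, 0], [p1, p0]]
J = central_difference_jacobian(lambda p: np.array([p[0] ** 2, p[0] * p[1]]),
                                [2.0, 3.0], 1e-5)
```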
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths corretly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.events
import pynoddy.output
reload(pynoddy.history)
reload(pynoddy.events)
nm = pynoddy.history.NoddyHistory()
# add stratigraphy
strati_options = {'num_layers' : 8,
'layer_names' : ['layer 1', 'layer 2', 'layer 3', 'layer 4', 'layer 5', 'layer 6', 'layer 7', 'layer 8'],
'layer_thickness' : [1500, 500, 500, 500, 500, 500, 500, 500]}
nm.add_event('stratigraphy', strati_options )
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_W',
'pos' : (4000, 3500, 5000),
'dip_dir' : 90,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
# The following options define the fault geometry:
fault_options = {'name' : 'Fault_E',
'pos' : (6000, 3500, 5000),
'dip_dir' : 270,
'dip' : 60,
'slip' : 1000}
nm.add_event('fault', fault_options)
history = "two_faults_sensi.his"
nm.write_history(history)
output_name = "two_faults_sensi_out"
# Compute the model
pynoddy.compute_model(history, output_name)
# Plot output
nout = pynoddy.output.NoddyOutput(output_name)
nout.plot_section('y', layer_labels = strati_options['layer_names'][::-1],
colorbar = True, title="",
savefig = False)
Explanation: Setting up the base model
For a first test: use simple two-fault model from paper
End of explanation
H1 = pynoddy.history.NoddyHistory(history)
# get the original dip of the fault
dip_ori = H1.events[3].properties['Dip']
# dip_ori1 = H1.events[2].properties['Dip']
# add 10 degrees to dip
add_dip = -20
dip_new = dip_ori + add_dip
# dip_new1 = dip_ori1 + add_dip
# and assign back to properties dictionary:
H1.events[3].properties['Dip'] = dip_new
reload(pynoddy.output)
new_history = "sensi_test_dip_changed.his"
new_output = "sensi_test_dip_changed_out"
H1.write_history(new_history)
pynoddy.compute_model(new_history, new_output)
# load output from both models
NO1 = pynoddy.output.NoddyOutput(output_name)
NO2 = pynoddy.output.NoddyOutput(new_output)
# create basic figure layout
fig = plt.figure(figsize = (15,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
NO1.plot_section('y', position=0, ax = ax1, colorbar=False, title="Dip = %.0f" % dip_ori)
NO2.plot_section('y', position=0, ax = ax2, colorbar=False, title="Dip = %.0f" % dip_new)
plt.show()
Explanation: Define parameter uncertainties
We will start with a sensitivity analysis for the parameters of the fault events.
End of explanation
# def determine_strati_diff(NO1, NO2):
# calculate total stratigraphic distance between two models
# return np.sum(NO1.block - NO2.block) / float(len(NO1.block))
def determine_strati_diff(NO1, NO2):
    """Calculate total stratigraphic distance between two models."""
return np.sqrt(np.sum((NO1.block - NO2.block)**2)) / float(len(NO1.block))
diff = determine_strati_diff(NO1, NO2)
print(diff)
Explanation: Calculate total stratigraphic distance
End of explanation
# set parameter changes in dictionary
changes_fault_1 = {'Dip' : -20}
changes_fault_2 = {'Dip' : -20}
param_changes = {2 : changes_fault_1,
3 : changes_fault_2}
reload(pynoddy.history)
H2 = pynoddy.history.NoddyHistory(history)
H2.change_event_params(param_changes)
new_history = "param_dict_changes.his"
new_output = "param_dict_changes_out"
H2.write_history(new_history)
pynoddy.compute_model(new_history, new_output)
# load output from both models
NO1 = pynoddy.output.NoddyOutput(output_name)
NO2 = pynoddy.output.NoddyOutput(new_output)
# create basic figure layout
fig = plt.figure(figsize = (15,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
NO1.plot_section('y', position=0, ax = ax1, colorbar=False, title="Original Model")
NO2.plot_section('y', position=0, ax = ax2, colorbar=False, title="Changed Model")
plt.show()
Explanation: Function to modify parameters
Multiple event parameters can be changed directly with the function change_event_params, which takes a dictionary of events and parameters with corresponding changes relative to the defined parameters. Here is a brief example:
End of explanation
import copy
new_history = "sensi_tmp.his"
new_output = "sensi_out"
def noddy_sensitivity(history_filename, param_change_vals):
    """Perform noddy sensitivity analysis for a model."""
param_list = [] # list to store parameters for later analysis
distances = [] # list to store calcualted distances
# Step 1:
# create new parameter list to change model
for event_id, event_dict in param_change_vals.items(): # iterate over events
for key, val in event_dict.items(): # iterate over all properties separately
changes_list = dict()
changes_list[event_id] = dict()
param_list.append("event_%d_property_%s" % (event_id, key))
for i in range(2):
# calculate positive and negative values
his = pynoddy.history.NoddyHistory(history_filename)
if i == 0:
changes_list[event_id][key] = val
# set changes
his.change_event_params(changes_list)
# save and calculate model
his.write_history(new_history)
pynoddy.compute_model(new_history, new_output)
# open output and calculate distance
NO_tmp = pynoddy.output.NoddyOutput(new_output)
dist_pos = determine_strati_diff(NO1, NO_tmp)
NO_tmp.plot_section('y', position = 0, colorbar = False,
title = "Dist: %.2f" % dist_pos,
savefig = True,
fig_filename = "event_%d_property_%s_val_%d.png" \
% (event_id, key,val))
if i == 1:
changes_list[event_id][key] = -val
his.change_event_params(changes_list)
# save and calculate model
his.write_history(new_history)
pynoddy.compute_model(new_history, new_output)
# open output and calculate distance
NO_tmp = pynoddy.output.NoddyOutput(new_output)
dist_neg = determine_strati_diff(NO1, NO_tmp)
NO_tmp.plot_section('y', position=0, colorbar=False,
title="Dist: %.2f" % dist_neg,
savefig=True,
fig_filename="event_%d_property_%s_val_%d.png" \
% (event_id, key,val))
# calculate central difference
central_diff = (dist_pos + dist_neg) / (2.)
distances.append(central_diff)
return param_list, distances
Explanation: Full sensitivity analysis
Perform now a full sensitivity analysis for all defined parameters and analyse the output matrix. For a better overview, we first create a function to perform the sensitivity analysis:
End of explanation
changes_fault_1 = {'Dip' : 1.5,
'Dip Direction' : 10,
'Slip': 100.0,
'X': 500.0}
changes_fault_2 = {'Dip' : 1.5,
'Dip Direction' : 10,
'Slip': 100.0,
'X': 500.0}
param_changes = {2 : changes_fault_1,
3 : changes_fault_2}
Explanation: As a next step, we define the parameter ranges for the local sensitivity analysis (i.e. the $\delta p_j$ from the theoretical description above):
End of explanation
param_list_1, distances = noddy_sensitivity(history, param_changes)
Explanation: And now, we perform the local sensitivity analysis:
End of explanation
for p,d in zip(param_list_1, distances):
    print("%s \t\t %f" % (p, d))
Explanation: The function passes back a list of the changed parameters and the calculated distances according to this change. Let's have a look at the results:
End of explanation
d = np.array([distances])
fig = plt.figure(figsize=(5,3))
ax = fig.add_subplot(111)
ax.bar(np.arange(0.6,len(distances),1.), np.array(distances[:]))
Explanation: Results of this local sensitivity analysis suggest that the model is most sensitive to the X-position of the fault, when we evaluate distances as simple stratigraphic id differences. Here just a bar plot for better visualisation (feel free to add proper labels):
End of explanation |
7,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create a list from the dictionary keys
Step2: Create a list from the dictionary values | Python Code:
dict = {'county': ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'],
'year': [2012, 2012, 2013, 2014, 2014],
'fireReports': [4, 24, 31, 2, 3]}
Explanation: Title: Creating Lists From Dictionary Keys And Values
Slug: create_list_from_dictionary_keys_and_values
Summary: Creating Lists From Dictionary Keys And Values
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Create a dictionary
End of explanation
# Create a list of keys
list(dict.keys())
Explanation: Create a list from the dictionary keys
End of explanation
# Create a list of values
list(dict.values())
Explanation: Create a list from the dictionary values
End of explanation |
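For completeness (an addition beyond the original recipe), the same pattern with items() yields the key-value pairs directly:

```python
d = {'county': ['Cochice', 'Pima'], 'year': [2012, 2012]}

# Create a list of (key, value) tuples
pairs = list(d.items())
print(pairs)
```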
7,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix Multiplication
Various ways of implementing different matrix multiplications.
Read the documentation embedded in the code.
Step1: In this example, the assertions essentially tell the story of what's going on
Note that in all these examples, the pyspecdata version appears more
complicated.
But, that's because these are toy examples, where we have no need for the
dimension names or axes.
Nonetheless, we wanted to give the simplest working example possible.
First, we demonstrate matrix multiplication
for all the below, I attach an axis to make sure the routines work with the
axes attached
Step2: in the next line, note how only the dimension that goes away is named the
same!
if you think about the matrix a transforming from one vector space (labeled
y) to another (labeled x) this makes sense
Step3: the previous is unambiguous b/c only 'y' is shared between the two,
but I can do the following for clarity
Step4: calculate a projection matrix
Step5: note that here, I have to rename the column space
Step6: now, a standard dot product note how I don't need along here, since it's
unambiguous
Step7: Finally, let's show what happens when we multiply a matrix by itself and
don't rename one of the dimensions
By doing this, we indicate that we're not interested in transforming from one
vector space to another (as a projection matrix does), but rather just have
two sets of vectors and are interested in finding the dot products between
the two sets
This will take the dot product of our 10 2048-long vectors, and present them
as a 10-long array | Python Code:
# -*- coding: utf-8 -*-
from pylab import *
from pyspecdata import *
from numpy.random import random
import time
init_logging('debug')
Explanation: Matrix Multiplication
Various ways of implementing different matrix multiplications.
Read the documentation embedded in the code.
End of explanation
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
Explanation: In this example, the assertions essentially tell the story of what's going on
Note that in all these examples, the pyspecdata version appears more
complicated.
But, that's because these are toy examples, where we have no need for the
dimension names or axes.
Nonetheless, we wanted to give the simplest working example possible.
First, we demonstrate matrix multiplication
for all the below, I attach an axis to make sure the routines work with the
axes attached
End of explanation
a2_nd = nddata(random(10*2048),[2048,10],['y','z']).setaxis('y','#').setaxis('z','#')
a2 = a2_nd.data
# multiply two different matrices
time1 = time.time()
b = a @ a2
time2 = time.time()
b_nd = a_nd @ a2_nd
Explanation: in the next line, note how only the dimension that goes away is named the
same!
if you think about the matrix a transforming from one vector space (labeled
y) to another (labeled x) this makes sense
End of explanation
time3 = time.time()
assert b_nd.dimlabels == ['x','z'], b_nd.dimlabels
assert all(isclose(b,b_nd.data))
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1
Explanation: the previous is unambiguous b/c only 'y' is shared between the two,
but I can do the following for clarity:
b_nd = a_nd.along('y') @ a2_nd
Note that "along" gives the dimension along which the sum is performed -- and
so this dimension goes away upon matrix multiplication.
If only one dimension is shared between the matrices, then we know to take
the sum along the shared dimension.
For example, here a2_nd transforms from a space called "z" into a space called "y",
while a_nd transforms from "y" into "x" -- so it's obvious that a_nd @ a2_nd should
transform from "z" into "x".
End of explanation
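The dimension bookkeeping described above can be sketched in plain Python, independent of pyspecdata (the labeled_matmul helper below is purely illustrative): the one dimension name shared by the two operands is summed over and disappears from the result's labels.

```python
# Hypothetical helper for illustration only -- not part of pyspecdata.
# Multiplying an (x, y)-labeled matrix by a (y, z)-labeled matrix sums
# over the shared "y" label, so "y" disappears from the result's labels.
def labeled_matmul(a, a_dims, b, b_dims):
    shared = set(a_dims) & set(b_dims)
    assert len(shared) == 1, "need exactly one shared dimension"
    rows, inner, cols = len(a), len(b), len(b[0])
    result = [[sum(a[i][k] * b[k][j] for k in range(inner))
               for j in range(cols)]
              for i in range(rows)]
    result_dims = [d for d in a_dims if d not in shared] + \
                  [d for d in b_dims if d not in shared]
    return result, result_dims

m1 = [[1, 2], [3, 4]]            # labeled ['x', 'y']
m2 = [[5, 6, 7], [8, 9, 10]]     # labeled ['y', 'z']
prod, dims = labeled_matmul(m1, ['x', 'y'], m2, ['y', 'z'])
```

Here dims comes out as ['x', 'z'], mirroring the dimlabels assertion on b_nd above.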
time1 = time.time()
b = a @ a.T
time2 = time.time()
Explanation: calculate a projection matrix
End of explanation
b_nd = a_nd.along('y',('x','x_new')) @ a_nd
time3 = time.time()
assert b_nd.dimlabels == ['x_new','x'], b_nd.dimlabels
assert all(b_nd.getaxis('x_new') == b_nd.getaxis('x'))
assert (id(b_nd.getaxis('x_new')) != id(b_nd.getaxis('x')))
assert all(isclose(b,b_nd.data))
if time2-time1>0:
print("total time",(time3-time2),"time/(time for raw)",((time3-time2)/(time2-time1)))
assert ((time3-time2)/(time2-time1))<1.1
Explanation: note that here, I have to rename the column space
End of explanation
a_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
b_nd = nddata(random(10),[10],['myaxis']).setaxis('myaxis','#')
a = a_nd.data
b = b_nd.data
assert all(isclose(a.dot(b),(a_nd @ b_nd).data))
Explanation: now, a standard dot product; note how I don't need along here, since it's
unambiguous
End of explanation
a_nd = nddata(random(10*2048),[10,2048],['x','y']).setaxis('x','#').setaxis('y','#')
a = a_nd.data
b_nd = a_nd.along('y') @ a_nd
b = matmul(a_nd.data.reshape(10,1,2048),
a_nd.data.reshape(10,2048,1)).reshape(-1)
assert all(isclose(b,b_nd.data))
assert len(b.data) == 10
Explanation: Finally, let's show what happens when we multiply a matrix by itself and
don't rename one of the dimensions
By doing this, we indicate that we're not interested in transforming from one
vector space to another (as a projection matrix does), but rather just have
two sets of vectors and are interested in finding the dot products between
the two sets
This will take the dot product of our 10 2048-long vectors, and present them as a
10-long array
End of explanation |
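The "two sets of vectors" reading can likewise be sketched without any libraries (an illustrative plain-Python fragment, not pyspecdata code): pair the rows of two equally shaped matrices and take one dot product per pair, giving a 1-D result with one entry per vector rather than a projection matrix.

```python
# Plain-Python illustration: one dot product per row pair.
def rowwise_dots(a, b):
    return [sum(x * y for x, y in zip(ra, rb)) for ra, rb in zip(a, b)]

vecs = [[float(i + j) for j in range(4)] for i in range(10)]  # 10 vectors
dots = rowwise_dots(vecs, vecs)  # each vector dotted with itself
```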
7,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
view_sentence_range[1]
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
#print(text)
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 0)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab, 0)}
print('int_to_vocab size:', len(int_to_vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
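The same mapping can be exercised on a toy corpus (illustrative values only): ids are assigned by descending word frequency, and the two dicts invert each other.

```python
# Toy illustration of the lookup-table idea.
from collections import Counter

toy_words = "moe bart moe lisa moe bart".split()
toy_counts = Counter(toy_words)
toy_vocab = sorted(toy_counts, key=toy_counts.get, reverse=True)
toy_vocab_to_int = {word: ii for ii, word in enumerate(toy_vocab)}
toy_int_to_vocab = {ii: word for ii, word in enumerate(toy_vocab)}
```

The most frequent word ("moe") gets id 0, and looking an id back up returns the original word.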
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
punctuation_to_token = {}
punctuation_to_token['.'] = '||period||'
punctuation_to_token[','] = '||comma||'
punctuation_to_token['"'] = '||quotation||'
punctuation_to_token[';'] = '||semicolon||'
punctuation_to_token['!'] = '||exclamation||'
punctuation_to_token['?'] = '||question||'
punctuation_to_token['('] = '||l-parentheses||'
punctuation_to_token[')'] = '||r-parentheses||'
punctuation_to_token['--'] = '||dash||'
punctuation_to_token['\n'] = '||return||'
return punctuation_to_token
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
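Downstream, the dict is applied by padding each replacement with spaces so that the symbol splits out as its own word; a minimal sketch using a toy subset of the dictionary:

```python
# Sketch of how the lookup is applied: pad each replacement with spaces
# so that str.split() yields the token as a separate "word".
toy_tokens = {'.': '||period||', '!': '||exclamation||'}

def tokenize_text(text, token_dict):
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

toy_split = tokenize_text("bye! see you.", toy_tokens)
```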
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
print(len(int_to_vocab))
print(int_to_vocab[6778])
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)
#drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.5)
#lstm_layers = 1
#cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)  # LSTM state is float-valued, not int
initial_state = tf.identity(initial_state, name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
#embedding = tf.Variable(tf.random_uniform((vocab_size+1, embed_dim), -1, 1))
embedding = tf.Variable(tf.truncated_normal((vocab_size+1, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
print("vocab_size:", vocab_size)
print("embed.shape:", embed.shape)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
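Conceptually, the lookup is nothing more than row indexing into the embedding matrix; a plain-Python sketch of what tf.nn.embedding_lookup computes (toy matrix and ids are illustrative):

```python
# An embedding lookup selects one row of the matrix per integer word id.
toy_embedding = [
    [0.1, 0.2],   # word id 0
    [0.3, 0.4],   # word id 1
    [0.5, 0.6],   # word id 2
]

def lookup_rows(matrix, ids):
    return [matrix[i] for i in ids]

toy_embedded = lookup_rows(toy_embedding, [2, 0, 2])
```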
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
print("inputs.shape:", inputs.shape)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) #need to specify dtype instead of initial_state
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
#embed_dim = 300
#embed = get_embed(input_data, vocab_size, embed_dim)
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
#logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=tf.nn.relu)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None,
weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
biases_initializer=tf.zeros_initializer())
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
tmp = []
tmp = [[data[0:2]], data[2:4]]
print(tmp)
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
#print(int_text)
#print(batch_size, seq_length)
batches = []
num_of_batches = len(int_text) // (batch_size*seq_length)
print("num_of_batches:", num_of_batches)
for i in range(0, num_of_batches):
batch_of_input = []
batch_of_output = []
for j in range(0, batch_size):
top = i*seq_length + j*seq_length*num_of_batches
batch_of_input.append(int_text[top : top+seq_length])
batch_of_output.append(int_text[top+1 :top+1+seq_length])
batch = [batch_of_input, batch_of_output]
#print('batch', i, 'input:')
#print(batch_of_input)
#print('batch', i, 'output:')
#print(batch_of_output)
batches.append(batch)
return np.array(batches)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
#get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
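The same indexing scheme can be replayed with plain lists (a sketch; the real function above returns np.array(batches) instead), and on the documented input it reproduces the nested structure shown above:

```python
# Plain-list version of the batching logic, for illustration only.
def get_batches_plain(int_text, batch_size, seq_length):
    n_batches = len(int_text) // (batch_size * seq_length)
    batches = []
    for i in range(n_batches):
        inputs, targets = [], []
        for j in range(batch_size):
            top = i * seq_length + j * seq_length * n_batches
            inputs.append(int_text[top:top + seq_length])
            targets.append(int_text[top + 1:top + 1 + seq_length])
        batches.append([inputs, targets])
    return batches

toy_batches = get_batches_plain(list(range(1, 16)), 2, 3)
```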
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.002
# Show stats for every n number of batches
show_every_n_batches = 53
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name('input:0')
Initial_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, Initial_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilities of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
#print(probabilities)
#print(int_to_vocab)
index = np.argmax(probabilities)
word = int_to_vocab[index]
#word = int_to_vocab.get(probabilities.argmax(axis=0))
return word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
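A plain-Python sketch of the greedy choice used above (take the argmax of the probabilities; sampling from the distribution instead is a common variation that yields more varied scripts):

```python
# Greedy pick: return the word with the largest probability.
def pick_word_greedy(probabilities, int_to_vocab):
    best_id = max(range(len(probabilities)), key=probabilities.__getitem__)
    return int_to_vocab[best_id]

toy_probs = [0.1, 0.7, 0.2]
toy_vocab = {0: 'homer', 1: 'moe', 2: 'bart'}
toy_word = pick_word_greedy(toy_probs, toy_vocab)
```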
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
7,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of bubble trajectories
Start by loading some boilerplate
Step1: And some more specialized dependencies
Step2: Helper routines
Step3: Configuration for this figure. config.ref points to a simulation with periodic boundaries.
Step4: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step5: We want to plot the overall spike depth, which is the H_exp field in the chest, and the depth for individual spikes, H_exp_cell.
H_exp = max(H_exp_cell)
Step6: Use a spline to compute the derivative of 'H' vs time
Step7: Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.
The horizontal dotted line is the theoretical prediction of Goncharov. The vertical solid black line is the farthest that Wilkinson and Jacobs were able to get.
The solid black trajectory is the overall wall-bounded result (H_exp), while the dotted black trajectory is the periodic reference case.
Step8: Let's zoom into the stagnation stage. | Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.interpolate import UnivariateSpline
import json
import pandas as pd
from functools import partial
class Foo: pass
Explanation: Comparison of bubble trajectories
Start by loading some boilerplate: matplotlib, numpy, scipy, json, functools, and a convenience class.
End of explanation
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
Explanation: And some more specialized dependencies:
1. Slict provides a convenient slice-able dictionary interface
2. Chest is an out-of-core dictionary that we'll hook directly to a globus remote using...
3. glopen is an open-like context manager for remote globus files
End of explanation
def load_from_archive(names, arch):
cs = []
for name in names:
cs.append(Chest(path = "{:s}-results".format(name),
open = partial(glopen, endpoint=arch),
open_many = partial(glopen_many, endpoint=arch)))
scs = [CachedSlict(c) for c in cs]
ps = []
for name in names:
with glopen(
"{:s}.json".format(name), mode='r',
endpoint = arch,
) as f:
ps.append(json.load(f))
if len(names) == 1:
return cs[0], scs[0], ps[0]
return cs, scs, ps
Explanation: Helper routines
End of explanation
config = Foo()
config.names = [
"Wilk/Wilk_long/Wilk_long",
# "Wilk/Wilk_per/Wilk_per",
]
config.ref = ["Wilk/Wilk_per/Wilk_per",]
#config.arch_end = "maxhutch#alpha-admin/~/pub/"
#config.arch_end = "alcf#dtn_mira/projects/alpha-nek/experiments/"
config.arch_end = "alcf#dtn_mira/projects/PetaCESAR/maxhutch/"
Explanation: Configuration for this figure. config.ref points to a simulation with periodic boundaries.
End of explanation
c, sc, p = load_from_archive(config.names, config.arch_end);
rc, rsc, rp = load_from_archive(config.ref, config.arch_end);
Explanation: Open a chest located on a remote globus endpoint and load a remote json configuration file.
End of explanation
c.prefetch(sc[:,'H_exp'].full_keys())
c.prefetch(sc[:,'H_exp_cell'].full_keys())
rc.prefetch(rsc[:,'H_exp'].full_keys())
rc.prefetch(rsc[:,'H_exp_cell'].full_keys())
Explanation: We want to plot the overall spike depth, which is the H_exp field in the chest, and the depth for individual spikes, H_exp_cell.
H_exp = max(H_exp_cell)
End of explanation
spls = []
Hs = []
Ts = []
# reference
rT = np.array(rsc[:,'H_exp'].keys())
rH = np.array([x for x in rsc[:,'H_exp'].values()])/4
rspl = UnivariateSpline(rT, rH, k = 5, s = 1.e-10)
rFr = rspl.derivative()
# overall
T = np.array(sc[:,'H_exp'].keys())
H = np.array([x for x in sc[:,'H_exp'].values()])
spls.append(UnivariateSpline(T, H, k = 5, s = 1.e-10))
Hs.append(H)
Ts.append(T)
# bubble-wise
for i in range(10):
T = np.array(sc[:,'H_exp'].keys())
H = np.array([x[i] for x in sc[:,'H_exp_cell'].values()])
spls.append(UnivariateSpline(T,
H,
k = 5,
s = 1.e-10))
Hs.append(H)
Ts.append(T)
Frs = [spl.derivative() for spl in spls]
T = np.linspace(sc[:,'H_exp'].keys()[0], sc[:,'H_exp'].keys()[-1], 1000)
Explanation: Use a spline to compute the derivative of 'H' vs time: the Froude number.
First, we process the reference periodic simulation. Then, the overall height
End of explanation
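As a sanity check on the idea (illustrative numbers, not the simulation data): the spline derivative should agree with a plain central-difference estimate, which for h(t) = t**2 recovers dh/dt = 2t exactly at the interior points.

```python
# Central differences as a no-scipy cross-check of derivative estimation.
def central_diff(ts, hs):
    return [(hs[i + 1] - hs[i - 1]) / (ts[i + 1] - ts[i - 1])
            for i in range(1, len(ts) - 1)]

toy_t = [i * 0.1 for i in range(11)]      # 0.0 ... 1.0
toy_h = [t * t for t in toy_t]            # h(t) = t^2
toy_dh = central_diff(toy_t, toy_h)       # ~ 2*t at interior points
```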
fig, axs = plt.subplots(1,1)
for spl, Fr in zip(spls[1:], Frs[1:]):
axs.plot(
spl(T)* p["kmin"] ,
Fr(T)/ np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
label="{:3.1f} modes".format(p["kmin"]));
axs.plot(
spls[0](T)* p["kmin"],
Frs[0](T) / np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
'k-', label="Overall {:3.1f} modes".format(p["kmin"]) );
axs.plot(
rspl(T)* rp["kmin"],
rFr(T) / np.sqrt(rp["atwood"]*rp["g"] / rp["kmin"]),
'k--', label="Reference {:3.1f} modes".format(rp["kmin"]) );
axs.plot([0,3], [np.sqrt(1/np.pi), np.sqrt(1/np.pi)], 'k--')
axs.axvline(x=1.4, color='k');
axs.set_ylabel(r'Fr')
axs.set_xlabel(r'$h/\lambda$');
axs.legend(loc=0);
#axs.set_xbound(0,3)
#axs.set_ybound(0,2)
plt.savefig('Figure17.eps')
Explanation: Plot the Froude number, non-dimensionalized by the theoretical dependence on Atwood, acceleration, and wave number, vs the spike depth, normalized by wave-length.
The horizontal dotted line is the theoretical prediction of Goncharov. The vertical solid black line is the farthest that Wilkinson and Jacobs were able to get.
The solid black trajectory is the overall wall-bounded result (H_exp), while the dotted black trajectory is the periodic reference case.
End of explanation
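The normalizations in the plotting cell, written out with hypothetical numbers (in the notebook, atwood, g, and kmin come from the loaded json parameters p): the Froude number is divided by sqrt(A*g/k), and the dotted line sits at Goncharov's limit sqrt(1/pi).

```python
# Illustrative values only; the notebook uses p["atwood"], p["g"], p["kmin"].
import math

atwood, g, kmin = 0.5, 9.81, 2.0
fr_dimensional = 1.1
fr_nd = fr_dimensional / math.sqrt(atwood * g / kmin)
goncharov_limit = math.sqrt(1 / math.pi)  # the horizontal dotted line
```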
fig, axs = plt.subplots(1,1)
for spl, Fr in zip(spls[1:], Frs[1:]):
axs.plot(
spl(T)* p["kmin"] ,
Fr(T)/ np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
label="{:3.1f} modes".format(p["kmin"]));
axs.plot(
spls[0](T)* p["kmin"],
Frs[0](T) / np.sqrt(p["atwood"]*p["g"] / p["kmin"]),
'k-', label="Overall {:3.1f} modes".format(p["kmin"]) );
axs.plot(
rspl(T)* rp["kmin"],
rFr(T) / np.sqrt(rp["atwood"]*rp["g"] / rp["kmin"]),
'k--', label="Reference {:3.1f} modes".format(rp["kmin"]) );
axs.plot([0,3], [np.sqrt(1/np.pi), np.sqrt(1/np.pi)], 'k--')
axs.axvline(x=1.4, color='k');
axs.set_ylabel(r'Fr')
axs.set_xlabel(r'$h/\lambda$');
axs.legend(loc=0);
axs.set_xbound(.5,1)
axs.set_ybound(.5,.6)
plt.savefig('Figure17.eps')
fig, axs = plt.subplots(1,1)
for H, T in zip(Hs[1:], Ts[1:]):
axs.plot(
T,
H, 'x',
label="{:3.1f} modes".format(p["kmin"]));
axs.plot(
Ts[0],
Hs[0], 'k-',
label="Overall {:3.1f} modes".format(p["kmin"]));
axs.plot(
rT,
rH, 'k--',
label="Reference {:3.1f} modes".format(p["kmin"]));
axs.set_ylabel(r'$h/\lambda$')
axs.set_xlabel(r'T (s)');
axs.legend(loc=0);
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, matplotlib, slict, chest, glopen, globussh
Explanation: Let's zoom into the stagnation stage.
End of explanation |
7,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: TFX Keras Component Tutorial
A Component-by-Component Introduction to TensorFlow Extended (TFX)
Note
Step2: Install TFX
Note
Step3: Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
Import packages
We import necessary packages, including standard TFX component classes.
Step4: Let's check the library versions.
Step5: Set up pipeline paths
Step6: Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are
Step7: Take a quick look at the CSV file.
Step8: Disclaimer
Step9: Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.
ExampleGen
The ExampleGen component is usually at the start of a TFX pipeline. It will
Step10: Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples
Step11: We can also take a look at the first three training examples
Step12: Now that ExampleGen has finished ingesting the data, the next step is data analysis.
StatisticsGen
The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library.
StatisticsGen takes as input the dataset we just ingested using ExampleGen.
Step13: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
Step14: SchemaGen
The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.
Note
Step15: After SchemaGen finishes running, we can visualize the generated schema as a table.
Step16: Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
To learn more about schemas, see the SchemaGen documentation.
ExampleValidator
The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.
ExampleValidator will take as input the statistics from StatisticsGen, and the schema from SchemaGen.
Step17: After ExampleValidator finishes running, we can visualize the anomalies as a table.
Step19: In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
Transform
The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.
Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering
Step23: Next, we write a preprocessing_fn that takes in raw data as input, and returns transformed features that our model can train on
Step24: Now, we pass in this feature engineering code to the Transform component and run it to transform your data.
Step25: Let's examine the output artifacts of Transform. This component produces two types of outputs
Step26: Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
Step27: The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
We can also take a look at the first three transformed examples
Step36: After the Transform component has transformed your data into features, the next step is to train a model.
Trainer
The Trainer component will train a model that you define in TensorFlow. The default Trainer supports the Estimator API; to use the Keras API, you need to specify the Generic Trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in the Trainer's constructor.
Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, as well as a module that contains user-defined model code.
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial)
Step37: Now, we pass in this model code to the Trainer component and run it to train the model.
Step38: Analyze Training with TensorBoard
Take a peek at the trainer artifact. It points to a directory containing the model subdirectories.
Step39: Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
Step40: Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. The Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the Evaluator will automatically label the model as "good".
Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and a slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below
Step41: Next, we give this configuration to Evaluator and run it.
Step42: Now let's examine the output artifacts of Evaluator.
Step43: Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.
Step44: To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
Step45: This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set.
TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial.
Since we added thresholds to our config, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
Step46: Now we can also verify the success by loading the validation result record
Step47: Pusher
The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.
Step48: Let's examine the output artifacts of Pusher.
Step49: In particular, the Pusher will export your model in the SavedModel format, which looks like this | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import sys
if 'google.colab' in sys.modules:
!pip install --upgrade pip
Explanation: TFX Keras Component Tutorial
A Component-by-Component Introduction to TensorFlow Extended (TFX)
Note: We recommend running this tutorial in a Colab notebook, with no setup required! Just click "Run in Google Colab".
<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/tfx/components_keras">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/components_keras.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/components_keras.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a target="_blank" href="https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/components_keras.ipynb">
<img width=32px src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table></div>
This Colab-based tutorial will interactively walk through each built-in component of TensorFlow Extended (TFX).
It covers every step in an end-to-end machine learning pipeline, from data ingestion to pushing a model to serving.
When you're done, the contents of this notebook can be automatically exported as TFX pipeline source code, which you can orchestrate with Apache Airflow and Apache Beam.
Note: This notebook demonstrates the use of native Keras models in TFX pipelines. TFX only supports the TensorFlow 2 version of Keras.
Background
This notebook demonstrates how to use TFX in a Jupyter/Colab environment. Here, we walk through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.
Orchestration
In a production deployment of TFX, you will use an orchestrator such as Apache Airflow, Kubeflow Pipelines, or Apache Beam to orchestrate a pre-defined pipeline graph of TFX components. In an interactive notebook, the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
Metadata
In a production deployment of TFX, you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL or SQLite, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in an ephemeral SQLite database in the /tmp directory on the Jupyter notebook or Colab server.
Setup
First, we install and import the necessary packages, set up paths, and download data.
Upgrade Pip
To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.
End of explanation
!pip install -U tfx
Explanation: Install TFX
Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
End of explanation
import os
import pprint
import tempfile
import urllib
import absl
import tensorflow as tf
import tensorflow_model_analysis as tfma
tf.get_logger().propagate = False
pp = pprint.PrettyPrinter()
from tfx import v1 as tfx
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
Explanation: Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
Import packages
We import necessary packages, including standard TFX component classes.
End of explanation
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
Explanation: Let's check the library versions.
End of explanation
# This is the root directory for your TFX pip package installation.
_tfx_root = tfx.__path__[0]
# This is the directory containing the TFX Chicago Taxi Pipeline example.
_taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline')
# This is the path where your model will be pushed for serving.
_serving_model_dir = os.path.join(
tempfile.mkdtemp(), 'serving_model/taxi_simple')
# Set up logging.
absl.logging.set_verbosity(absl.logging.INFO)
Explanation: Set up pipeline paths
End of explanation
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Download example data
We download the example dataset for use in our TFX pipeline.
The dataset we're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
With this dataset, we will build a model that predicts the tips of a trip.
End of explanation
!head {_data_filepath}
Explanation: Take a quick look at the CSV file.
End of explanation
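Beyond `head`, the standard-library csv module gives a programmatic peek at the columns. A hedged sketch using an in-memory stand-in for the downloaded file (in the notebook you would open `_data_filepath` instead, and the columns shown here are only a subset):

```python
import csv
import io

# Stand-in for the downloaded CSV; a real run would open _data_filepath.
sample = io.StringIO(
    "trip_start_hour,fare,tips\n"
    "8,12.5,3.0\n"
    "20,7.25,0.0\n"
)

reader = csv.DictReader(sample)
rows = list(reader)
print(reader.fieldnames)   # column names from the header row
print(rows[0]["fare"])     # values arrive as strings: '12.5'
```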
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext. Calls to InteractiveContext are no-ops outside of the
# notebook.
context = InteractiveContext()
Explanation: Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.
Create the InteractiveContext
Last, we create an InteractiveContext, which will allow us to run TFX components interactively in this notebook.
End of explanation
example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
context.run(example_gen, enable_cache=True)
Explanation: Run TFX components interactively
In the cells that follow, we create TFX components one-by-one, run each of them, and visualize their output artifacts.
ExampleGen
The ExampleGen component is usually at the start of a TFX pipeline. It will:
Split data into training and evaluation sets (by default, 2/3 training + 1/3 eval)
Convert data into the tf.Example format (learn more here)
Copy data into the _tfx_root directory for other components to access
ExampleGen takes as input the path to your data source. In our case, this is the _data_root path that contains the downloaded CSV.
Note: In this notebook, we can instantiate components one-by-one and run them with InteractiveContext.run(). By contrast, in a production setting, we would specify all the components upfront in a Pipeline to pass to the orchestrator (see the Building a TFX Pipeline Guide).
Enabling the Cache
When using the InteractiveContext in a notebook to develop a pipeline you can control when individual components will cache their outputs. Set enable_cache to True when you want to reuse the previous output artifacts that the component generated. Set enable_cache to False when you want to recompute the output artifacts for a component, if you are making changes to the code for example.
End of explanation
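The 2/3 train + 1/3 eval partition mentioned above is deterministic. ExampleGen's real partitioning is configurable and implemented inside TFX; the idea of a stable hash-based split can be sketched with the standard library (this is not ExampleGen's actual algorithm):

```python
import hashlib

def assign_split(record: str, train_buckets: int = 2, total_buckets: int = 3) -> str:
    """Deterministically route a record to 'train' or 'eval' by hashing it."""
    digest = hashlib.sha256(record.encode("utf-8")).digest()
    bucket = digest[0] % total_buckets
    return "train" if bucket < train_buckets else "eval"

records = [f"row-{i}" for i in range(3000)]
splits = [assign_split(r) for r in records]
print(splits.count("train") / len(splits))  # close to 2/3
```

Because the assignment depends only on the record's content, re-running the split reproduces the same partition.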
artifact = example_gen.outputs['examples'].get()[0]
print(artifact.split_names, artifact.uri)
Explanation: Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples:
End of explanation
# Get the URI of the output artifact representing the training examples, which is a directory
train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
Explanation: We can also take a look at the first three training examples:
End of explanation
statistics_gen = tfx.components.StatisticsGen(
examples=example_gen.outputs['examples'])
context.run(statistics_gen, enable_cache=True)
Explanation: Now that ExampleGen has finished ingesting the data, the next step is data analysis.
StatisticsGen
The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library.
StatisticsGen takes as input the dataset we just ingested using ExampleGen.
End of explanation
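Conceptually, StatisticsGen computes per-feature summaries (counts, missing values, min/mean/max, and much more) over the dataset. A hedged stdlib sketch of that idea for a single numeric column (TFDV's real statistics are far richer):

```python
from statistics import mean

def column_stats(values):
    """Summarize one numeric column, tolerating missing entries (None)."""
    present = [v for v in values if v is not None]
    return {
        "count": len(values),
        "missing": len(values) - len(present),
        "min": min(present),
        "max": max(present),
        "mean": mean(present),
    }

fares = [12.5, 7.25, None, 30.0]  # illustrative values, not from the dataset
print(column_stats(fares))
```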
context.show(statistics_gen.outputs['statistics'])
Explanation: After StatisticsGen finishes running, we can visualize the outputted statistics. Try playing with the different plots!
End of explanation
schema_gen = tfx.components.SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=False)
context.run(schema_gen, enable_cache=True)
Explanation: SchemaGen
The SchemaGen component generates a schema based on your data statistics. (A schema defines the expected bounds, types, and properties of the features in your dataset.) It also uses the TensorFlow Data Validation library.
Note: The generated schema is best-effort and only tries to infer basic properties of the data. It is expected that you review and modify it as needed.
SchemaGen will take as input the statistics that we generated with StatisticsGen, looking at the training split by default.
End of explanation
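At its core, schema inference records each feature's type and, for categorical features, the set of observed values (its domain). A hedged stdlib sketch of that best-effort idea (SchemaGen's actual inference works from TFDV statistics, not raw rows):

```python
def infer_schema(rows):
    """Best-effort schema: a type plus a string-value domain per feature."""
    schema = {}
    for row in rows:
        for key, value in row.items():
            entry = schema.setdefault(
                key, {"type": type(value).__name__, "domain": set()})
            if isinstance(value, str):
                entry["domain"].add(value)
    return schema

rows = [
    {"payment_type": "Cash", "fare": 12.5},
    {"payment_type": "Credit Card", "fare": 7.25},
]
schema = infer_schema(rows)
print(schema["payment_type"]["domain"])  # contains 'Cash' and 'Credit Card'
print(schema["fare"]["type"])            # float
```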
context.show(schema_gen.outputs['schema'])
Explanation: After SchemaGen finishes running, we can visualize the generated schema as a table.
End of explanation
example_validator = tfx.components.ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
context.run(example_validator, enable_cache=True)
Explanation: Each feature in your dataset shows up as a row in the schema table, alongside its properties. The schema also captures all the values that a categorical feature takes on, denoted as its domain.
To learn more about schemas, see the SchemaGen documentation.
ExampleValidator
The ExampleValidator component detects anomalies in your data, based on the expectations defined by the schema. It also uses the TensorFlow Data Validation library.
ExampleValidator will take as input the statistics from StatisticsGen, and the schema from SchemaGen.
End of explanation
context.show(example_validator.outputs['anomalies'])
Explanation: After ExampleValidator finishes running, we can visualize the anomalies as a table.
End of explanation
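The essence of ExampleValidator is flagging records that violate the schema, for example a categorical value outside the recorded domain. A hedged sketch of that check (TFDV's real anomaly detection covers many more conditions):

```python
def find_anomalies(rows, domain_by_feature):
    """Report (row index, feature, value) for values outside a feature's domain."""
    anomalies = []
    for i, row in enumerate(rows):
        for feature, domain in domain_by_feature.items():
            if row.get(feature) not in domain:
                anomalies.append((i, feature, row.get(feature)))
    return anomalies

domain = {"payment_type": {"Cash", "Credit Card"}}
rows = [{"payment_type": "Cash"}, {"payment_type": "Bitcoin"}]
print(find_anomalies(rows, domain))  # [(1, 'payment_type', 'Bitcoin')]
```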
_taxi_constants_module_file = 'taxi_constants.py'
%%writefile {_taxi_constants_module_file}
NUMERICAL_FEATURES = ['trip_miles', 'fare', 'trip_seconds']
BUCKET_FEATURES = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
CATEGORICAL_NUMERICAL_FEATURES = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
CATEGORICAL_STRING_FEATURES = [
'payment_type',
'company',
]
# Number of vocabulary terms used for encoding categorical features.
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized categorical are hashed.
OOV_SIZE = 10
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def t_name(key):
Rename the feature keys so that they don't clash with the raw keys when
running the Evaluator component.
Args:
key: The original feature key
Returns:
key with '_xf' appended
return key + '_xf'
Explanation: In the anomalies table, we can see that there are no anomalies. This is what we'd expect, since this the first dataset that we've analyzed and the schema is tailored to it. You should review this schema -- anything unexpected means an anomaly in the data. Once reviewed, the schema can be used to guard future data, and anomalies produced here can be used to debug model performance, understand how your data evolves over time, and identify data errors.
Transform
The Transform component performs feature engineering for both training and serving. It uses the TensorFlow Transform library.
Transform will take as input the data from ExampleGen, the schema from SchemaGen, as well as a module that contains user-defined Transform code.
Let's see an example of user-defined Transform code below (for an introduction to the TensorFlow Transform APIs, see the tutorial). First, we define a few constants for feature engineering:
Note: The %%writefile cell magic will save the contents of the cell as a .py file on disk. This allows the Transform component to load your code as a module.
End of explanation
_taxi_transform_module_file = 'taxi_transform.py'
%%writefile {_taxi_transform_module_file}
import tensorflow as tf
import tensorflow_transform as tft
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_NUMERICAL_FEATURES = taxi_constants.NUMERICAL_FEATURES
_BUCKET_FEATURES = taxi_constants.BUCKET_FEATURES
_FEATURE_BUCKET_COUNT = taxi_constants.FEATURE_BUCKET_COUNT
_CATEGORICAL_NUMERICAL_FEATURES = taxi_constants.CATEGORICAL_NUMERICAL_FEATURES
_CATEGORICAL_STRING_FEATURES = taxi_constants.CATEGORICAL_STRING_FEATURES
_VOCAB_SIZE = taxi_constants.VOCAB_SIZE
_OOV_SIZE = taxi_constants.OOV_SIZE
_FARE_KEY = taxi_constants.FARE_KEY
_LABEL_KEY = taxi_constants.LABEL_KEY
def _make_one_hot(x, key):
Make a one-hot tensor to encode categorical features.
Args:
x: A dense tensor
key: A string key for the feature in the input
Returns:
A dense one-hot tensor as a float list
integerized = tft.compute_and_apply_vocabulary(x,
top_k=_VOCAB_SIZE,
num_oov_buckets=_OOV_SIZE,
vocab_filename=key, name=key)
depth = (
tft.experimental.get_vocabulary_size_by_name(key) + _OOV_SIZE)
one_hot_encoded = tf.one_hot(
integerized,
depth=tf.cast(depth, tf.int32),
on_value=1.0,
off_value=0.0)
return tf.reshape(one_hot_encoded, [-1, depth])
def _fill_in_missing(x):
Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
if not isinstance(x, tf.sparse.SparseTensor):
return x
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
outputs = {}
for key in _NUMERICAL_FEATURES:
# If sparse, make it dense, setting NaNs to 0 or '', and apply z-score.
outputs[taxi_constants.t_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]), name=key)
for key in _BUCKET_FEATURES:
outputs[taxi_constants.t_name(key)] = tf.cast(tft.bucketize(
_fill_in_missing(inputs[key]), _FEATURE_BUCKET_COUNT, name=key),
dtype=tf.float32)
for key in _CATEGORICAL_STRING_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(_fill_in_missing(inputs[key]), key)
for key in _CATEGORICAL_NUMERICAL_FEATURES:
outputs[taxi_constants.t_name(key)] = _make_one_hot(tf.strings.strip(
tf.strings.as_string(_fill_in_missing(inputs[key]))), key)
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[_FARE_KEY])
tips = _fill_in_missing(inputs[_LABEL_KEY])
outputs[_LABEL_KEY] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
Explanation: Next, we write a preprocessing_fn that takes in raw data as input, and returns transformed features that our model can train on:
End of explanation
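Two of the transforms above are easy to check by hand: tft.scale_to_z_score standardizes a column to zero mean and unit variance, and the label marks trips whose tip exceeds 20% of the fare. A hedged stdlib sketch of both (assuming population standard deviation, which matches dataset-wide scaling; tft computes these statistics over the full dataset in a Beam pass):

```python
from statistics import mean, pstdev

def z_score(values):
    """Standardize to zero mean and unit variance (population std)."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def big_tipper(tips, fare):
    """1 if the tip was more than 20% of the fare, else 0."""
    return int(tips > 0.2 * fare)

scaled = z_score([10.0, 20.0, 30.0])
print([round(v, 3) for v in scaled])    # [-1.225, 0.0, 1.225]
print(big_tipper(tips=3.0, fare=12.5))  # 1  (3.0 > 2.5)
print(big_tipper(tips=2.0, fare=12.5))  # 0
```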
transform = tfx.components.Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath(_taxi_transform_module_file))
context.run(transform, enable_cache=True)
Explanation: Now, we pass in this feature engineering code to the Transform component and run it to transform your data.
End of explanation
transform.outputs
Explanation: Let's examine the output artifacts of Transform. This component produces two types of outputs:
transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
transformed_examples represents the preprocessed training and evaluation data.
End of explanation
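The `_make_one_hot` helper in the module above pairs a top-k vocabulary with hash-based out-of-vocabulary buckets before one-hot encoding. A hedged stdlib sketch of that lookup (this is not tft's actual hashing, and Python's `hash` for strings is salted per process):

```python
def vocab_lookup(value, vocab, num_oov_buckets):
    """Map a value to an index: its vocabulary slot, or an OOV bucket."""
    if value in vocab:
        return vocab.index(value)
    # Python string hashing is randomized per run; tft uses a stable hash.
    return len(vocab) + (hash(value) % num_oov_buckets)

def one_hot(index, depth):
    return [1.0 if i == index else 0.0 for i in range(depth)]

vocab = ["Cash", "Credit Card"]
depth = len(vocab) + 2  # vocabulary size + OOV buckets
print(one_hot(vocab_lookup("Cash", vocab, 2), depth))  # [1.0, 0.0, 0.0, 0.0]
```

An out-of-vocabulary value such as "Bitcoin" lands in one of the two trailing OOV slots rather than raising an error.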
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
Explanation: Take a peek at the transform_graph artifact. It points to a directory containing three subdirectories.
End of explanation
# Get the URI of the output artifact representing the transformed examples, which is a directory
train_uri = os.path.join(transform.outputs['transformed_examples'].get()[0].uri, 'Split-train')
# Get the list of files in this directory (all compressed TFRecord files)
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
# Create a `TFRecordDataset` to read these files
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
# Iterate over the first 3 records and decode them.
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = tf.train.Example()
example.ParseFromString(serialized_example)
pp.pprint(example)
Explanation: The transformed_metadata subdirectory contains the schema of the preprocessed data. The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data.
We can also take a look at the first three transformed examples:
End of explanation
_taxi_trainer_module_file = 'taxi_trainer.py'
%%writefile {_taxi_trainer_module_file}
from typing import Dict, List, Text
import os
import glob
from absl import logging
import datetime
import tensorflow as tf
import tensorflow_transform as tft
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_transform import TFTransformOutput
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
_LABEL_KEY = taxi_constants.LABEL_KEY
_BATCH_SIZE = 40
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 200) -> tf.data.Dataset:
Generates features and label for tuning/training.
Args:
file_pattern: List of paths or patterns of input tfrecord files.
data_accessor: DataAccessor for converting input to RecordBatch.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
tf_transform_output.transformed_metadata.schema)
def _get_tf_examples_serving_signature(model, tf_transform_output):
Returns a serving signature that accepts `tensorflow.Example`.
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_inference = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def serve_tf_examples_fn(serialized_tf_example):
Returns the output to be used in the serving signature.
raw_feature_spec = tf_transform_output.raw_feature_spec()
# Remove label feature since these will not be present at serving time.
raw_feature_spec.pop(_LABEL_KEY)
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_inference(raw_features)
logging.info('serve_transformed_features = %s', transformed_features)
outputs = model(transformed_features)
# TODO(b/154085620): Convert the predicted labels from the model using a
# reverse-lookup (opposite of transform.py).
return {'outputs': outputs}
return serve_tf_examples_fn
def _get_transform_features_signature(model, tf_transform_output):
Returns a serving signature that applies tf.Transform to features.
# We need to track the layers in the model in order to save it.
# TODO(b/162357359): Revise once the bug is resolved.
model.tft_layer_eval = tf_transform_output.transform_features_layer()
@tf.function(input_signature=[
tf.TensorSpec(shape=[None], dtype=tf.string, name='examples')
])
def transform_features_fn(serialized_tf_example):
"""Returns the transformed_features to be fed as input to evaluator."""
raw_feature_spec = tf_transform_output.raw_feature_spec()
raw_features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
transformed_features = model.tft_layer_eval(raw_features)
logging.info('eval_transformed_features = %s', transformed_features)
return transformed_features
return transform_features_fn
def export_serving_model(tf_transform_output, model, output_dir):
"""Exports a keras model for serving.

Args:
  tf_transform_output: Wrapper around output of tf.Transform.
  model: A keras model to export for serving.
  output_dir: A directory where the model will be exported to.
"""
# The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
signatures = {
'serving_default':
_get_tf_examples_serving_signature(model, tf_transform_output),
'transform_features':
_get_transform_features_signature(model, tf_transform_output),
}
model.save(output_dir, save_format='tf', signatures=signatures)
def _build_keras_model(tf_transform_output: TFTransformOutput
) -> tf.keras.Model:
"""Creates a DNN Keras model for classifying taxi data.

Args:
  tf_transform_output: [TFTransformOutput], the outputs from Transform

Returns:
  A keras Model.
"""
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(_LABEL_KEY)
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
# TODO(b/208879020): Move into schema such that spec.shape is [1] and not
# [] for scalars.
inputs[key] = tf.keras.layers.Input(
shape=spec.shape or [1], name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
output = tf.keras.layers.Concatenate()(tf.nest.flatten(inputs))
output = tf.keras.layers.Dense(100, activation='relu')(output)
output = tf.keras.layers.Dense(70, activation='relu')(output)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(20, activation='relu')(output)
output = tf.keras.layers.Dense(1)(output)
return tf.keras.Model(inputs=inputs, outputs=output)
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
"""Train the model based on given args.

Args:
  fn_args: Holds args used to train the model as name/value pairs.
"""
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
eval_dataset = _input_fn(fn_args.eval_files, fn_args.data_accessor,
tf_transform_output, _BATCH_SIZE)
model = _build_keras_model(tf_transform_output)
model.compile(
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[tf.keras.metrics.BinaryAccuracy()])
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=fn_args.model_run_dir, update_freq='batch')
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
callbacks=[tensorboard_callback])
# Export the model.
export_serving_model(tf_transform_output, model, fn_args.serving_model_dir)
Explanation: After the Transform component has transformed your data into features, the next step is to train a model.
Trainer
The Trainer component will train a model that you define in TensorFlow. The default Trainer supports the Estimator API; to use the Keras API, you need to specify the Generic Trainer by setting custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor) in the Trainer's constructor.
Trainer takes as input the schema from SchemaGen, the transformed data and graph from Transform, training parameters, as well as a module that contains user-defined model code.
Let's see an example of user-defined model code below (for an introduction to the TensorFlow Keras APIs, see the tutorial):
End of explanation
trainer = tfx.components.Trainer(
module_file=os.path.abspath(_taxi_trainer_module_file),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=10000),
eval_args=tfx.proto.EvalArgs(num_steps=5000))
context.run(trainer, enable_cache=True)
Explanation: Now, we pass in this model code to the Trainer component and run it to train the model.
End of explanation
model_artifact_dir = trainer.outputs['model'].get()[0].uri
pp.pprint(os.listdir(model_artifact_dir))
model_dir = os.path.join(model_artifact_dir, 'Format-Serving')
pp.pprint(os.listdir(model_dir))
Explanation: Analyze Training with TensorBoard
Take a peek at the trainer artifact. It points to a directory containing the model subdirectories.
End of explanation
model_run_artifact_dir = trainer.outputs['model_run'].get()[0].uri
%load_ext tensorboard
%tensorboard --logdir {model_run_artifact_dir}
Explanation: Optionally, we can connect TensorBoard to the Trainer to analyze our model's training curves.
End of explanation
# Imported files such as taxi_constants are normally cached, so changes are
# not honored after the first import. Normally this is good for efficiency, but
# during development when we may be iterating code it can be a problem. To
# avoid this problem during development, reload the file.
import taxi_constants
import sys
if 'google.colab' in sys.modules: # Testing to see if we're doing development
import importlib
importlib.reload(taxi_constants)
eval_config = tfma.EvalConfig(
model_specs=[
# This assumes a serving model with signature 'serving_default'. If
# using estimator based EvalSavedModel, add signature_name: 'eval' and
# remove the label_key.
tfma.ModelSpec(
signature_name='serving_default',
label_key=taxi_constants.LABEL_KEY,
preprocessing_function_names=['transform_features'],
)
],
metrics_specs=[
tfma.MetricsSpec(
# The metrics added here are in addition to those saved with the
# model (assuming either a keras model or EvalSavedModel is used).
# Any metrics added into the saved model (for example using
# model.compile(..., metrics=[...]), etc) will be computed
# automatically.
# To add validation thresholds for metrics saved with the model,
# add them keyed by metric name to the thresholds map.
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.5}),
# Change threshold will be ignored if there is no
# baseline model resolved from MLMD (first run).
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': -1e-10})))
]
)
],
slicing_specs=[
# An empty slice spec means the overall slice, i.e. the whole dataset.
tfma.SlicingSpec(),
# Data can be sliced along a feature column. In this case, data is
# sliced along feature column trip_start_hour.
tfma.SlicingSpec(
feature_keys=['trip_start_hour'])
])
Explanation: Evaluator
The Evaluator component computes model performance metrics over the evaluation set. It uses the TensorFlow Model Analysis library. The Evaluator can also optionally validate that a newly trained model is better than the previous model. This is useful in a production pipeline setting where you may automatically train and validate a model every day. In this notebook, we only train one model, so the Evaluator will automatically label the model as "good".
Evaluator will take as input the data from ExampleGen, the trained model from Trainer, and slicing configuration. The slicing configuration allows you to slice your metrics on feature values (e.g. how does your model perform on taxi trips that start at 8am versus 8pm?). See an example of this configuration below:
End of explanation
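To build intuition for what slicing buys you, here is a small self-contained numpy sketch (synthetic data and names, not TFMA itself): the same accuracy metric is computed on the whole dataset and then per value of the slice feature, which is what the SlicingSpec entries above ask TFMA to do for a real model.

```python
import numpy as np

# Toy example of "slicing": one metric, computed overall and per feature value.
rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=1000)          # slice feature: trip_start_hour
labels = rng.integers(0, 2, size=1000)
preds = np.where(rng.random(1000) < 0.8, labels, 1 - labels)  # ~80% accurate

overall = (preds == labels).mean()
by_hour = {int(h): float((preds[hours == h] == labels[hours == h]).mean())
           for h in np.unique(hours)}
print(f"overall accuracy: {overall:.3f}")
print(f"accuracy for trips starting at hour 8: {by_hour[8]:.3f}")
```

Per-slice accuracies fluctuate around the overall number; large, systematic gaps between slices are exactly what this analysis is meant to surface.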
# Use TFMA to compute evaluation statistics over features of a model and
# validate them against a baseline.
# The model resolver is only required if performing model validation in addition
# to evaluation. In this case we validate against the latest blessed model. If
# no model has been blessed before (as in this case) the evaluator will make our
# candidate the first blessed model.
model_resolver = tfx.dsl.Resolver(
strategy_class=tfx.dsl.experimental.LatestBlessedModelStrategy,
model=tfx.dsl.Channel(type=tfx.types.standard_artifacts.Model),
model_blessing=tfx.dsl.Channel(
type=tfx.types.standard_artifacts.ModelBlessing)).with_id(
'latest_blessed_model_resolver')
context.run(model_resolver, enable_cache=True)
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
context.run(evaluator, enable_cache=True)
Explanation: Next, we give this configuration to Evaluator and run it.
End of explanation
evaluator.outputs
Explanation: Now let's examine the output artifacts of Evaluator.
End of explanation
context.show(evaluator.outputs['evaluation'])
Explanation: Using the evaluation output we can show the default visualization of global metrics on the entire evaluation set.
End of explanation
import tensorflow_model_analysis as tfma
# Get the TFMA output result path and load the result.
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
tfma_result = tfma.load_eval_result(PATH_TO_RESULT)
# Show data sliced along feature column trip_start_hour.
tfma.view.render_slicing_metrics(
tfma_result, slicing_column='trip_start_hour')
Explanation: To see the visualization for sliced evaluation metrics, we can directly call the TensorFlow Model Analysis library.
End of explanation
blessing_uri = evaluator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
Explanation: This visualization shows the same metrics, but computed at every feature value of trip_start_hour instead of on the entire evaluation set.
TensorFlow Model Analysis supports many other visualizations, such as Fairness Indicators and plotting a time series of model performance. To learn more, see the tutorial.
Since we added thresholds to our config, validation output is also available. The presence of a blessing artifact indicates that our model passed validation. Since this is the first validation being performed, the candidate is automatically blessed.
End of explanation
PATH_TO_RESULT = evaluator.outputs['evaluation'].get()[0].uri
print(tfma.load_validation_result(PATH_TO_RESULT))
Explanation: Now we can also verify the success by loading the validation result record:
End of explanation
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher, enable_cache=True)
Explanation: Pusher
The Pusher component is usually at the end of a TFX pipeline. It checks whether a model has passed validation, and if so, exports the model to _serving_model_dir.
End of explanation
pusher.outputs
Explanation: Let's examine the output artifacts of Pusher.
End of explanation
push_uri = pusher.outputs['pushed_model'].get()[0].uri
model = tf.saved_model.load(push_uri)
for item in model.signatures.items():
pp.pprint(item)
Explanation: In particular, the Pusher will export your model in the SavedModel format, which looks like this:
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations
Step1: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with the feature mean and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
Step2: Split data into training data and blind data, and output as Numpy arrays
Step3: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
Step4: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model)
Step5: We train the CNN and evaluate it on precision/recall.
Step6: We display the learned 1D convolution kernels
Step7: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
Step8: Prediction
To predict the STUART and CRAWFORD blind wells we do the following
Step9: Run the model on the blind data
Output a CSV
Plot the wells in the notebook | Python Code:
%%sh
pip install pandas
pip install scikit-learn
pip install keras
from __future__ import print_function
import time
import numpy as np
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
from keras.preprocessing import sequence
from keras.models import Model, Sequential
from keras.constraints import maxnorm, nonneg
from keras.optimizers import SGD, Adam, Adamax, Nadam
from keras.regularizers import l2, activity_l2
from keras.layers import Input, Dense, Dropout, Activation, Convolution1D, Cropping1D, Cropping2D, Permute, Flatten, MaxPooling1D, merge
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold , StratifiedKFold
from classification_utilities import display_cm, display_adj_cm
from sklearn.metrics import confusion_matrix, f1_score
from sklearn import preprocessing
from sklearn.model_selection import GridSearchCV
Explanation: Facies classification using Convolutional Neural Networks
Team StoDIG - Statoil Deep-learning Interest Group
David Wade, John Thurmond & Eskil Kulseth Dahl
In this python notebook we propose a facies classification model, building on the simple Neural Network solution proposed by LA_Team in order to outperform the prediction model proposed in the predicting facies from well logs challenge.
Given the limited size of the training data set, Deep Learning is not likely to exceed the accuracy of results from refined Machine Learning techniques (such as Gradient Boosted Trees). However, we chose to use the opportunity to advance our understanding of Deep Learning network design, and have enjoyed participating in the contest. With a substantially larger training set and perhaps more facies ambiguity, Deep Learning could be a preferred approach to this sort of problem.
We use three key innovations:
- Inserting a convolutional layer as the first layer in the Neural Network
- Initializing the weights of this layer to detect gradients and extrema
- Adding Dropout regularization to prevent overfitting
Since our submission #2 we have:
- Added the distance to the next NM_M transition as a feature (thanks to geoLEARN where we spotted this)
- Removed Recruit F9 from training
... and since our submission #3 we have:
- Included training/predicting on the Formation categories
- Made our facies plot better, including demonstrating our confidence in each prediction
... and since our submission #4 we have:
- Added distance to the next Formation transition as another feature
- Used our facies probabilities plot to better understand our predictions
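The second innovation -- initializing the convolution weights so the kernels start out as gradient and extremum detectors -- can be sketched with plain numpy. The window size and toy trace below are illustrative only; the notebook's actual initialization of starting_weights appears in the model-building cell further down.

```python
import numpy as np

window = 11                         # samples seen by each 1D kernel
r = window // 2
x = np.array([0, 0, 0, 1, 2, 3, 4, 5, 4, 3, 2], dtype=float)  # toy log trace

j = np.arange(window)
ramp_up = j / (2.0 * window)                          # fires on increasing logs
ramp_down = ramp_up[::-1]                             # fires on decreasing logs
peak = (window - 2 * np.abs(r - j)) / (2.0 * window)  # fires on local extrema

for name, k in [("ramp_up", ramp_up), ("ramp_down", ramp_down), ("peak", peak)]:
    print(name, round(float(np.dot(k, x)), 3))
# The mostly-increasing trace excites ramp_up much more than ramp_down.
```

Starting from informative kernels like these, rather than random weights, gives the network useful gradient-style features on day one while still letting training reshape or discard them.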
Problem Modeling
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
* Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10),
photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies |Label| Adjacent Facies
:---: | :---: |:--:
1 |SS| 2
2 |CSiS| 1,3
3 |FSiS| 2
4 |SiSh| 5
5 |MS| 4,6
6 |WS| 5,7
7 |D| 6,8
8 |PS| 6,7,9
9 |BS| 7,8
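To make the adjacency table concrete, here is a self-contained toy showing how "adjacent accuracy" forgives misclassifications between neighboring facies. The notebook defines the same adjacency list and metric further down; the confusion matrix here is made up purely for illustration.

```python
import numpy as np

# Adjacency table above, 0-indexed: facies i may be confused with adjacent_facies[i].
adjacent_facies = [[1], [0, 2], [1], [4], [3, 5], [4, 6, 7], [5, 7], [5, 6, 8], [6, 7]]

def accuracy_adjacent(conf):
    """Count a prediction as correct if it hits the true facies or a neighbor."""
    correct = 0.0
    for i in range(conf.shape[0]):
        correct += conf[i][i]
        for j in adjacent_facies[i]:
            correct += conf[i][j]
    return correct / conf.sum()

conf = np.eye(9) * 10.0
conf[0, 0] = 5.0
conf[0, 1] = 5.0   # five SS samples called CSiS -- an adjacent facies
print(round(np.trace(conf) / conf.sum(), 3))  # plain accuracy: 0.944
print(accuracy_adjacent(conf))                # adjacent accuracy: 1.0
```

SS-to-CSiS confusion costs nothing under the adjacent metric, reflecting the gradual blending between neighboring facies described above.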
Setup
Check we have all the libraries we need, and import the modules we require. Note that we have used the Theano backend for Keras, and to achieve a reasonable training time we have used an NVidia K20 GPU.
End of explanation
data = pd.read_csv('train_test_data.csv')
# Set 'Well Name' and 'Formation' fields as categories
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
def coding(col, codeDict):
colCoded = pd.Series(col, copy=True)
for key, value in codeDict.items():
colCoded.replace(key, value, inplace=True)
return colCoded
data['Formation_coded'] = coding(data['Formation'], {'A1 LM':1,'A1 SH':2,'B1 LM':3,'B1 SH':4,'B2 LM':5,'B2 SH':6,'B3 LM':7,'B3 SH':8,'B4 LM':9,'B4 SH':10,'B5 LM':11,'B5 SH':12,'C LM':13,'C SH':14})
formation = data['Formation_coded'].values[:,np.newaxis]
# Parameters
feature_names = ['Depth', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
well_names_test = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'Recruit F9', 'NEWBY', 'CHURCHMAN BIBLE']
well_names_validate = ['STUART', 'CRAWFORD']
data_vectors = data[feature_names].values
correct_facies_labels = data['Facies'].values
nm_m = data['NM_M'].values
nm_m_dist = np.zeros((nm_m.shape[0],1), dtype=int)
for i in range(nm_m.shape[0]):
count=1
while (i+count<nm_m.shape[0]-1 and nm_m[i+count] == nm_m[i]):
count = count+1
nm_m_dist[i] = count
nm_m_dist.reshape(nm_m_dist.shape[0],1)
formation_dist = np.zeros((formation.shape[0],1), dtype=int)
for i in range(formation.shape[0]):
count=1
while (i+count<formation.shape[0]-1 and formation[i+count] == formation[i]):
count = count+1
formation_dist[i] = count
formation_dist.reshape(formation_dist.shape[0],1)
well_labels = data[['Well Name', 'Facies']].values
depth = data['Depth'].values
# Fill missing values and normalize for 'PE' field
imp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)
imp.fit(data_vectors)
data_vectors = imp.transform(data_vectors)
data_vectors = np.hstack([data_vectors, nm_m_dist, formation, formation_dist])
scaler = preprocessing.StandardScaler().fit(data_vectors)
scaled_features = scaler.transform(data_vectors)
data_out = np.hstack([well_labels, scaled_features])
Explanation: Data ingest
We load the training and testing data to preprocess it for further analysis, filling the missing data values in the PE field with the feature mean and proceeding to normalize the data that will be fed into our model. We now incorporate the Imputation from Paolo Bestagini via LA_Team's Submission 5.
End of explanation
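The NM_M and Formation "distance to next transition" features above are built with nested while loops; the same feature can be computed in one pass. A hedged numpy sketch (illustrative, applies equally to nm_m_dist and formation_dist; note the notebook's `i+count < n-1` bound never examines the final sample, so its values in the array's last run come out one smaller than this version):

```python
import numpy as np

def distance_to_next_change(a):
    """For each sample, the number of samples until the value changes;
    the last run extends to the end of the array."""
    a = np.asarray(a)
    change = np.nonzero(a[1:] != a[:-1])[0] + 1   # indices where a new run starts
    boundaries = np.append(change, len(a))        # treat the array end as a boundary
    out = np.empty(len(a), dtype=int)
    start = 0
    for b in boundaries:
        out[start:b] = b - np.arange(start, b)
        start = b
    return out

nm_m = np.array([1, 1, 1, 2, 2, 1, 1, 1, 1])
print(distance_to_next_change(nm_m))  # [3 2 1 2 1 4 3 2 1]
```

Counting down within each run like this avoids the quadratic scan of the per-sample while loop on long wells.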
def preprocess(data_out):
data = data_out
X = data[0:4149,0:13]
y = np.concatenate((data[0:4149,0].reshape(4149,1), np_utils.to_categorical(correct_facies_labels[0:4149]-1)), axis=1)
X_test = data[4149:,0:13]
return X, y, X_test
X_train_in, y_train, X_test_in = preprocess(data_out)
print(X_train_in.shape)
Explanation: Split data into training data and blind data, and output as Numpy arrays
End of explanation
conv_domain = 11
# Reproducibility
np.random.seed(7)
# Load data
def expand_dims(input):
r = int((conv_domain-1)/2)
l = input.shape[0]
n_input_vars = input.shape[1]
output = np.zeros((l, conv_domain, n_input_vars))
for i in range(l):
for j in range(conv_domain):
for k in range(n_input_vars):
output[i,j,k] = input[min(i+j-r,l-1),k]
return output
X_train = np.empty((0,conv_domain,11), dtype=float)
X_test = np.empty((0,conv_domain,11), dtype=float)
y_select = np.empty((0,9), dtype=int)
well_names_train = ['SHRIMPLIN', 'ALEXANDER D', 'SHANKLE', 'LUKE G U', 'KIMZEY A', 'CROSS H CATTLE', 'NOLAN', 'NEWBY', 'CHURCHMAN BIBLE']
for wellId in well_names_train:
X_train_subset = X_train_in[X_train_in[:, 0] == wellId][:,2:13]
X_train_subset = expand_dims(X_train_subset)
X_train = np.concatenate((X_train,X_train_subset),axis=0)
y_select = np.concatenate((y_select, y_train[y_train[:, 0] == wellId][:,1:11]), axis=0)
for wellId in well_names_validate:
X_test_subset = X_test_in[X_test_in[:, 0] == wellId][:,2:13]
X_test_subset = expand_dims(X_test_subset)
X_test = np.concatenate((X_test,X_test_subset),axis=0)
y_train = y_select
print(X_train.shape)
print(X_test.shape)
print(y_select.shape)
Explanation: Data Augmentation
We expand the input data to be acted on by the convolutional layer.
End of explanation
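The triple loop in expand_dims can be expressed with a single fancy-indexing operation. A hedged sketch (assumed behavior: it clips indices at both edges, repeating edge samples, whereas the loop's min(i+j-r, l-1) clamps only the right edge and lets negative left-edge indices wrap around Python-style, so the first few windows differ):

```python
import numpy as np

def expand_dims_vectorized(x, window=11):
    """Build per-sample windows of shape (len(x), window, n_features)."""
    l = x.shape[0]
    r = window // 2
    # Each row i gathers samples i-r .. i+r, clipped to valid indices.
    idx = np.clip(np.arange(l)[:, None] + np.arange(window)[None, :] - r, 0, l - 1)
    return x[idx]

x = np.arange(20, dtype=float).reshape(10, 2)  # 10 samples, 2 features
w = expand_dims_vectorized(x, window=5)
print(w.shape)        # (10, 5, 2)
print(w[0, :, 0])     # [0. 0. 0. 2. 4.]  -- left edge repeats sample 0
```

Beyond speed, the explicit clip makes the edge handling easy to audit and change.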
# Set parameters
input_dim = 11
output_dim = 9
n_per_batch = 128
epochs = 100
crop_factor = int(conv_domain/2)
filters_per_log = 7
n_convolutions = input_dim*filters_per_log
starting_weights = [np.zeros((conv_domain, 1, input_dim, n_convolutions)), np.ones((n_convolutions))]
norm_factor=float(conv_domain)*2.0
for i in range(input_dim):
for j in range(conv_domain):
starting_weights[0][j, 0, i, i*filters_per_log+0] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+1] = j/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+2] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+3] = (conv_domain-j)/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+4] = (2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+5] = (conv_domain-2*abs(crop_factor-j))/norm_factor
starting_weights[0][j, 0, i, i*filters_per_log+6] = 0.25
def dnn_model(init_dropout_rate=0.5, main_dropout_rate=0.5,
hidden_dim_1=24, hidden_dim_2=36,
max_norm=10, nb_conv=n_convolutions):
# Define the model
inputs = Input(shape=(conv_domain,input_dim,))
inputs_dropout = Dropout(init_dropout_rate)(inputs)
x1 = Convolution1D(nb_conv, conv_domain, border_mode='valid', weights=starting_weights, activation='tanh', input_shape=(conv_domain,input_dim), input_length=input_dim, W_constraint=nonneg())(inputs_dropout)
x1 = Flatten()(x1)
xn = Cropping1D(cropping=(crop_factor,crop_factor))(inputs_dropout)
xn = Flatten()(xn)
xA = merge([x1, xn], mode='concat')
xA = Dropout(main_dropout_rate)(xA)
xA = Dense(hidden_dim_1, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(xA)
x = merge([xA, xn], mode='concat')
x = Dropout(main_dropout_rate)(x)
x = Dense(hidden_dim_2, init='uniform', activation='relu', W_constraint=maxnorm(max_norm))(x)
predictions = Dense(output_dim, init='uniform', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
optimizerNadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=1e-08, schedule_decay=0.004)
model.compile(loss='categorical_crossentropy', optimizer=optimizerNadam, metrics=['accuracy'])
return model
# Load the model
t0 = time.time()
model_dnn = dnn_model()
model_dnn.summary()
t1 = time.time()
print("Load time = %d" % (t1-t0) )
def plot_weights(n_convs_disp=input_dim):
layerID=2
print(model_dnn.layers[layerID].get_weights()[0].shape)
print(model_dnn.layers[layerID].get_weights()[1].shape)
fig, ax = plt.subplots(figsize=(12,10))
for i in range(n_convs_disp):
plt.subplot(input_dim,1,i+1)
plt.imshow(model_dnn.layers[layerID].get_weights()[0][:,0,i,:], interpolation='none')
plt.show()
plot_weights(1)
Explanation: Convolutional Neural Network
We build a CNN with the following layers (no longer using Sequential() model):
Dropout layer on input
One 1D convolutional layer (7-point radius)
One 1D cropping layer (just take actual log-value of interest)
Series of Merge layers re-adding result of cropping layer plus Dropout & Fully-Connected layers
Instead of running CNN with gradient features added, we initialize the Convolutional layer weights to achieve this
This allows the CNN to reject them, adjust them or turn them into something else if required
End of explanation
#Train model
t0 = time.time()
model_dnn.fit(X_train, y_train, batch_size=n_per_batch, nb_epoch=epochs, verbose=0)
t1 = time.time()
print("Train time = %d seconds" % (t1-t0) )
# Predict Values on Training set
t0 = time.time()
y_predicted = model_dnn.predict( X_train , batch_size=n_per_batch, verbose=2)
t1 = time.time()
print("Test time = %d seconds" % (t1-t0) )
# Print Report
# Format output [0 - 8 ]
y_ = np.zeros((len(y_train),1))
for i in range(len(y_train)):
y_[i] = np.argmax(y_train[i])
y_predicted_ = np.zeros((len(y_predicted), 1))
for i in range(len(y_predicted)):
y_predicted_[i] = np.argmax( y_predicted[i] )
# Confusion Matrix
conf = confusion_matrix(y_, y_predicted_)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
# Print Results
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("\nConfusion Matrix")
display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True)
Explanation: We train the CNN and evaluate it on precision/recall.
End of explanation
plot_weights()
Explanation: We display the learned 1D convolution kernels
End of explanation
# Cross Validation
def cross_validate():
t0 = time.time()
estimator = KerasClassifier(build_fn=dnn_model, nb_epoch=epochs, batch_size=n_per_batch, verbose=0)
skf = StratifiedKFold(n_splits=5, shuffle=True)
results_dnn = cross_val_score(estimator, X_train, y_train, cv= skf.get_n_splits(X_train, y_train))
t1 = time.time()
print("Cross Validation time = %d" % (t1-t0) )
print(' Cross Validation Results')
print( results_dnn )
print(np.mean(results_dnn))
cross_validate()
Explanation: In order to avoid overfitting, we evaluate our model by running a 5-fold stratified cross-validation routine.
End of explanation
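For intuition, a stratified k-fold keeps each class's proportion the same in every fold. A minimal numpy sketch of the idea (illustrative only -- the notebook uses sklearn's StratifiedKFold for the real cross-validation):

```python
import numpy as np

def stratified_folds(y, n_splits=5, seed=7):
    """Shuffle each class's indices and deal them round-robin into folds,
    so class proportions are preserved in every fold."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(n_splits)]
    for cls in np.unique(y):
        idx = rng.permutation(np.where(y == cls)[0])
        for i, j in enumerate(idx):
            folds[i % n_splits].append(j)
    return [np.sort(f) for f in folds]

y = np.array([0] * 60 + [1] * 40)
folds = stratified_folds(y)
# Each fold of 20 samples keeps the 60/40 balance: 12 zeros, 8 ones.
print([int((y[f] == 0).sum()) for f in folds])  # [12, 12, 12, 12, 12]
```

With the imbalanced facies counts in this dataset, plain random folds could leave a rare facies out of a fold entirely; stratification avoids that.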
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
def make_facies_log_plot(logs, facies_colors, y_test=None, wellId=None):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
facies = np.zeros(2*(int(zbot-ztop)+1))
shift = 0
depth = ztop
for i in range(logs.Depth.count()-1):
while (depth < logs.Depth.values[i] + 0.25 and depth < zbot+0.25):
if (i<logs.Depth.count()-1):
new = logs['Facies'].values[i]
facies[shift] = new
depth += 0.5
shift += 1
facies = facies[0:facies.shape[0]-1]
cluster=np.repeat(np.expand_dims(facies,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=8, gridspec_kw={'width_ratios':[1,1,1,1,1,1,2,2]}, figsize=(10, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
ax[5].plot(logs.NM_M, logs.Depth, '-', color='black')
if (y_test is not None):
for i in range(9):
if (wellId == 'STUART'):
ax[6].plot(y_test[0:474,i], logs.Depth, color=facies_colors[i], lw=1.5)
else:
ax[6].plot(y_test[474:,i], logs.Depth, color=facies_colors[i], lw=1.5)
im=ax[7].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[7])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=5)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel("NM_M")
ax[5].set_xlim(logs.NM_M.min()-1.,logs.NM_M.max()+1.)
ax[6].set_xlabel("Facies Prob")
ax[6].set_xlim(0.0,1.0)
ax[7].set_xlabel('Facies')
ax[0].set_yticklabels([]);
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[6].set_xticklabels([]); ax[7].set_xticklabels([]);
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
Explanation: Prediction
To predict the STUART and CRAWFORD blind wells we do the following:
Set up a plotting function to display the logs & facies.
End of explanation
# DNN model Prediction
y_test = model_dnn.predict( X_test , batch_size=n_per_batch, verbose=0)
predictions_dnn = np.zeros((len(y_test),1))
for i in range(len(y_test)):
predictions_dnn[i] = np.argmax(y_test[i]) + 1
predictions_dnn = predictions_dnn.astype(int)
# Store results
train_data = pd.read_csv('train_test_data.csv')
test_data = pd.read_csv('../validation_data_nofacies.csv')
test_data['Facies'] = predictions_dnn
test_data.to_csv('Prediction_StoDIG_5_CNN.csv')
for wellId in well_names_validate:
make_facies_log_plot( test_data[test_data['Well Name'] == wellId], facies_colors=facies_colors, y_test=y_test, wellId=wellId)
#for wellId in well_names_test:
# make_facies_log_plot( train_data[train_data['Well Name'] == wellId], facies_colors=facies_colors)
Explanation: Run the model on the blind data
Output a CSV
Plot the wells in the notebook
End of explanation |
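As an aside, the per-sample `np.argmax` loop used above to turn class probabilities into facies labels can be collapsed into a single vectorized call; a small sketch on made-up probability rows:

```python
import numpy as np

# Made-up class-probability rows (one row per sample), standing in for y_test
probs = np.array([[0.1, 0.7, 0.2],
                  [0.5, 0.3, 0.2],
                  [0.2, 0.2, 0.6]])

# Row-wise argmax, shifted by 1 so labels run 1..n_classes like the facies codes
labels = probs.argmax(axis=1) + 1
print(labels)  # [2 1 3]
```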
7,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: b. Confidence Intervals.
Calculate the first, second, and third confidence intervals.
Plot the PDF and the first, second, and third confidence intervals.
Step5: Exercise 4 | Python Code:
# Useful Functions
class DiscreteRandomVariable:
def __init__(self, a=0, b=1):
self.variableType = ""
self.low = a
self.high = b
return
def draw(self, numberOfSamples):
samples = np.random.randint(self.low, self.high, numberOfSamples)
return samples
class BinomialRandomVariable(DiscreteRandomVariable):
def __init__(self, numberOfTrials = 10, probabilityOfSuccess = 0.5):
self.variableType = "Binomial"
self.numberOfTrials = numberOfTrials
self.probabilityOfSuccess = probabilityOfSuccess
return
def draw(self, numberOfSamples):
samples = np.random.binomial(self.numberOfTrials, self.probabilityOfSuccess, numberOfSamples)
return samples
def factorial(n):return reduce(lambda x,y:x*y,[1]+range(1,n+1))
# Useful Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.stats as stats
from statsmodels.stats import stattools
from __future__ import division
Explanation: Exercises: Random Variables - Answer Key
By Christopher van Hoecke, Max Margenot, and Delaney Mackenzie
Lecture Link :
https://www.quantopian.com/lectures/random-variables
IMPORTANT NOTE:
This lecture corresponds to the Random Variables lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Key Concepts
End of explanation
# Histograms with 10 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(10), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Coin Tosses']);
# Histograms with 1000000 tosses.
cointoss = DiscreteRandomVariable(1, 3)
plt.hist(cointoss.draw(1000000), align = 'mid')
plt.xlabel('Value')
plt.ylabel('Occurences')
plt.legend(['Coin Tosses']);
Explanation: Exercise 1 : Uniform Distribution
Plot the histogram of 10 tosses with a fair coin (let 1 be heads and 2 be tails).
Plot the histogram of 1000000 tosses of a fair coin
End of explanation
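As a quick sanity check on Exercise 1, the sample mean of many fair-coin draws (1 = heads, 2 = tails) should settle near 1.5 by the law of large numbers; a seeded sketch:

```python
import numpy as np

np.random.seed(0)  # fixed seed so the check is repeatable
tosses = np.random.randint(1, 3, 1000000)  # each draw is 1 or 2, as in DiscreteRandomVariable(1, 3)
print(tosses.mean())  # very close to 1.5
```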
# Binomial distribution with p=0.25 and n=20
binomialdistribution = BinomialRandomVariable(20, 0.25)
bins = np.arange(0,21,1)
n, bins, patches = plt.hist(binomialdistribution.draw(1000000), bins=bins)
plt.title('Binomial Distribution with p=0.25 and n=20')
plt.xlabel('Value')
plt.ylabel('Occurrences')
plt.legend(['Die Rolls']);
# Finding x which occurs most often
elem = np.argmax(n)
print 'Maximum occurrence for x =', elem
# Calculating the probability of finding x.
n = 20
p = 0.25  # same success probability as the distribution sampled above
x = elem
n_factorial = factorial(n)
x_factorial = factorial(x)
n_x_factorial = factorial(n-x)
fact = n_factorial / (n_x_factorial * x_factorial)
probability = fact * (p**x) * ((1-p)**(n-x))
print 'probability of x = %d' % x, probability
Explanation: Exercise 2 : Binomial Distributions.
Graph the histogram of 1000000 samples from a binomial distribution of probability 0.25 and $n = 20$
Find the value that occurs the most often
Calculate the probability of the value that occurs the most often occurring. Use the factorial(x) function to find factorials
End of explanation
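The factorial-based probability in the answer above can be cross-checked against `scipy.stats.binom.pmf`; the sketch below uses n = 20, p = 0.25 and x = 5 (the mode of that distribution) purely for illustration:

```python
import numpy as np
from math import factorial
from scipy.stats import binom

n, p, x = 20, 0.25, 5
manual = factorial(n) / (factorial(x) * factorial(n - x)) * p**x * (1 - p)**(n - x)
print(manual, binom.pmf(x, n, p))  # the two values agree (~0.202)
```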
# Graphing a normal distribution pdf.
mu = 0
sigma = 5
x = np.linspace(-30, 30, 200)
y = (1/(sigma * np.sqrt(2 * 3.14159))) * np.exp(-(x - mu)*(x - mu) / (2 * sigma * sigma))
plt.plot(x, y)
plt.title('Graph of PDF with mu = 0 and sigma = 5')
plt.xlabel('Value')
plt.ylabel('Probability');
Explanation: Exercise 3 : Normal Distributions
a. Graphing
Graph a normal distribution using the Probability Density Function bellow, with a mean of 0 and standard deviation of 5.
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
End of explanation
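One quick way to verify that the curve above is a valid probability density is to check that it integrates to approximately 1 over a wide enough range; a sketch with the same mu = 0, sigma = 5:

```python
import numpy as np

mu, sigma = 0, 5
x = np.linspace(-30, 30, 2001)  # +/- 6 sigma, so the truncated tails are negligible
y = (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp(-(x - mu)**2 / (2 * sigma**2))
area = y.sum() * (x[1] - x[0])  # simple Riemann-sum approximation of the integral
print(area)  # approximately 1.0
```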
# finding the 1st, 2nd, and third confidence intervals.
first_ci = (-sigma, sigma)
second_ci = (-2*sigma, 2*sigma)
third_ci = (-3*sigma, 3*sigma)
print '1-sigma -> mu +/-', sigma
print '2-sigma -> mu +/-', second_ci[1]
print '3-sigma -> mu +/-', third_ci[1]
plt.axvline(first_ci[0], linestyle='dashdot', label='68% of observations', color = 'blue')
plt.axvline(first_ci[1], linestyle='dashdot', label='68% of observations', color = 'blue')
plt.axvline(second_ci[0], linestyle='dashdot', label='95% of observations', color = 'red')
plt.axvline(second_ci[1],linestyle='dashdot', color = 'red')
plt.axvline(third_ci[0], linestyle='dashdot', label='99% of observations', color = 'green')
plt.axvline(third_ci[1], linestyle='dashdot', color = 'green')
plt.plot(x,y)
plt.title('Graph of PDF with 3 confidence intervals.')
plt.legend();
Explanation: b. Confidence Intervals.
Calculate the first, second, and third confidence intervals.
Plot the PDF and the first, second, and third confidence intervals.
End of explanation
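The 68-95-99.7 rule behind these intervals can also be verified empirically by sampling from the same distribution; a seeded sketch:

```python
import numpy as np

np.random.seed(42)
mu, sigma = 0, 5
samples = np.random.normal(mu, sigma, 1000000)
within = [np.mean(np.abs(samples - mu) < k * sigma) for k in (1, 2, 3)]
print(within)  # roughly [0.68, 0.95, 0.997]
```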
# Collect prices and returns.
prices = get_pricing('SPY', start_date = '2016-01-01', end_date='2016-05-01',
fields = 'price')
returns = prices.pct_change()[1:]
# Calculating the mean and standard deviation.
sample_mean = np.mean(returns)
sample_std_dev = np.std(returns)
x = np.linspace(-(sample_mean + 4 * sample_std_dev), (sample_mean + 4 * sample_std_dev), len(returns))
sample_distribution = ((1/(sample_std_dev * np.sqrt(2 * np.pi))) *
                       np.exp(-(x - sample_mean)*(x - sample_mean) / (2 * sample_std_dev * sample_std_dev)))
# Plotting histograms and confidence intervals.
plt.hist(returns, range=(returns.min(), returns.max()), normed = True);
plt.plot(x, sample_distribution)
plt.axvline(sample_std_dev, linestyle='dashed', color='red', label='1st Confidence Interval')
plt.axvline(-sample_std_dev, linestyle='dashed', color='red')
plt.axvline(2*sample_std_dev, linestyle='dashed', color='k', label='2st Confidence Interval')
plt.axvline(-2*sample_std_dev, linestyle='dashed', color='k')
plt.axvline(3*sample_std_dev, linestyle='dashed', color='green', label='3st Confidence Interval')
plt.axvline(-3*sample_std_dev, linestyle='dashed', color='green')
plt.legend();
# Run the JB test for normality.
cutoff = 0.01
_, p_value, skewness, kurtosis = stattools.jarque_bera(returns)
print "The JB test p-value is: ", p_value
print "We reject the hypothesis that the data are normally distributed ", p_value < cutoff
print "The skewness of the returns is: ", skewness
print "The kurtosis of the returns is: ", kurtosis
Explanation: Exercise 4: Financial Applications:
Fit the returns of SPY from 2016-01-01 to 2016-05-01 to a normal distribution.
- Fit the returns to a normal distribution by clacluating the values of $\mu$ and $\sigma$
- Plot the returns and the distribution, along with 3 confidence intervals.
- Use the Jarque-Bera test to check for normality.
End of explanation |
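Under the hood, the Jarque-Bera statistic is just $JB = \frac{n}{6}\left(S^2 + \frac{(K-3)^2}{4}\right)$ for sample skewness $S$ and (non-excess) kurtosis $K$. The sketch below recomputes it by hand on seeded synthetic data and compares it to `scipy.stats.jarque_bera` (used here instead of the statsmodels version so the example needs no Quantopian environment):

```python
import numpy as np
from scipy import stats

np.random.seed(0)
x = np.random.normal(0, 1, 5000)

# Sample skewness and kurtosis from raw central moments
d = x - x.mean()
m2 = np.mean(d**2)
S = np.mean(d**3) / m2**1.5
K = np.mean(d**4) / m2**2
jb_manual = len(x) / 6.0 * (S**2 + (K - 3)**2 / 4.0)

jb_scipy, p_value = stats.jarque_bera(x)
print(jb_manual, jb_scipy)  # the two statistics match
```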
7,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Chebyshev derivative method
Exercise
Define a python function call "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$, call this function and display the Chebyshev derivative matrix.
Step2: 2. Initialization of setup
Step3: 3. Source Initialization
Step4: 4. Time Extrapolation
Now we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation, they represent the Chebyshev collocation points. Observe how the wavefield near the domain center is less dense than towards the boundaries. | Python Code:
# This is a configuration step for the exercise. Please run it before calculating the derivative!
import numpy as np
import matplotlib.pyplot as plt
from ricker import ricker
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">The Chebyshev Pseudospectral Method - Elastic Waves in 1D</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation using the Chebyshev Pseudospectral Method. We depart from the equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
and use a standard 3-point finite-difference operator to approximate the time derivatives. Then, the displacement field is extrapolated as
\begin{equation}
\rho_i\frac{u_{i}^{j+1} - 2u_{i}^{j} + u_{i}^{j-1}}{dt^2}= \partial_x (\mu(x) \partial_x u(x,t))_{i}^{j} + f_{i}^{j}
\end{equation}
An alternative way of performing space derivatives of a function defined on the Chebyshev collocation points is to define a derivative matrix $D_{ij}$
\begin{equation}
D_{ij} =
\begin{cases}
\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = 0}\\
-\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N}\\
-\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1}\\
\frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j = 0,1,...,N}
\end{cases}
\end{equation}
where $N+1$ is the number of Chebyshev collocation points $ \ x_i = \cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as
$$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$
$$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$
This differentiation matrix allows us to write the derivative of the function $f_i = f(x_i)$ (possibly depending on time) simply as
$$\partial_x u_i = D_{ij} \ u_j$$
where the right-hand side is a matrix-vector product, and the Einstein summation convention applies.
End of explanation
#################################################################
# IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE!
#################################################################
# Call the chebyshev differentiation matrix
# ---------------------------------------------------------------
#D_ij =
# ---------------------------------------------------------------
# Display Differentiation Matrix
# ---------------------------------------------------------------
Explanation: 1. Chebyshev derivative method
Exercise
Define a python function call "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$, call this function and display the Chebyshev derivative matrix.
End of explanation
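For reference, here is one minimal sketch of what `get_cheby_matrix(nx)` could look like, built directly from the $D_{ij}$ definition above (with $D_{00} = (2N^2+1)/6$ as the positive counterpart of the $i = j = N$ entry); the notebook's intended solution may differ in details:

```python
import numpy as np

def get_cheby_matrix(nx):
    # Chebyshev collocation points x_i = cos(i*pi/N), i = 0..N
    x = np.cos(np.pi * np.arange(nx + 1) / nx)
    # c_i = 2 at the endpoints, 1 otherwise
    c = np.ones(nx + 1)
    c[0] = c[nx] = 2.0

    D = np.zeros((nx + 1, nx + 1))
    for i in range(nx + 1):
        for j in range(nx + 1):
            if i != j:
                D[i, j] = (c[i] / c[j]) * (-1.0)**(i + j) / (x[i] - x[j])
    # Diagonal entries
    for i in range(1, nx):
        D[i, i] = -0.5 * x[i] / (1.0 - x[i]**2)
    D[0, 0] = (2.0 * nx**2 + 1.0) / 6.0
    D[nx, nx] = -(2.0 * nx**2 + 1.0) / 6.0
    return D

# Spot check: the matrix differentiates low-order polynomials exactly
D = get_cheby_matrix(8)
xc = np.cos(np.pi * np.arange(9) / 8)
print(np.allclose(D @ xc**2, 2 * xc))  # True
```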
# Basic parameters
# ---------------------------------------------------------------
#nt = 5000 # number of time steps
tmax = 0.0006
eps = 1.4 # stability limit
isx = 100
lw = 0.7
ft = 10
f0 = 100000 # dominant frequency
iplot = 20 # Snapshot frequency
# material parameters
rho = 2500.
c = 3000.
mu = rho*c**2
# space domain
nx = 100 # number of grid points in x
xs = np.floor(nx/2) # source location
xr = np.floor(nx*0.8)
x = np.zeros(nx+1)
# initialization of pressure fields
p = np.zeros(nx+1)
pnew = np.zeros(nx+1)
pold = np.zeros(nx+1)
d2p = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(ix * np.pi / nx)
dxmin = min(abs(np.diff(x)))
dxmax = max(abs(np.diff(x)))
dt = eps*dxmin/c # calculate time step from stability criterion
nt = int(round(tmax/dt))
Explanation: 2. Initialization of setup
End of explanation
# source time function
# ---------------------------------------------------------------
t = np.arange(1, nt+1)*dt # initialize time axis
T0 = 1./f0
tmp = ricker(dt, T0)
isrc = tmp
tmp = np.diff(tmp)
src = np.zeros(nt)
src[0:np.size(tmp)] = tmp
#spatial source function
# ---------------------------------------------------------------
sigma = 1.5*dxmax
x0 = x[int(xs)]
sg = np.exp(-1/sigma**2*(x-x0)**2)
sg = sg/max(sg)
Explanation: 3. Source Initialization
End of explanation
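Note that `ricker` above is a local helper module shipped with the notebook. If it is unavailable, a self-contained stand-in can be written from the standard Ricker (Mexican-hat) formula $(1 - 2\pi^2 f_0^2 t^2)e^{-\pi^2 f_0^2 t^2}$ — the length and scaling conventions here are assumptions and may not match the notebook's version exactly:

```python
import numpy as np

def ricker_wavelet(dt, T0):
    # Assumed stand-in for the notebook's ricker(): dominant period T0, sample interval dt
    f0 = 1.0 / T0
    nt = int(2 * T0 / dt)               # cover two dominant periods (a choice, not a rule)
    t = (np.arange(nt) - nt // 2) * dt  # time axis centered on the peak
    a = (np.pi * f0 * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

w = ricker_wavelet(dt=1e-6, T0=1.0 / 100000.0)
print(w.max())  # peak amplitude 1.0 at t = 0
```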
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
line = plt.plot(x, p, 'k.', lw=2)
plt.title('Chebyshev Method - 1D Elastic wave', size=16)
plt.xlabel(' x(m)', size=14)
plt.ylabel(' Amplitude ', size=14)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
# Differentiation matrix
D = get_cheby_matrix(nx)
for it in range(nt):
# Space derivatives
dp = np.dot(D, p.T)
dp = mu/rho * dp
dp = D @ dp
# Time extrapolation
pnew = 2*p - pold + np.transpose(dp) * dt**2
# Source injection
pnew = pnew + sg*src[it]*dt**2/rho
# Remapping
pold, p = p, pnew
p[0] = 0; p[nx] = 0 # set boundaries pressure free
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in line:
l.remove()
del l
# --------------------------------------
# Display lines
line = plt.plot(x, p, 'k.', lw=1.5)
plt.gcf().canvas.draw()
Explanation: 4. Time Extrapolation
Now we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation, they represent the Chebyshev collocation points. Observe how the wavefield near the domain center is less dense than towards the boundaries.
End of explanation |
7,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enlib write_map bug
Step1: We make a full-sky arcminute resolution geometry. I've only been able to reproduce this bug for res=1.0.
Step2: We do a pix2sky that is needed by map2alm and make sure it gives a sensible result.
Step3: It makes sense. We now save a map that has this geometry and load it back.
Step4: The shapes and wcs of the geometry we originally made and of the saved map seem to agree. So we proceed to do the same pix2sky operation on the loaded geometry.
Step5: The results are all nans. | Python Code:
%load_ext autoreload
%autoreload 2
from enlib import enmap,wcs as mwcs
import numpy as np
import sys,os
Explanation: Enlib write_map bug
End of explanation
res = 1.0
shape, wcs = enmap.fullsky_geometry(res=res*np.pi/180./60., proj="car")
shape = (3,)+shape
Explanation: We make a full-sky arcminute resolution geometry. I've only been able to reproduce this bug for res=1.0.
End of explanation
ny, nx = shape[-2:]
vy,vx = enmap.pix2sky(shape, wcs, [np.arange(ny),np.zeros(ny)])
hy,hx = enmap.pix2sky(shape, wcs, [np.zeros(nx),np.arange(nx)])
print(vy,vx,hy,hx)
Explanation: We do a pix2sky that is needed by map2alm and make sure it gives a sensible result.
End of explanation
root = os.environ['WORK']+"/"
enmap.write_map(root+"temp.fits",enmap.zeros(shape,wcs,dtype=np.uint8))
lshape,lwcs = enmap.read_map_geometry(root+"temp.fits")
print(shape,wcs)
print(lshape,lwcs)
print(mwcs.equal(wcs,lwcs))
Explanation: It makes sense. We now save a map that has this geometry and load it back.
End of explanation
ny, nx = lshape[-2:]
vy2,vx2 = enmap.pix2sky(lshape, lwcs, [np.arange(ny),np.zeros(ny)])
hy2,hx2 = enmap.pix2sky(lshape, lwcs, [np.zeros(nx),np.arange(nx)])
print(vy2,vx2,hy2,hx2)
Explanation: The shapes and wcs of the geometry we originally made and of the saved map seem to agree. So we proceed to do the same pix2sky operation on the loaded geometry.
End of explanation
print(np.all(np.isclose(vy,vy2)))
print(np.all(np.isclose(vx,vx2)))
print(np.all(np.isclose(hy,hy2)))
print(np.all(np.isclose(hx,hx2)))
Explanation: The results are all nans.
End of explanation |
7,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KBMOD Documentation
This notebook demonstrates the basics of the kbmod image processing and searching python API
Before importing, make sure to run
source setup.bash
in the root directory, and that you are using the python3 kernel.
Import everything with
Step1: Object Types
There are currently 5 classes exposed to the python interface
psf
layered_image
image_stack
stack_search
trajectory
psf
A psf kernel, for convolution and adding artificial sources to images
Initializes to a gaussian given the width in pixels
Step2: There are several methods that get information about its properties
Step3: The entire kernel can be printed out (note
Step4: layered_image
Stores the science, mask, and variance image for a single image. The "layered" means it contains all of them together.
It can be initialized 2 ways
Step5: B. Load a file
Step6: Artificial objects can easily be added into a layered_image
Step7: The image pixels can be retrieved as a 2D numpy array
Step8: The image can mask itself by providing a bitmask of flags (note
Step9: The image can be convolved with a psf kernel
Step10: The image at any point can be saved to a file
Step11: Get properties
Step12: image_stack
A collection of layered_images. Used to apply operations to a group of images.
Step13: A shortcut is provided to initialize a stack automatically from a list of files
Step14: A master mask can be generated and applied to the stack
Step15: Manually set the times the images in the stack were taken
Step16: Most features of the layered_image can be used on the whole stack
Step17: stack_search
Searches a stack of images for a given psf
Step18: To save psi and images, a directory with "psi" and "phi" folders must be specified.
Step19: Launch a search
Step20: Save the results to a files
note
Step21: Trajectories can be retrieved directly from search without writing and reading to file.
However, this is not recommended for a large number of trajectories, as it is not returned as a numpy array, but as a list of the trajectory objects described below
Step22: trajectory
A simple container with properties representing an object and its path | Python Code:
from kbmodpy import kbmod as kb
import numpy
path = "../data/demo/"
Explanation: KBMOD Documentation
This notebook demonstrates the basics of the kbmod image processing and searching python API
Before importing, make sure to run
source setup.bash
in the root directory, and that you are using the python3 kernel.
Import everything with:
End of explanation
p = kb.psf(1.0)
p.get_array()
Explanation: Object Types
There are currently 5 classes exposed to the python interface
psf
layered_image
image_stack
stack_search
trajectory
psf
A psf kernel, for convolution and adding artificial sources to images
Initializes to a gaussian given the width in pixels
End of explanation
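Although kbmod's `psf` is implemented natively, conceptually it is just a small normalized Gaussian kernel. A hedged numpy sketch of the same idea (the radius cutoff here is an arbitrary choice, not kbmod's):

```python
import numpy as np

def gaussian_psf(sigma, radius=3):
    # Square grid of pixel offsets from the kernel center
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()  # normalize so total flux is 1

k = gaussian_psf(1.0)
print(k.shape, k.sum())  # (7, 7), sum essentially 1.0 -- compare p.get_sum()
```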
p.get_dim() # dimension of kernel width and height
p.get_radius() # distance from center of kernel to edge
p.get_size() # total number of pixels in the kernel
p.get_sum() # total sum of all pixels in the kernel, should be close to 1.0
Explanation: There are several methods that get information about its properties
End of explanation
p.print_psf()
Explanation: The entire kernel can be printed out (note: prints to the console, not the notebook)
End of explanation
im = kb.layered_image("image2", 100, 100, 5.0, 25.0, 0.0)
# name, width, height, background_noise_sigma, variance, capture_time
Explanation: layered_image
Stores the science, mask, and variance image for a single image. The "layered" means it contains all of them together.
It can be initialized 2 ways:
A. Generate a new image from scratch
End of explanation
im = kb.layered_image(path+"CORR40535777.fits")
Explanation: B. Load a file
End of explanation
im.add_object(20.0, 35.0, 2500.0, p)
# x, y, flux, psf
Explanation: Artificial objects can easily be added into a layered_image
End of explanation
pixels = im.science()
pixels
Explanation: The image pixels can be retrieved as a 2D numpy array
End of explanation
flags = ~0
flag_exceptions = [32,39]
# mask all of pixels with flags except those with specifed combiniations
im.apply_mask_flags( flags, flag_exceptions )
Explanation: The image can mask itself by providing a bitmask of flags (note: masked pixels are set to -9999.9 so they can be distinguished later from 0.0 pixles)
End of explanation
im.convolve(p)
# note: This function is called internally by stack_search and doesn't need to be
# used directly. It is only exposed because it happens to be a fast
# implementation of a generally useful function
Explanation: The image can be convolved with a psf kernel
End of explanation
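The same operation can be prototyped outside kbmod with `scipy.signal.convolve2d`; the sketch below shows that convolving an image containing a single bright pixel with a (made-up, normalized) kernel simply stamps the kernel into the image:

```python
import numpy as np
from scipy.signal import convolve2d

kernel = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])  # made-up normalized 3x3 psf

image = np.zeros((9, 9))
image[4, 4] = 100.0  # one bright source

smoothed = convolve2d(image, kernel, mode='same')
print(smoothed[3:6, 3:6])  # the kernel scaled by 100, centered on the source
```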
im.save_layers(path+"/out/") # file will use original name
Explanation: The image at any point can be saved to a file
End of explanation
im.get_width()
im.get_height()
im.get_time()
im.get_ppi() # pixels per image, width*height
Explanation: Get properties
End of explanation
count = 10
imlist = [ kb.layered_image("img"+str(n), 50, 50, 10.0, 5.0, n/count) for n in range(count) ]
stack = kb.image_stack( imlist )
# this creates a stack with 10 50x50 images, and times ranging from 0 to 1
Explanation: image_stack
A collection of layered_images. Used to apply operations to a group of images.
End of explanation
import os
files = os.listdir(path)
files = [path+f for f in files if '.fits' in f]
files.sort()
files
stack = kb.image_stack(files)
Explanation: A shortcut is provided to initialize a stack automatically from a list of files
End of explanation
stack.apply_master_mask( int('100111', 2), 2 ) # flags, threshold
stack.save_master_mask("mask.fits")
Explanation: A master mask can be generated and applied to the stack
End of explanation
stack.set_times( [0,2,3,4.5,5,6,7,10,11,14] )
Explanation: Manually set the times the images in the stack were taken
End of explanation
stack.apply_mask_flags(flags, flag_exceptions)
stack.convolve(p)
stack.get_width()
stack.get_height()
stack.get_ppi()
stack.get_images() # retrieves list of layered_images back from the stack
stack.get_times()
Explanation: Most features of the layered_image can be used on the whole stack
End of explanation
search = kb.stack_search( stack, p )
Explanation: stack_search
Searches a stack of images for a given psf
End of explanation
search.save_psi_phi("/home/kbmod-usr/cuda-workspace/kbmod/search/output")
Explanation: To save psi and images, a directory with "psi" and "phi" folders must be specified.
End of explanation
search.gpu(10, 10, 0.0, 1.0, 20.0, 50.0)
# angle_steps, velocity_steps, min_angle, max_angle, min_velocity, max_velocity
Explanation: Launch a search
End of explanation
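Conceptually, those six arguments define a grid of candidate trajectories — each (angle, velocity) pair becomes a per-pixel velocity vector to test. This is only a sketch of the idea, not kbmod's actual internals:

```python
import numpy as np

angle_steps, velocity_steps = 10, 10
min_angle, max_angle = 0.0, 1.0          # radians
min_velocity, max_velocity = 20.0, 50.0  # pixels per unit time

angles = np.linspace(min_angle, max_angle, angle_steps)
velocities = np.linspace(min_velocity, max_velocity, velocity_steps)

# One (x_v, y_v) velocity vector per grid point
trajectories = [(v * np.cos(a), v * np.sin(a)) for a in angles for v in velocities]
print(len(trajectories))  # 100 candidates
```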
search.save_results(path+"results.txt", 0.05)
# path, fraction of total results to save in file
Explanation: Save the results to a files
note: format is {x, y, xv, yv, likelihood, flux}
End of explanation
top_results = search.get_results(0, 100)
# start, count
Explanation: Trajectories can be retrieved directly from search without writing and reading to file.
However, this is not recommended for a large number of trajectories, as it is not returned as a numpy array, but as a list of the trajectory objects described below
End of explanation
best = top_results[0]
# these numbers are wild because mask flags and search parameters above were chosen randomly
best.flux
best.lh
best.x
best.y
best.x_v
best.y_v
Explanation: trajectory
A simple container with properties representing an object and its path
End of explanation |
7,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The purpose of this post is to demonstrate a sample template for a Bokeh server. Although the example demonstrated in this notebook is rather simple (linear regression), keeping your Bokeh server organized in this manner may save you time when debugging and making adjustments in the future. In the next blog post, deploying this example using Heroku will be demonstrated.
This notebook will demonstrate linear regression on random data by creating a sample dataset using the scikit-learn function "make_regression", which will actually generate the data behind-the-scenes. Bokeh controls will be used to supply the required parameters to the function.
The general idea is that this example Bokeh server will follow a basic Model-View-Controller template. However, as will be seen, the View and Controller have both been split into multiple pieces. This is unfortunately necessary due to the structure of the code; sometimes later objects need to reference ones that were defined earlier.
The first section of the Bokeh server code is the title and imports, which should be fairly straightforward. I like to also include a "last updated" line, which gives an indication of how old the code is. However, it may sometimes be difficult to remember to update this line.
Step1: As one of the first things that will be seen when the code is opened, a parameter defaults section allows the controls to be easily changed, without digging too deeply into the code. In the code below, the defaults and ranges for the number of samples, the bias, and the amount of noise can be adjusted easily. These will be inputs to the "make_regression" function.(This is the first part of the "View" in a Model-View-Controller).
Step2: The next section defines the Bokeh controls that will be present in the GUI. This includes
Step3: Next, the base-level functions which will interact with the data are defined. Typically, these type of functions do not need to be adjusted on a regular basis, but if a bug is found or the algorithm needs to be changed, it's important to keep all the functions that interact with data in the same place. This code in particular will create a random data set, and fit it using a linear regression model. (This is the first part of the "Controller" in a Model-View-Controller).
Step4: The following section handles the "Model" (e.g. data) sources in the Bokeh server. First, I like to generate some example data to initialize the plots. This initialization serves two purposes
Step5: The "Callbacks" section is the second piece of the "Controller" part of the Model-View-Controller. It defines what happens when the user controls are interacted with. In this simple case, there are two buttons in the UI which may be clicked. If the "Simulate" button is clicked, data will be generated based on the user controls, and it will be fitted to a line. Then, the data sources and plot ranges will be updated accordingly. If the "Clear" button is clicked, the data in the plot will be removed by setting the source data arrays to []. Finally, the last two lines actually define which function to execute when the button is clicked (e.g. plot_sim.on_click(update_plot)).
Step6: The final section of the Bokeh server is a simple page layout. Calling the "add_root" method will define the controls that should show up on the main page. Note that this code must typically come at the end, because it requires some of the objects to be defined above. | Python Code:
### Bokeh Linear Regression Example
### Written By: Eric Strong
### Last Updated: 2017/05/10
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression as LR
from bokeh.io import curdoc
from bokeh.plotting import figure
from bokeh.models.ranges import Range1d
from bokeh.models import ColumnDataSource
from bokeh.layouts import column, row, widgetbox
from bokeh.models.widgets import Slider, Button, Div
Explanation: The purpose of this post is to demonstrate a sample template for a Bokeh server. Although the example demonstrated in this notebook is rather simple (linear regression), keeping your Bokeh server organized in this manner may save you time when debugging and making adjustments in the future. In the next blog post, deploying this example using Heroku will be demonstrated.
This notebook will demonstrate linear regression on random data by creating a sample dataset using the scikit-learn function "make_regression", which will actually generate the data behind-the-scenes. Bokeh controls will be used to supply the required parameters to the function.
The general idea is that this example Bokeh server will follow a basic Model-View-Controller template. However, as will be seen, the View and Controller have both been split into multiple pieces. This is unfortunately necessary due to the structure of the code; sometimes later objects need to reference ones that were defined earlier.
The first section of the Bokeh server code is the title and imports, which should be fairly straightforward. I like to also include a "last updated" line, which gives an indication of how old the code is. However, it may sometimes be difficult to remember to update this line.
End of explanation
###-----------------------------------------------------------------------###
###------------------------PARAMETER DEFAULTS-----------------------------###
### This section contains defaults and ranges for the Bokeh controls and ###
### may be modified without concern, if required. ("View" Part 1) ###
###-----------------------------------------------------------------------###
# The format for this section is: default, range[Lower, Upper, Step Size]
d_nsamp, r_nsamp = 100, [50, 500, 50] # Number of samples
d_bias, r_bias = 0, [-50, 50, 5] # Bias
d_noise, r_noise = 3, [0, 20, 1] # Amount of noise
Explanation: As one of the first things that will be seen when the code is opened, a parameter defaults section allows the controls to be easily changed, without digging too deeply into the code. In the code below, the defaults and ranges for the number of samples, the bias, and the amount of noise can be adjusted easily. These will be inputs to the "make_regression" function.(This is the first part of the "View" in a Model-View-Controller).
End of explanation
###-----------------------------------------------------------------------###
###----------------------GRAPHICAL USER INTERFACE-------------------------###
### This code defines the Bokeh controls that are used for the user ###
### interface. All the defaults for the controls are above. This code ###
### should not need to be modified. ("View" Part 2) ###
###-----------------------------------------------------------------------###
# Plot- Regression Data
plot_data = figure(plot_height=300, plot_width=600,
title="Regression Data", toolbar_location="above",
x_axis_label='X', y_axis_label='Y',
tools="pan,save,box_zoom,wheel_zoom")
# Plot Control Buttons
plot_sim = Button(label="Simulate")
plot_clear = Button(label="Clear")
plot_ctls = column(plot_sim, plot_clear)
# Main Control Buttons
ctl_title = Div(text="<h3>Parameters</h3>")
ctl_nsamp = Slider(title="Number of Samples", value=d_nsamp,
start=r_nsamp[0], end=r_nsamp[1], step=r_nsamp[2])
ctl_bias = Slider(title="Bias", value=d_bias,
start=r_bias[0], end=r_bias[1], step=r_bias[2])
ctl_noise = Slider(title="Noise", value=d_noise,
start=r_noise[0], end=r_noise[1], step=r_noise[2])
ctl_inputs = widgetbox(ctl_title, ctl_nsamp, ctl_bias, ctl_noise)
Explanation: The next section defines the Bokeh controls that will be present in the GUI. This includes: a plot of the data, the "simulate" and "clear" buttons, and the sliders for the parameters (number of samples, bias, and noise). Note that the defaults and ranges for the controls were defined in the previous section, so this section should not need modification unless new controls are being added. (This is the second part of the "View" in a Model-View-Controller).
End of explanation
###-----------------------------------------------------------------------###
###-----------------------BASE-LEVEL FUNCTIONS----------------------------###
### This section contains the low-level calculation for the inspection ###
### intervals. ("Controller" Part 1) ###
###-----------------------------------------------------------------------###
def create_data(n_samp, bias, noise):
# Creates a set of random data based on user parameters
    # Keyword arguments: in recent scikit-learn versions, the parameters
    # after n_samples/n_features are keyword-only.
    data = make_regression(n_samples=n_samp, n_features=1, n_informative=1,
                           n_targets=1, bias=bias, noise=noise)
return data
def fit_data(data):
# Uses linear regression to find the coefficients and intercept
x = data[0]
y = data[1]
l_model = LR()
l_model.fit(x, y)
return l_model.coef_, l_model.intercept_
Explanation: Next, the base-level functions which will interact with the data are defined. Typically, these types of functions do not need to be adjusted on a regular basis, but if a bug is found or the algorithm needs to be changed, it's important to keep all the functions that interact with data in the same place. This code in particular will create a random data set and fit it using a linear regression model. (This is the first part of the "Controller" in a Model-View-Controller).
End of explanation
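The least-squares fit that `fit_data` delegates to scikit-learn has a simple closed form in the single-feature case. A minimal pure-Python sketch of that same fit (not part of the app; the function name is illustrative):

```python
def ols_fit(xs, ys):
    # Closed-form simple linear regression: the slope and intercept
    # that minimize the sum of squared residuals.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

# On noise-free data y = 2x + 1 the fit recovers the coefficients exactly.
slope, intercept = ols_fit([0, 1, 2, 3], [1, 3, 5, 7])
```

With noisy data (as produced by `create_data`), the recovered slope and intercept approximate the generating parameters instead of matching them exactly.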
###-----------------------------------------------------------------------###
###------------------DATA SOURCES AND INITIALIZATION----------------------###
### This section defines the data sources which will be used in the Bokeh ###
### plots. To update a Bokeh plot in the server, each of the sources will ###
### be modified in the CALLBACKS section. ("Model") ###
###-----------------------------------------------------------------------###
# Generating some initial data for the plots, based on the parameter defaults
d_data = create_data(d_nsamp, d_bias, d_noise)
d_coef, d_int = fit_data(d_data)
# Find the minimum and maximum values of the data, for ranges, etc.
# d_data[0] is the "X" data, and d_data[1] is the "Y" data
# Remember that the "X" data is of size [n,1] while the "Y" data is [n]
# That's the reason for the extra [0] in the "X" data below
d_x_min = min(d_data[0])[0]
d_x_max = max(d_data[0])[0]
d_y_min = min(d_data[1])
d_y_max = max(d_data[1])
# Find the default regression line
d_x = [d_x_min, d_x_max]
d_y = [d_x_min*d_coef+d_int, d_x_max*d_coef+d_int]
# Defining the Bokeh data sources
source_data = ColumnDataSource(data=dict(x=d_data[0], y=d_data[1]))
source_line = ColumnDataSource(data=dict(x=d_x, y=d_y))
# Associating the sources with plots
plot_data.scatter('x', 'y', source=source_data)
plot_data.line('x', 'y', source=source_line, line_color='orange')
# Defining the plot ranges
xrange_data = Range1d(bounds=[None, None], start=d_x_min, end=d_x_max)
yrange_data = Range1d(bounds=[None, None], start=d_y_min, end=d_y_max)
# Associating the ranges with plots
plot_data.x_range = xrange_data
plot_data.y_range = yrange_data
Explanation: The following section handles the "Model" (i.e., data) sources in the Bokeh server. First, I like to generate some example data to initialize the plots. This initialization serves two purposes: it allows preliminary validation of the results which can be immediately tested, and it shows the user exactly what the visualization will look like when the controls are interacted with.
If you're not familiar with data sources in Bokeh, they are required to update the plot in real-time. If a data source is not defined, the plot can only contain the data that it was initialized with. The four data sources below (source_data, source_line, xrange_data, and yrange_data) will be updated in the next section, "Callbacks".
End of explanation
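The update mechanism behind a data source can be pictured as a tiny observer pattern. This is only an illustration of the idea — the `DataSource` class below is hypothetical, not Bokeh's actual implementation:

```python
class DataSource:
    # Hypothetical stand-in for a Bokeh ColumnDataSource: assigning to
    # .data notifies every subscribed renderer so the plot can redraw.
    def __init__(self, data):
        self._data = data
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, new_data):
        self._data = new_data
        for callback in self._subscribers:
            callback(new_data)

seen = []
source = DataSource({'x': [], 'y': []})
source.subscribe(seen.append)              # a "renderer" listening for changes
source.data = {'x': [1, 2], 'y': [3, 4]}   # triggers a redraw notification
```

This is why the callbacks below assign whole new dictionaries to `source_data.data` rather than mutating the old one: the assignment is what triggers the update.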
###-----------------------------------------------------------------------###
###----------------------------CALLBACKS----------------------------------###
### This section defines the behavior of the GUI as the user interacts ###
### with the controls. ("Controller" Part 2) ###
###-----------------------------------------------------------------------###
# Behavior when the "Simulate" button is clicked
def update_plot():
# Pull the parameters from the controls
num_samples = ctl_nsamp.value
bias = ctl_bias.value
noise = ctl_noise.value
# Generate and fit the data
data = create_data(num_samples, bias, noise)
coef, inter = fit_data(data)
# Find the min and maxes of the data
x_min = min(data[0])[0]
x_max = max(data[0])[0]
y_min = min(data[1])
y_max = max(data[1])
# Find the regression line
x = [x_min, x_max]
y = [x_min*coef+inter, x_max*coef+inter]
# Update the data sources
source_data.data = dict(x=data[0], y=data[1])
source_line.data = dict(x=x, y=y)
# Update the data ranges
xrange_data.start = x_min
xrange_data.end = x_max
yrange_data.start = y_min
yrange_data.end = y_max
# Behavior when the "Clear" button is clicked
def clear_plot():
source_data.data = dict(x=[], y=[])
# Button callbacks, using the above functions
plot_sim.on_click(update_plot)
plot_clear.on_click(clear_plot)
Explanation: The "Callbacks" section is the second piece of the "Controller" part of the Model-View-Controller. It defines what happens when the user controls are interacted with. In this simple case, there are two buttons in the UI which may be clicked. If the "Simulate" button is clicked, data will be generated based on the user controls, and it will be fitted to a line. Then, the data sources and plot ranges will be updated accordingly. If the "Clear" button is clicked, the data in the plot will be removed by setting the source data arrays to []. Finally, the last two lines actually define which function to execute when the button is clicked (e.g. plot_sim.on_click(update_plot)).
End of explanation
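The `on_click` wiring follows the same pattern as a plain callback registry. A hypothetical sketch of what happens when "Clear" is pressed (the `Button` class here is illustrative, not Bokeh's):

```python
class Button:
    # Hypothetical minimal button: on_click registers a handler,
    # click() simulates a user press by invoking all handlers.
    def __init__(self):
        self._handlers = []

    def on_click(self, handler):
        self._handlers.append(handler)

    def click(self):
        for handler in self._handlers:
            handler()

state = {'points': [(1, 2), (3, 4)]}

def clear_plot():
    state['points'] = []

clear = Button()
clear.on_click(clear_plot)
clear.click()  # the registered handler empties the plot state
```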
###-----------------------------------------------------------------------###
###----------------------------PAGE LAYOUT--------------------------------###
### This section defines the basic layout of the GUI. ("View" Part 3) ###
###-----------------------------------------------------------------------###
col_inputs = column(plot_ctls, ctl_inputs)
col_plots = column(plot_data)
row_page = row(col_inputs, col_plots, width=1200)
curdoc().add_root(row_page)
curdoc().title = "Linear Regression Simulator"
Explanation: The final section of the Bokeh server is a simple page layout. Calling the "add_root" method will define the controls that should show up on the main page. Note that this code must typically come at the end, because it requires some of the objects to be defined above.
End of explanation |
7,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification using Random Forest
Contest entry by
Step1: A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the "Facies" class.
Feature engineering
As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below.
The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment.
Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies.
A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are
Step2: Building a prediction model from these variables
A Random Forest model is built here to test the effect of these new variables on the prediction power. Algorithm parameters have been tuned so as to take into account the non-stationarity of the training and testing sets, using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leaves and nodes has been increased to the maximum possible without significantly increasing the variance, so as to reduce the bias of the prediction.
Box plots for a series of scores obtained through cross-validation are presented below.
Create predictor and target arrays
Step3: Estimation of validation scores from this tuning
Step4: Evaluating feature importances
The individual contribution to the classification for each feature (i.e., feature importances) can be obtained from a Random Forest classifier. This gives a good idea of the classification power of individual features and helps in understanding which type of feature engineering is the most promising.
Caution should be taken when interpreting feature importances, as highly correlated variables will tend to dilute their classification power between themselves and will rank lower than uncorrelated variables.
Step5: Plot the feature importances of the forest
Step6: Features derived from raw geological variables tend to have the highest classification power. Rolling min, max and mean tend to have better classification power than raw data. Wavelet approximation coefficients tend to have similar or lower classification power than raw data. Features expressing the local texture of the data (entropy, gradient, standard deviation and wavelet detail coefficients) have a low classification power but still participate in the prediction.
Confusion matrix
The confusion matrix from the validation test is presented below.
Step7: Applying the classification model to test data
Step8: Exporting results | Python Code:
###### Importing all used packages
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
import seaborn as sns
from pandas import set_option
# set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
###### Import packages needed for the make_vars functions
import Feature_Engineering as FE
##### import stuff from scikit learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict
from sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score
filename = '../facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head()
training_data.describe()
Explanation: Facies classification using Random Forest
Contest entry by: geoLEARN
Martin Blouin, Lorenzo Perozzi, Antoine Caté <br>
in collaboration with Erwan Gloaguen
Original contest notebook by Brendon Hall, Enthought
In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a Random Forest model to classify facies types.
Exploring the dataset
First, we import and examine the dataset used to train the classifier.
End of explanation
##### cD From wavelet db1
dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cA From wavelet db1
dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cD From wavelet db3
dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### cA From wavelet db3
dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### From entropy
entropy_df = FE.make_entropy_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
l_foots=[2, 3, 4, 5, 7, 10])
###### From gradient
gradient_df = FE.make_gradient_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
dx_list=[2, 3, 4, 5, 6, 10, 20])
##### From rolling average
moving_av_df = FE.make_moving_av_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[1, 2, 5, 10, 20])
##### From rolling standard deviation
moving_std_df = FE.make_moving_std_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
##### From rolling max
moving_max_df = FE.make_moving_max_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3, 4, 5, 7, 10, 15, 20])
##### From rolling min
moving_min_df = FE.make_moving_min_vars(wells_df=training_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
###### From rolling NM/M ratio
rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=training_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200])
###### From distance to NM and M, up and down
dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=training_data)
dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=training_data)
dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=training_data)
dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=training_data)
list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,
entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,
rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]
combined_df = training_data
for var_df in list_df_var:
temp_df = var_df
combined_df = pd.concat([combined_df,temp_df],axis=1)
combined_df.replace(to_replace=np.nan, value='-1', inplace=True)
print (combined_df.shape)
combined_df.head(5)
Explanation: A complete description of the dataset is given in the Original contest notebook by Brendon Hall, Enthought. A total of four measured rock properties and two interpreted geological properties are given as raw predictor variables for the prediction of the "Facies" class.
Feature engineering
As stated in our previous submission, we believe that feature engineering has a high potential for increasing classification success. A strategy for building new variables is explained below.
The dataset is distributed along a series of drillholes intersecting a stratigraphic sequence. Sedimentary facies tend to be deposited in sequences that reflect the evolution of the paleo-environment (variations in water depth, water temperature, biological activity, current strength, detrital input, ...). Each facies represents a specific depositional environment and is in contact with facies that represent a progressive transition to another environment.
Thus, there is a relationship between neighbouring samples, and the distribution of the data along drillholes can be as important as data values for predicting facies.
A series of new variables (features) are calculated and tested below to help represent the relationship of neighbouring samples and the overall texture of the data along drillholes. These variables are:
detail and approximation coefficients at various levels of two wavelet transforms (using two types of Daubechies wavelets);
measures of the local entropy with variable observation windows;
measures of the local gradient with variable observation windows;
rolling statistical calculations (i.e., mean, standard deviation, min and max) with variable observation windows;
ratios between marine and non-marine lithofacies with different observation windows;
distances from the nearest marine or non-marine occurrence uphole and downhole.
Functions used to build these variables are located in the Feature Engineering python script.
All the data exploration work related to the conception and study of these variables is not presented here.
End of explanation
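As an illustration of the rolling-window features, here is a minimal pure-Python rolling mean. The real implementations live in the `Feature_Engineering` script; this sketch only shows the idea, and the edge-handling choice (shrinking the window at the start of the log) is an assumption:

```python
def rolling_mean(values, window):
    # Mean over a trailing window; positions with an incomplete
    # window use the mean of whatever is available so far.
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

log = [10.0, 12.0, 11.0, 15.0, 14.0]
smoothed = rolling_mean(log, 3)  # smoothed[2] == (10 + 12 + 11) / 3
```

The rolling min, max and standard deviation variants follow the same windowing pattern with a different statistic applied to each chunk.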
X = combined_df.iloc[:, 4:]
y = combined_df['Facies']
groups = combined_df['Well Name']
Explanation: Building a prediction model from these variables
A Random Forest model is built here to test the effect of these new variables on the prediction power. Algorithm parameters have been tuned so as to take into account the non-stationarity of the training and testing sets, using the LeaveOneGroupOut cross-validation strategy. The size of individual tree leaves and nodes has been increased to the maximum possible without significantly increasing the variance, so as to reduce the bias of the prediction.
Box plots for a series of scores obtained through cross-validation are presented below.
Create predictor and target arrays
End of explanation
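LeaveOneGroupOut holds out all samples from one well at a time, so a model is never validated on a well it has seen during training. A minimal pure-Python sketch of those splits (illustrative only — the notebook uses scikit-learn's implementation):

```python
def leave_one_group_out(groups):
    # Yield (train_indices, test_indices) pairs, holding out one
    # whole group (e.g. one well) per split.
    for held_out in sorted(set(groups)):
        test = [i for i, g in enumerate(groups) if g == held_out]
        train = [i for i, g in enumerate(groups) if g != held_out]
        yield train, test

wells = ['A', 'A', 'B', 'B', 'C']
splits = list(leave_one_group_out(wells))
# 3 splits; the first holds out both samples from well 'A'.
```

`LeavePGroupsOut(n_groups=2)`, used for scoring below, generalizes this by holding out every pair of wells instead of a single one.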
scoring_param = ['accuracy', 'recall_weighted', 'precision_weighted','f1_weighted']
scores = []
Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1)
lpgo = LeavePGroupsOut(n_groups=2)
for scoring in scoring_param:
cv=lpgo.split(X, y, groups)
validated = cross_val_score(Cl, X, y, scoring=scoring, cv=cv, n_jobs=-1)
scores.append(validated)
scores = np.array(scores)
scores = np.swapaxes(scores, 0, 1)
scores = pd.DataFrame(data=scores, columns=scoring_param)
sns.set_style('white')
fig,ax = plt.subplots(figsize=(8,6))
sns.boxplot(data=scores)
plt.xlabel('scoring parameters')
plt.ylabel('score')
plt.title('Classification scores for tuned parameters');
Explanation: Estimation of validation scores from this tuning
End of explanation
####### Evaluation of feature importances
Cl = RandomForestClassifier(n_estimators=75, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42,oob_score=True, n_jobs=-1)
Cl.fit(X, y)
print('OOB estimate of accuracy for facies classification using all features: %s' % str(Cl.oob_score_))
importances = Cl.feature_importances_
std = np.std([tree.feature_importances_ for tree in Cl.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
Vars = list(X.columns.values)
for f in range(X.shape[1]):
print("%d. feature %d %s (%f)" % (f + 1, indices[f], Vars[indices[f]], importances[indices[f]]))
Explanation: Evaluating feature importances
The individual contribution to the classification for each feature (i.e., feature importances) can be obtained from a Random Forest classifier. This gives a good idea of the classification power of individual features and helps in understanding which type of feature engineering is the most promising.
Caution should be taken when interpreting feature importances, as highly correlated variables will tend to dilute their classification power between themselves and will rank lower than uncorrelated variables.
End of explanation
sns.set_style('white')
fig,ax = plt.subplots(figsize=(15,5))
ax.bar(range(X.shape[1]), importances[indices],color="r", align="center")
plt.ylabel("Feature importance")
plt.xlabel('Ranked features')
plt.xticks([], indices)
plt.xlim([-1, X.shape[1]]);
Explanation: Plot the feature importances of the forest
End of explanation
######## Confusion matrix from this tuning
cv=LeaveOneGroupOut().split(X, y, groups)
y_pred = cross_val_predict(Cl, X, y, cv=cv, n_jobs=-1)
conf_mat = confusion_matrix(y, y_pred)
list_facies = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']
conf_mat = pd.DataFrame(conf_mat, columns=list_facies, index=list_facies)
conf_mat.head(10)
Explanation: Features derived from raw geological variables tend to have the highest classification power. Rolling min, max and mean tend to have better classification power than raw data. Wavelet approximation coefficients tend to have similar or lower classification power than raw data. Features expressing the local texture of the data (entropy, gradient, standard deviation and wavelet detail coefficients) have a low classification power but still participate in the prediction.
Confusion matrix
The confusion matrix from the validation test is presented below.
End of explanation
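For reference, the confusion-matrix computation itself is straightforward — a minimal pure-Python sketch, equivalent in spirit to `sklearn.metrics.confusion_matrix` (function name illustrative):

```python
def confusion_matrix_counts(y_true, y_pred, labels):
    # Rows = true class, columns = predicted class.
    index = {label: i for i, label in enumerate(labels)}
    mat = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        mat[index[t]][index[p]] += 1
    return mat

mat = confusion_matrix_counts(['SS', 'SS', 'CSiS'], ['SS', 'CSiS', 'CSiS'],
                              labels=['SS', 'CSiS'])
# mat == [[1, 1], [0, 1]]: one SS misclassified as CSiS.
```

Reading the matrix row by row shows, for each true facies, where its samples were sent by the classifier — which is exactly how the adjacent-facies confusion (e.g. WS vs MS) shows up in the table above.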
filename = '../validation_data_nofacies.csv'
test_data = pd.read_csv(filename)
test_data.head(5)
##### cD From wavelet db1
dwt_db1_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cA From wavelet db1
dwt_db1_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db1')
##### cD From wavelet db3
dwt_db3_cD_df = FE.make_dwt_vars_cD(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### cA From wavelet db3
dwt_db3_cA_df = FE.make_dwt_vars_cA(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
levels=[1, 2, 3, 4], wavelet='db3')
##### From entropy
entropy_df = FE.make_entropy_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
l_foots=[2, 3, 4, 5, 7, 10])
###### From gradient
gradient_df = FE.make_gradient_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
dx_list=[2, 3, 4, 5, 6, 10, 20])
##### From rolling average
moving_av_df = FE.make_moving_av_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[1, 2, 5, 10, 20])
##### From rolling standard deviation
moving_std_df = FE.make_moving_std_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
##### From rolling max
moving_max_df = FE.make_moving_max_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3, 4, 5, 7, 10, 15, 20])
##### From rolling min
moving_min_df = FE.make_moving_min_vars(wells_df=test_data, logs=['GR', 'ILD_log10', 'DeltaPHI', 'PE', 'PHIND'],
windows=[3 , 4, 5, 7, 10, 15, 20])
###### From rolling NM/M ratio
rolling_marine_ratio_df = FE.make_rolling_marine_ratio_vars(wells_df=test_data, windows=[5, 10, 15, 20, 30, 50, 75, 100, 200])
###### From distance to NM and M, up and down
dist_M_up_df = FE.make_distance_to_M_up_vars(wells_df=test_data)
dist_M_down_df = FE.make_distance_to_M_down_vars(wells_df=test_data)
dist_NM_up_df = FE.make_distance_to_NM_up_vars(wells_df=test_data)
dist_NM_down_df = FE.make_distance_to_NM_down_vars(wells_df=test_data)
combined_test_df = test_data
list_df_var = [dwt_db1_cD_df, dwt_db1_cA_df, dwt_db3_cD_df, dwt_db3_cA_df,
entropy_df, gradient_df, moving_av_df, moving_std_df, moving_max_df, moving_min_df,
rolling_marine_ratio_df, dist_M_up_df, dist_M_down_df, dist_NM_up_df, dist_NM_down_df]
for var_df in list_df_var:
temp_df = var_df
combined_test_df = pd.concat([combined_test_df,temp_df],axis=1)
combined_test_df.replace(to_replace=np.nan, value='-99999', inplace=True)
X_test = combined_test_df.iloc[:, 3:]
print (combined_test_df.shape)
combined_test_df.head(5)
Cl = RandomForestClassifier(n_estimators=100, max_features=0.1, min_samples_leaf=25,
min_samples_split=50, class_weight='balanced', random_state=42, n_jobs=-1)
Cl.fit(X, y)
y_test = Cl.predict(X_test)
y_test = pd.DataFrame(y_test, columns=['Predicted Facies'])
test_pred_df = pd.concat([combined_test_df[['Well Name', 'Depth']], y_test], axis=1)
test_pred_df.head()
Explanation: Applying the classification model to test data
End of explanation
test_pred_df.to_pickle('Prediction_blind_wells_RF_c.pkl')
Explanation: Exporting results
End of explanation |
7,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing the input data
Step1: Linear regression algorithm in TensorFlow
Step2: Linear regression with degree-N polynomials
Step3: Regularization
To better handle the impact that outliers have on our model (and to keep the model from producing overly complicated curves, i.e. overfitting), there is the concept of regularization, which is defined as: | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# Returns 101 evenly spaced numbers in the interval [-1, 1]
x_train = np.linspace(-1, 1, 101)
# Generate pseudo-random data by multiplying x_train by 2 and
# adding noise to each element (a same-shape array of random numbers)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
print(np.random.randn(*x_train.shape))
plt.scatter(x_train, y_train)
plt.show()
Explanation: Visualizing the input data
End of explanation
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 100
x_train = np.linspace(-1,1,101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X,w):
return tf.multiply(X,w)
w = tf.Variable(0.0, name="weights")
y_model = model(X,w)
cost = tf.square(Y-y_model)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x,y) in zip(x_train, y_train):
sess.run(train_op, feed_dict={X:x, Y:y})
w_val = sess.run(w)
sess.close()
plt.scatter(x_train, y_train)
y_learned = x_train*w_val
plt.plot(x_train, y_learned, 'r')
plt.show()
Explanation: Linear regression algorithm in TensorFlow
End of explanation
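The training loop above is per-sample gradient descent on the squared error. The same update can be written without TensorFlow, which makes the mechanics explicit — a minimal sketch (the function name is illustrative):

```python
def sgd_slope(xs, ys, learning_rate=0.01, epochs=100):
    # Fits y ~ w * x by descending the gradient of (y - w*x)^2
    # one sample at a time, mirroring the GradientDescentOptimizer loop.
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            grad = -2.0 * x * (y - w * x)   # d/dw of (y - w*x)^2
            w -= learning_rate * grad
    return w

xs = [i / 50.0 - 1.0 for i in range(101)]   # 101 points in [-1, 1]
ys = [2.0 * x for x in xs]                   # noise-free y = 2x
w = sgd_slope(xs, ys)                        # converges near 2.0
```

TensorFlow performs exactly this kind of update, except the gradient is derived automatically from the computation graph rather than written by hand.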
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
learning_rate = 0.01
training_epochs = 40
trX = np.linspace(-1, 1, 101)
num_coeffs = 6
trY_coeffs = [1, 2, 3, 4, 5, 6]
trY = 0
# Build pseudo-random polynomial data to test the algorithm
for i in range(num_coeffs):
trY += trY_coeffs[i] * np.power(trX, i)
trY += np.random.randn(*trX.shape) * 1.5
plt.scatter(trX, trY)
plt.show()
# Build the TensorFlow graph
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X, w):
terms = []
for i in range(num_coeffs):
term = tf.multiply(w[i], tf.pow(X, i))
terms.append(term)
return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = (tf.pow(Y-y_model, 2))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
# Run the algorithm in TensorFlow
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
for (x, y) in zip(trX, trY):
sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w)
print(w_val)
sess.close()
# Plot the fitted model
plt.scatter(trX, trY)
trY2 = 0
for i in range(num_coeffs):
trY2 += w_val[i] * np.power(trX, i)
plt.plot(trX, trY2, 'r')
plt.show()
Explanation: Linear regression with degree-N polynomials
End of explanation
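`model(X, w)` above builds the polynomial sum of `w[i] * X**i` as graph operations. A pure-Python equivalent makes that explicit (sketch only, function name illustrative):

```python
def poly_eval(w, x):
    # Sum of w[i] * x**i — the same polynomial model() builds in the graph.
    return sum(coef * x ** i for i, coef in enumerate(w))

# With the generating coefficients [1, 2, 3, 4, 5, 6] used above:
value = poly_eval([1, 2, 3, 4, 5, 6], 1.0)  # 1+2+3+4+5+6 = 21
```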
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def split_dataset(x_dataset, y_dataset, ratio):
arr = np.arange(x_dataset.size)
np.random.shuffle(arr)
num_train = int(ratio* x_dataset.size)
x_train = x_dataset[arr[0:num_train]]
y_train = y_dataset[arr[0:num_train]]
x_test = x_dataset[arr[num_train:x_dataset.size]]
y_test = y_dataset[arr[num_train:x_dataset.size]]
return x_train, x_test, y_train, y_test
learning_rate = 0.001
training_epochs = 1000
reg_lambda = tf.placeholder("float")  # fed at run time: a plain Python float
                                      # would be frozen into the graph, and the
                                      # lambda sweep below would have no effect
x_dataset = np.linspace(-1, 1, 100)
num_coeffs = 9
y_dataset_params = [0.] * num_coeffs
y_dataset_params[2] = 1
y_dataset = 0
for i in range(num_coeffs):
    y_dataset += y_dataset_params[i] * np.power(x_dataset, i)
y_dataset += np.random.randn(*x_dataset.shape) * 0.3
(x_train, x_test, y_train, y_test) = split_dataset(x_dataset, y_dataset, 0.7)
X = tf.placeholder("float")
Y = tf.placeholder("float")
def model(X, w):
    terms = []
    for i in range(num_coeffs):
        term = tf.multiply(w[i], tf.pow(X, i))
        terms.append(term)
    return tf.add_n(terms)
w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = tf.div(tf.add(tf.reduce_sum(tf.square(Y - y_model)),
                     tf.multiply(reg_lambda, tf.reduce_sum(tf.square(w)))),
              2 * x_train.size)
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
i, stop_iters = 0, 15
for lambda_val in np.linspace(0, 1, 100):
    i += 1
    sess.run(init)  # reset the weights before fitting with the next lambda
    for epoch in range(training_epochs):
        sess.run(train_op, feed_dict={X: x_train, Y: y_train, reg_lambda: lambda_val})
    final_cost = sess.run(cost, feed_dict={X: x_test, Y: y_test, reg_lambda: lambda_val})
    print('reg lambda', lambda_val)
    print('final cost', final_cost)
    if i > stop_iters: break
sess.close()
Explanation: Regularization
To better handle the impact that outliers have on our model (and to keep the model from producing overly complicated curves, i.e. overfitting), there is the concept of regularization, which is defined as:
$$ \mathrm{Cost}(X,Y) = \mathrm{Loss}(X,Y) + \lambda \lVert w \rVert $$
where $\lVert w \rVert$ is the norm of the weight vector (its distance from the origin; see the topic of norms elsewhere, for example the L1 or L2 norm), used as a penalty term, and lambda is a parameter that adjusts how strongly the penalty is applied. The larger lambda is, the heavier the penalty; if lambda is 0, we are back to the original model with no regularization.
To obtain an optimal value of lambda, the dataset has to be split and...
End of explanation |
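The regularized cost in the code above is the squared-error loss plus an L2 penalty on the weights, scaled by 1/(2n). A minimal pure-Python version of the same quantity (function name illustrative):

```python
def ridge_cost(y_true, y_pred, weights, reg_lambda):
    # Mirrors the TF cost above: (sum of squared residuals
    # + reg_lambda * sum of squared weights) / (2 * n).
    n = len(y_true)
    loss = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    penalty = reg_lambda * sum(wi * wi for wi in weights)
    return (loss + penalty) / (2 * n)

# With lambda = 0 the penalty vanishes and only the data loss remains.
base = ridge_cost([1.0, 2.0], [1.0, 1.0], weights=[3.0], reg_lambda=0.0)  # 0.25
reg = ridge_cost([1.0, 2.0], [1.0, 1.0], weights=[3.0], reg_lambda=1.0)   # 2.5
```

Sweeping `reg_lambda` and comparing this cost on held-out data is exactly what the loop above does to pick a good penalty strength.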
7,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Workflow:
1. Define the problem
2. Acquire the training and test sets
3. Clean and prepare the data
4. Analyze, identify patterns, and explore the data
5. Build a model, make predictions, and solve the problem
6. Visualize the problem-solving steps and the final solution
7. Submit the answers
Goals: Classifying — classify the samples, considering their correlation with the target classes; Correlating — the correlation between features and the target; Converting — convert feature values into types the model accepts; Completing — fill in missing values; Correcting — fix erroneous feature values; Creating — create new features; Charting — choose the right charts.
Now for the actual work
Step1: Acquiring the dataset
Step2: Features of the dataset
Step3: PassengerId: passenger ID number, with no real meaning
Survived: whether the passenger survived; 0 means No, 1 means Yes
Pclass: ticket class; 1 means Upper, 2 means Middle, 3 means Lower
Sex: gender
Age: age in years; ages below 1 are fractional, and estimated ages may appear in the form xx.5
SibSp: number of siblings or spouses aboard; siblings include brothers, sisters, and step-siblings
Parch: number of parents or children aboard
Ticket: ticket number
Fare: passenger fare
Cabin: cabin number
Embarked: port of embarkation; C means Cherbourg, Q means Queenstown, S means Southampton
Categorical features: Survived, Sex, Embarked; Ordinal: Pclass.
Continuous features: Age, Fare; Discrete: SibSp, Parch
Step4: Ticket is mixed alphanumeric data; Cabin is a letter followed by digits.
Name may contain spelling errors.
In the training set, Cabin > Age > Embarked have missing values
In the test set, Cabin > Age have missing values.
Distribution of the data
Step5: Analyzing the features
Step6: 1. Different Pclass categories clearly have different survival probabilities; the higher the class, the higher the survival rate
2. Sex
Step7: 1. Infants have a relatively high survival rate
2. The eighty-year-old passengers all survived
3. Most passengers are between 15 and 35 years old
4. The death rate is highest between 15 and 25
Step8: 1. Most passengers are in Pclass=3, but it has the highest death rate
2. The infants in Pclass=2 all survived
3. Most Pclass=1 passengers survived
Step9: 1. Women's survival rate is generally higher than men's
2. Within Embarked=C, women's survival rate is lower than men's, which does not mean that Embarked and Survived are directly related
3. Survival rates also differ across Embarked values | Python Code:
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
Explanation: Workflow:
1. Define the problem
2. Acquire the training and test sets
3. Clean and prepare the data
4. Analyze, identify patterns, and explore the data
5. Build a model, predict, and solve the problem
6. Visualize the problem-solving steps and the final solution
7. Submit the answer
Workflow goals: Classifying: classify the samples, considering their correlation with the target class; Correlating: the correlation between features and the target; Converting: convert feature values into types suitable for the model; Completing: fill in missing values; Correcting: fix erroneous feature values; Creating: create new features; Charting: choose the right charts.
Now let's begin the actual exercise.
End of explanation
train_df = pd.read_csv(r'E:\song_ws\data\kaggle\Titanic\train.csv')
test_df = pd.read_csv(r'E:\song_ws\data\kaggle\Titanic\test.csv')
combine = [train_df, test_df]
Explanation: Acquire the datasets
End of explanation
print(train_df.columns.values)
Explanation: Features of the datasets
End of explanation
# preview the data
train_df.head()
train_df.tail()
train_df.info()
print('_'*40)
test_df.info()
Explanation: PassengerId: the passenger's ID number, with no real meaning
Survived: whether the passenger ultimately survived; 0 means No, 1 means Yes
Pclass: ticket class; 1 means Upper, 2 means Middle, 3 means Lower
Sex: gender
Age: age in years; ages under 1 are fractional, and estimated ages may take the form xx.5
SibSp: siblings or spouses aboard; siblings include brothers, sisters, and step-siblings
Parch: parents or children aboard
Ticket: ticket number
Fare: passenger fare
Cabin: cabin number
Embarked: port of embarkation; C means Cherbourg, Q means Queenstown, S means Southampton
Categorical features: Survived, Sex, Embarked; Ordinal: Pclass.
Continuous features: Age, Fare; Discrete: SibSp, Parch
End of explanation
train_df.describe()
train_df.describe(include=['O'])
Explanation: Ticket is a mix of digits and letters; Cabin is letters followed by digits.
Name may contain spelling errors.
In the training set, Cabin, Age, and Embarked have missing values (in decreasing order of count).
In the test set, Cabin and Age have missing values.
Distribution of the data
End of explanation
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
Explanation: Analyze the features
End of explanation
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
Explanation: 1. Different Pclass categories clearly have different survival probabilities; the higher the class, the higher the survival rate.
2. Sex: the female survival rate is clearly much higher than the male rate.
3. SibSp and Parch show no obvious correlation with survival.
4. Embarked: survival rates differ slightly by port of embarkation, with port C noticeably higher.
End of explanation
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
Explanation: 1. Infants have a relatively high survival rate.
2. The 80-year-old passengers all survived.
3. Most passengers are between 15 and 35 years old.
4. The death rate is highest between 15 and 25.
End of explanation
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
Explanation: 1. Most passengers are in Pclass=3, but it has the highest death rate.
2. The infants in Pclass=2 all survived.
3. Most of the Pclass=1 passengers survived.
End of explanation
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
Explanation: 1. Women's survival rate is generally higher than men's.
2. For Embarked=C, the female survival rate is lower than the male rate, so this cannot show a direct relationship between Embarked and Survived.
3. Survival rates also differ across Embarked values.
End of explanation |
7,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
README
Notebook to test the Complex class as well as parsing code from cobrame/ecolime
From COBRAme/ECOLIme...
Flat files / ProcessData
Step1: Protein complexes - ComplexData and ComplexFormation (the reactions needed to assemble the complexes in ComplexData)
Step3: Reaction to complex information
Step4: Summary | Python Code:
import ecolime
import ecolime.flat_files
Explanation: README
Notebook to test the Complex class as well as parsing code from cobrame/ecolime
From COBRAme/ECOLIme...
Flat files / ProcessData
End of explanation
# First load the list of complexes which tells you complexes + subunit stoichiometry
# Converts the protein_complexes.txt file into a dictionary for ME model construction
complexes = ecolime.flat_files.get_complex_subunit_stoichiometry('protein_complexes.txt')
# Then load the modifications which tells you the modificiations (ie. cofactors) that are needed for a complex
# Converts protein_modification.txt
complex_modification_dict = ecolime.flat_files.get_complex_modifications('protein_modification.txt', 'protein_complexes.txt')
complexes
complexes['CPLX0-7']
complexes['CPLX0-1601']
Explanation: Protein complexes - ComplexData and ComplexFormation (the reactions needed to assemble the complexes in ComplexData)
End of explanation
from collections import defaultdict
import pandas
from os.path import dirname, join, abspath
ecoli_files_dir = join('/home/nathan/projects_unsynced/ecolime/ecolime/', 'building_data/')
from ecolime import corrections
def fixpath(filename):
return join(ecoli_files_dir, filename)
# From: ecolime.flat_files.get_reaction_to_complex, modified to just parse the file
def get_reaction_to_complex(modifications=True):
"""anything not in this dict is assumed to be an orphan"""
rxn_to_complex_dict = defaultdict(set)
# Load enzyme reaction association dataframe
df = pandas.read_csv(fixpath('enzyme_reaction_association.txt'),
delimiter='\t', names=['Reaction', 'Complexes'])
# Fix legacy naming
df = df.applymap(lambda x: x.replace('DASH', ''))
df = df.set_index('Reaction')
df = corrections.correct_enzyme_reaction_association_frame(df)
for reaction, complexes in df.itertuples():
for cplx in complexes.split(' OR '):
if modifications:
rxn_to_complex_dict[reaction].add(cplx)
else:
rxn_to_complex_dict[reaction].add(cplx.split('_mod_')[0])
return rxn_to_complex_dict
reaction_to_complex = get_reaction_to_complex()
for reaction,cplxs in reaction_to_complex.items():
for c in cplxs:
if 'NADH-DHI-CPLX' in c:
print(reaction, cplxs)
Explanation: Reaction to complex information
End of explanation
from collections import OrderedDict
biglist = []
for reaction,cplxs in reaction_to_complex.items():
print('Reaction:', reaction)
print('Reaction rule:', cplxs)
print()
for cplx in cplxs:
smalldict = OrderedDict()
smalldict['Reaction'] = reaction
# smalldict['Reaction_rule'] = ';'.join(cplxs)
if cplx not in complex_modification_dict:
subunits = {k.split('protein_')[1]:v for k,v in complexes[cplx].items()}
print('\tComplex ID:', cplx)
print('\tComplex subunits:', subunits)
smalldict['Complex_ID'] = cplx
smalldict['Complex_ID_mod'] = None
smalldict['Complex_subunits'] = [(k, v) for k,v in subunits.items()]
smalldict['Complex_modifications'] = None
else:
subunits = {k.split('protein_')[1]:v for k,v in complexes[complex_modification_dict[cplx]['core_enzyme']].items()}
mods = complex_modification_dict[cplx]['modifications']
print('\tComplex ID (modification):', cplx)
print('\tComplex ID (original):', complex_modification_dict[cplx]['core_enzyme'])
print('\tComplex subunits:', subunits)
print('\tComplex modification:', mods)
smalldict['Complex_ID'] = complex_modification_dict[cplx]['core_enzyme']
smalldict['Complex_ID_mod'] = cplx
smalldict['Complex_subunits'] = ((k, v) for k,v in subunits.items())
smalldict['Complex_modifications'] = ((k, v) for k,v in mods.items())
print()
biglist.append(smalldict)
import pandas as pd
pd.DataFrame(biglist)
Explanation: Summary
End of explanation |
7,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
read from an Excel file
documentation
Step1: write to a comma separated value (.csv) file
documentation | Python Code:
file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Begin/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
Explanation: read from an Excel file
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
you don't need to have MS-Excel on your computer
End of explanation
file_name_string_csv = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Final/EmployeesWithGrades.csv'
employees_df.to_csv(file_name_string_csv )
Explanation: write to a comma separated value (.csv) file
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html
End of explanation |
7,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip gradients above some threshold: if a gradient is larger than the threshold, we set it to the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
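A sketch of assembling that checkpoint name (the variable values are hypothetical):

```python
iteration = 200
lstm_size = 512
# Matches the i{iteration number}_l{# hidden layer units}.ckpt pattern
checkpoint_name = "checkpoints/i{}_l{}.ckpt".format(iteration, lstm_size)
```

The name would then be passed to the TensorFlow saver when writing the checkpoint.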
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
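A minimal sketch of that top-N filtering (the helper name and exact behavior are assumptions; preds is taken to be a 1-D probability array from the softmax):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    # Zero out all but the top_n most likely characters,
    # renormalize, then sample one character index.
    p = np.squeeze(preds).astype(np.float64)
    p[np.argsort(p)[:-top_n]] = 0.0
    p = p / p.sum()
    return int(np.random.choice(vocab_size, 1, p=p)[0])

probs = np.array([0.05, 0.5, 0.3, 0.1, 0.05])
c = pick_top_n(probs, 5, top_n=2)
# Only indices 1 and 2 survive the filtering, so c is always one of them.
```

Sampling from the truncated distribution keeps some randomness while avoiding very unlikely characters.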
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size = n_seqs * n_steps
n_batches = len(arr)//batch_size
# Keep only enough characters to make full batches
arr = arr[:n_batches * batch_size]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with exploding and vanishing gradients. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip gradients above some threshold: if a gradient is larger than the threshold, we set it to the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is typically better: the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use a num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy for obtaining very good models (if you have the compute time) is to always err on the side of making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
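To make the parameter-count heuristic above concrete, here is a rough count for a character-level model like the one in this notebook (the 83-character vocabulary is an assumed example value, and the count ignores any variables outside the LSTM and output layers):

```python
def char_rnn_param_count(num_classes, lstm_size, num_layers):
    total = 0
    input_dim = num_classes  # one-hot characters feed the first layer
    for _ in range(num_layers):
        # Each LSTM layer has 4 gates, each with weights for the
        # concatenated [input, hidden] vector plus a bias.
        total += 4 * (lstm_size * (input_dim + lstm_size) + lstm_size)
        input_dim = lstm_size  # later layers read the previous layer's output
    # Final fully connected layer: weight matrix plus biases
    total += lstm_size * num_classes + num_classes
    return total

# e.g. an 83-character vocabulary with lstm_size=512 and num_layers=2
print(char_rnn_param_count(83, 512, 2))  # 3362387, i.e. roughly 3.4M parameters
```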
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
7,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Taylor Series
\begin{equation}
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^{n}
\end{equation}
Forward expansion
\begin{equation}
f(x) \approx f(a) + f'(a) (x - a) + \frac{1}{2} f''(a) (x - a)^{2} + \frac{1}{6} f'''(a) (x - a)^{3} + \cdots
\end{equation}
Using the change of variable $x - a = h$
\begin{equation}
f(x) \approx f(a) + f'(a) h + \frac{1}{2} f''(a) h^{2} + \frac{1}{6} f'''(a) h^{3} + \cdots
\end{equation}
Substituting $a = x - h$
\begin{equation}
f(x) \approx f(x - h) + f'(x - h) h + \frac{1}{2} f''(x - h) h^{2} + \frac{1}{6} f'''(x - h) h^{3} + \cdots
\end{equation}
Using the change of variable $x = x_{i}$
\begin{equation}
f(x_{i}) \approx f(x_{i} - h) + f'(x_{i} - h) h + \frac{1}{2} f''(x_{i} - h) h^{2} + \frac{1}{6} f'''(x_{i} - h) h^{3} + \cdots
\end{equation}
Substituting $x_{i} - h = x_{i-1}$
\begin{equation}
f(x_{i}) \approx f(x_{i-1}) + f'(x_{i-1}) h + \frac{1}{2} f''(x_{i-1}) h^{2} + \frac{1}{6} f'''(x_{i-1}) h^{3} + \cdots
\end{equation}
Shifting the indices
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} + \frac{1}{6} f'''(x_{i}) h^{3} + \cdots
\end{equation}
Backward expansion
The forward expansion can be written as
\begin{equation}
f(x_{i} + h) \approx f(x_{i}) (h)^{0} + f'(x_{i}) (h)^{1} + \frac{1}{2} f''(x_{i}) (h)^{2} + \frac{1}{6} f'''(x_{i}) (h)^{3} + \cdots
\end{equation}
Then the backward expansion is
\begin{equation}
f(x_{i} - h) \approx f(x_{i}) (-h)^{0} + f'(x_{i}) (-h)^{1} + \frac{1}{2} f''(x_{i}) (-h)^{2} + \frac{1}{6} f'''(x_{i}) (-h)^{3} + \cdots
\end{equation}
Simplifying
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} - \frac{1}{6} f'''(x_{i}) h^{3} + \cdots
\end{equation}
First forward difference
Using two terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i+1}) - f(x_{i})}{h}
\end{equation}
Step1: First backward difference
Using two terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i}) - f(x_{i-1})}{h}
\end{equation}
Step2: First centered difference
Using three terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion6}
\end{equation}
Using three terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion7}
\end{equation}
Subtracting \eqref{ecuacion7} from \eqref{ecuacion6}
\begin{equation}
f(x_{i+1}) - f(x_{i-1}) \approx 2 f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i+1}) - f(x_{i-1})}{2 h}
\end{equation}
It can also be written as the average of a forward difference and a backward difference
\begin{equation}
f'(x_{i}) = \frac{\frac{f(x_{i+1}) - f(x_{i})}{h} + \frac{f(x_{i}) - f(x_{i-1})}{h}}{2}
\end{equation}
Step3: Second forward difference
Using three terms of the Taylor series expanded forward one position
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2}
\end{equation}
Multiplying by two
\begin{equation}
2 f(x_{i+1}) \approx 2 f(x_{i}) + 2 f'(x_{i}) h + f''(x_{i}) h^{2} \label{ecuacion9}
\end{equation}
Using three terms of the Taylor series expanded forward two positions
\begin{equation}
f(x_{i+2}) \approx f(x_{i}) + 2 f'(x_{i}) h + 2 f''(x_{i}) h^{2} \label{ecuacion10}
\end{equation}
Subtracting \eqref{ecuacion10} from \eqref{ecuacion9}
\begin{equation}
2 f(x_{i+1}) - f(x_{i+2}) \approx f(x_{i}) - f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i+2}) - 2 f(x_{i+1}) + f(x_{i})}{h^{2}}
\end{equation}
Step4: Second backward difference
Using three terms of the Taylor series expanded backward one position
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2}
\end{equation}
Multiplying by two
\begin{equation}
2 f(x_{i-1}) \approx 2 f(x_{i}) - 2 f'(x_{i}) h + f''(x_{i}) h^{2} \label{ecuacion12}
\end{equation}
Using three terms of the Taylor series expanded backward two positions
\begin{equation}
f(x_{i-2}) \approx f(x_{i}) - 2 f'(x_{i}) h + 2 f''(x_{i}) h^{2} \label{ecuacion13}
\end{equation}
Subtracting \eqref{ecuacion13} from \eqref{ecuacion12}
\begin{equation}
2 f(x_{i-1}) - f(x_{i-2}) \approx f(x_{i}) - f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i}) - 2 f(x_{i-1}) + f(x_{i-2})}{h^{2}}
\end{equation}
Step5: Second centered difference
Using three terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion15}
\end{equation}
Using three terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion16}
\end{equation}
Adding \eqref{ecuacion16} to \eqref{ecuacion15}
\begin{equation}
f(x_{i+1}) + f(x_{i-1}) \approx 2 f(x_{i}) + f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i+1}) - 2 f(x_{i}) + f(x_{i-1})}{h^{2}}
\end{equation}
It can also be written as the difference of a forward difference and a backward difference
\begin{equation}
f''(x_{i}) = \frac{\frac{f(x_{i+1}) - f(x_{i})}{h} - \frac{f(x_{i}) - f(x_{i-1})}{h}}{h}
\end{equation} | Python Code:
def g(x):
resultado = - 0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2
return resultado
def fx_adelante(f,x,h):
derivada = (f(x+h) - f(x))/h
return derivada
print('f\'(0.5) =', fx_adelante(g,0.5,0.25))
Explanation: Taylor Series
\begin{equation}
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^{n}
\end{equation}
Forward expansion
\begin{equation}
f(x) \approx f(a) + f'(a) (x - a) + \frac{1}{2} f''(a) (x - a)^{2} + \frac{1}{6} f'''(a) (x - a)^{3} + \cdots
\end{equation}
Using the change of variable $x - a = h$
\begin{equation}
f(x) \approx f(a) + f'(a) h + \frac{1}{2} f''(a) h^{2} + \frac{1}{6} f'''(a) h^{3} + \cdots
\end{equation}
Substituting $a = x - h$
\begin{equation}
f(x) \approx f(x - h) + f'(x - h) h + \frac{1}{2} f''(x - h) h^{2} + \frac{1}{6} f'''(x - h) h^{3} + \cdots
\end{equation}
Using the change of variable $x = x_{i}$
\begin{equation}
f(x_{i}) \approx f(x_{i} - h) + f'(x_{i} - h) h + \frac{1}{2} f''(x_{i} - h) h^{2} + \frac{1}{6} f'''(x_{i} - h) h^{3} + \cdots
\end{equation}
Substituting $x_{i} - h = x_{i-1}$
\begin{equation}
f(x_{i}) \approx f(x_{i-1}) + f'(x_{i-1}) h + \frac{1}{2} f''(x_{i-1}) h^{2} + \frac{1}{6} f'''(x_{i-1}) h^{3} + \cdots
\end{equation}
Shifting the indices
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} + \frac{1}{6} f'''(x_{i}) h^{3} + \cdots
\end{equation}
Backward expansion
The forward expansion can be written as
\begin{equation}
f(x_{i} + h) \approx f(x_{i}) (h)^{0} + f'(x_{i}) (h)^{1} + \frac{1}{2} f''(x_{i}) (h)^{2} + \frac{1}{6} f'''(x_{i}) (h)^{3} + \cdots
\end{equation}
Then the backward expansion is
\begin{equation}
f(x_{i} - h) \approx f(x_{i}) (-h)^{0} + f'(x_{i}) (-h)^{1} + \frac{1}{2} f''(x_{i}) (-h)^{2} + \frac{1}{6} f'''(x_{i}) (-h)^{3} + \cdots
\end{equation}
Simplifying
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} - \frac{1}{6} f'''(x_{i}) h^{3} + \cdots
\end{equation}
First forward difference
Using two terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i+1}) - f(x_{i})}{h}
\end{equation}
End of explanation
def fx_atras(f,x,h):
derivada = (f(x) - f(x-h))/h
return derivada
print('f\'(0.5) =', fx_atras(g,0.5,0.25))
Explanation: First backward difference
Using two terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i}) - f(x_{i-1})}{h}
\end{equation}
End of explanation
def fx_centrada(f,x,h):
derivada = (fx_adelante(f,x,h) + fx_atras(f,x,h))/2
return derivada
print('f\'(0.5) =', fx_centrada(g,0.5,0.25))
Explanation: First centered difference
Using three terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion6}
\end{equation}
Using three terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion7}
\end{equation}
Subtracting \eqref{ecuacion7} from \eqref{ecuacion6}
\begin{equation}
f(x_{i+1}) - f(x_{i-1}) \approx 2 f'(x_{i}) h
\end{equation}
Solving for the first derivative
\begin{equation}
f'(x_{i}) = \frac{f(x_{i+1}) - f(x_{i-1})}{2 h}
\end{equation}
It can also be written as the average of a forward difference and a backward difference
\begin{equation}
f'(x_{i}) = \frac{\frac{f(x_{i+1}) - f(x_{i})}{h} + \frac{f(x_{i}) - f(x_{i-1})}{h}}{2}
\end{equation}
End of explanation
def fxx_adelante(f,x,h):
derivada = (f(x+2*h) - 2*f(x+h) + f(x))/h**2
return derivada
print('f\'\'(0.5) =', fxx_adelante(g,0.5,0.25))
Explanation: Second forward difference
Using three terms of the Taylor series expanded forward one position
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2}
\end{equation}
Multiplying by two
\begin{equation}
2 f(x_{i+1}) \approx 2 f(x_{i}) + 2 f'(x_{i}) h + f''(x_{i}) h^{2} \label{ecuacion9}
\end{equation}
Using three terms of the Taylor series expanded forward two positions
\begin{equation}
f(x_{i+2}) \approx f(x_{i}) + 2 f'(x_{i}) h + 2 f''(x_{i}) h^{2} \label{ecuacion10}
\end{equation}
Subtracting \eqref{ecuacion10} from \eqref{ecuacion9}
\begin{equation}
2 f(x_{i+1}) - f(x_{i+2}) \approx f(x_{i}) - f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i+2}) - 2 f(x_{i+1}) + f(x_{i})}{h^{2}}
\end{equation}
End of explanation
def fxx_atras(f,x,h):
derivada = (f(x) - 2*f(x-h) + f(x-2*h))/h**2
return derivada
print('f\'\'(0.5) =', fxx_atras(g,0.5,0.25))
Explanation: Second backward difference
Using three terms of the Taylor series expanded backward one position
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2}
\end{equation}
Multiplying by two
\begin{equation}
2 f(x_{i-1}) \approx 2 f(x_{i}) - 2 f'(x_{i}) h + f''(x_{i}) h^{2} \label{ecuacion12}
\end{equation}
Using three terms of the Taylor series expanded backward two positions
\begin{equation}
f(x_{i-2}) \approx f(x_{i}) - 2 f'(x_{i}) h + 2 f''(x_{i}) h^{2} \label{ecuacion13}
\end{equation}
Subtracting \eqref{ecuacion13} from \eqref{ecuacion12}
\begin{equation}
2 f(x_{i-1}) - f(x_{i-2}) \approx f(x_{i}) - f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i}) - 2 f(x_{i-1}) + f(x_{i-2})}{h^{2}}
\end{equation}
End of explanation
def fxx_centrada(f,x,h):
derivada = (fx_adelante(f,x,h) - fx_atras(f,x,h))/h
return derivada
print('f\'\'(0.5) =', fxx_centrada(g,0.5,0.25))
Explanation: Second centered difference
Using three terms of the forward Taylor expansion
\begin{equation}
f(x_{i+1}) \approx f(x_{i}) + f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion15}
\end{equation}
Using three terms of the backward Taylor expansion
\begin{equation}
f(x_{i-1}) \approx f(x_{i}) - f'(x_{i}) h + \frac{1}{2} f''(x_{i}) h^{2} \label{ecuacion16}
\end{equation}
Adding \eqref{ecuacion16} to \eqref{ecuacion15}
\begin{equation}
f(x_{i+1}) + f(x_{i-1}) \approx 2 f(x_{i}) + f''(x_{i}) h^{2}
\end{equation}
Solving for the second derivative
\begin{equation}
f''(x_{i}) = \frac{f(x_{i+1}) - 2 f(x_{i}) + f(x_{i-1})}{h^{2}}
\end{equation}
It can also be written as the difference of a forward difference and a backward difference
\begin{equation}
f''(x_{i}) = \frac{\frac{f(x_{i+1}) - f(x_{i})}{h} - \frac{f(x_{i}) - f(x_{i-1})}{h}}{h}
\end{equation}
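To see these orders of accuracy in practice, the sketch below compares the three first-derivative approximations at the same point and step size used above (it redefines g and its exact derivative so the snippet is self-contained):

```python
def g(x):
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

def g_prima(x):
    # Exact derivative of g, for measuring the error of each approximation
    return -0.4*x**3 - 0.45*x**2 - x - 0.25

x, h = 0.5, 0.25
exacta = g_prima(x)                        # -0.9125
adelante = (g(x + h) - g(x)) / h           # forward difference, O(h)
atras = (g(x) - g(x - h)) / h              # backward difference, O(h)
centrada = (g(x + h) - g(x - h)) / (2*h)   # centered difference, O(h^2)
for nombre, aprox in [('forward', adelante), ('backward', atras), ('centered', centrada)]:
    print(f'{nombre}: {aprox:.7f}  error = {abs(aprox - exacta):.7f}')
# Halving h roughly halves the forward/backward errors but cuts the
# centered error by about a factor of four.
```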
End of explanation |
7,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Image captioning with visual attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download and prepare the MS-COCO dataset
You will use the MS-COCO dataset to train your model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.
Caution
Step3: Optional
Step4: Preprocess the images using InceptionV3
Next, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer.
First, you will convert the images into InceptionV3's expected format by
Step5: Initialize InceptionV3 and load the pretrained Imagenet weights
Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is 8x8x2048. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.
You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).
After all the images are passed through the network, you save the dictionary to disk.
Step6: Caching the features extracted from InceptionV3
You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).
Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code.
The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can
Step7: Preprocess and tokenize the captions
You will transform the text captions into integer sequences using the TextVectorization layer, with the following steps
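The transformation can be sketched in plain Python: standardize the text, split it on whitespace, build a vocabulary, map words to integer indices, and pad. The punctuation set, the [UNK] convention, and the zero-padding below are illustrative assumptions rather than the exact TextVectorization defaults:

```python
import re
from collections import Counter

def tokenize_captions(captions, max_vocab=5000):
    # Standardize: lowercase and strip punctuation, keeping <start>/<end> intact
    cleaned = [re.sub(r"[!\"#$%&()*+.,\-/:;=?@\[\]^_`{|}~]", "", c.lower())
               for c in captions]
    split = [c.split() for c in cleaned]
    # Vocabulary of most frequent words; index 0 = padding, index 1 = [UNK]
    counts = Counter(word for words in split for word in words)
    vocab = ['', '[UNK]'] + [w for w, _ in counts.most_common(max_vocab - 2)]
    word_to_index = {w: i for i, w in enumerate(vocab)}
    # Map words to integer indices, then pad every sequence to the longest one
    seqs = [[word_to_index.get(w, 1) for w in words] for words in split]
    max_len = max(len(s) for s in seqs)
    return [s + [0] * (max_len - len(s)) for s in seqs], word_to_index

seqs, w2i = tokenize_captions(['<start> A man rides a wave <end>',
                               '<start> A dog runs <end>'])
print(seqs)
```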
Step8: Split the data into training and testing
Step9: Create a tf.data dataset for training
Your images and captions are ready! Next, let's create a tf.data dataset to use for training your model.
Step10: Model
Fun fact
Step11: Checkpoint
Step12: Training
You extract the features stored in the respective .npy files and then pass those features through the encoder.
The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply them with the optimizer to backpropagate.
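The role teacher forcing plays in choosing the decoder's next input can be illustrated with a deliberately tiny, imperfect stand-in predictor (everything below is a toy, not the real decoder):

```python
def toy_model(token):
    # A deliberately imperfect next-token predictor: it gets 'l' wrong
    predictions = {'<start>': 'h', 'h': 'e', 'e': 'l', 'l': 'x', 'x': 'z', 'z': 'z'}
    return predictions[token]

targets = ['h', 'e', 'l', 'l', 'o']

# Teacher forcing: the input at each step is the ground-truth target,
# regardless of what the model predicted at the previous step.
inputs_tf = ['<start>'] + targets[:-1]
preds_tf = [toy_model(t) for t in inputs_tf]

# Without teacher forcing, predictions are fed back in, so the single
# mistake at 'l' keeps compounding through the rest of the sequence.
token, preds_free = '<start>', []
for _ in targets:
    token = toy_model(token)
    preds_free.append(token)

print(preds_tf)    # ['h', 'e', 'l', 'x', 'x']
print(preds_free)  # ['h', 'e', 'l', 'x', 'z']
```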
Step13: Caption!
The evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
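The shape of that prediction loop can be sketched with a stand-in next-word function (the toy_next_token table below is hypothetical, standing in for running the trained decoder and taking its most likely word):

```python
def generate_caption(next_token, max_length=20):
    token, result = '<start>', []
    for _ in range(max_length):
        token = next_token(token)   # feed the previous prediction back in
        if token == '<end>':        # stop once the end token is predicted
            break
        result.append(token)
    return ' '.join(result)

def toy_next_token(token):
    # Hypothetical stand-in for "run the decoder, take the most likely word"
    table = {'<start>': 'a', 'a': 'surfer', 'surfer': 'on',
             'on': 'the', 'the': 'wave', 'wave': '<end>'}
    return table[token]

print(generate_caption(toy_next_token))  # a surfer on the wave
```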
Step14: Try it on your own images
For fun, below you're provided a method you can use to caption your own images with the model you've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!) | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
# You'll generate plots of attention in order to see which parts of an image
# your model focuses on during captioning
import matplotlib.pyplot as plt
import collections
import random
import numpy as np
import os
import time
import json
from PIL import Image
Explanation: Image captioning with visual attention
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/image_captioning">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/image_captioning.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/image_captioning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Given an image like the example below, your goal is to generate a caption such as "a surfer riding on a wave".
Image Source; License: Public Domain
To accomplish this, you'll use an attention-based model, which enables us to see what parts of the image the model focuses on as it generates a caption.
The model architecture is similar to Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.
This notebook is an end-to-end example. When you run the notebook, it downloads the MS-COCO dataset, preprocesses and caches a subset of images using Inception V3, trains an encoder-decoder model, and generates captions on new images using the trained model.
In this example, you will train a model on a relatively small amount of data—the first 30,000 captions for about 20,000 images (because there are multiple captions per image in the dataset).
End of explanation
# Download caption annotation files
annotation_folder = '/annotations/'
if not os.path.exists(os.path.abspath('.') + annotation_folder):
annotation_zip = tf.keras.utils.get_file('captions.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/annotations/annotations_trainval2014.zip',
extract=True)
annotation_file = os.path.dirname(annotation_zip)+'/annotations/captions_train2014.json'
os.remove(annotation_zip)
# Download image files
image_folder = '/train2014/'
if not os.path.exists(os.path.abspath('.') + image_folder):
image_zip = tf.keras.utils.get_file('train2014.zip',
cache_subdir=os.path.abspath('.'),
origin='http://images.cocodataset.org/zips/train2014.zip',
extract=True)
PATH = os.path.dirname(image_zip) + image_folder
os.remove(image_zip)
else:
PATH = os.path.abspath('.') + image_folder
Explanation: Download and prepare the MS-COCO dataset
You will use the MS-COCO dataset to train your model. The dataset contains over 82,000 images, each of which has at least 5 different caption annotations. The code below downloads and extracts the dataset automatically.
Caution: large download ahead. You'll use the training set, which is a 13GB file.
End of explanation
with open(annotation_file, 'r') as f:
annotations = json.load(f)
# Group all captions together having the same image ID.
image_path_to_caption = collections.defaultdict(list)
for val in annotations['annotations']:
caption = f"<start> {val['caption']} <end>"
image_path = PATH + 'COCO_train2014_' + '%012d.jpg' % (val['image_id'])
image_path_to_caption[image_path].append(caption)
image_paths = list(image_path_to_caption.keys())
random.shuffle(image_paths)
# Select the first 6000 image_paths from the shuffled set.
# Approximately each image id has 5 captions associated with it, so that will
# lead to 30,000 examples.
train_image_paths = image_paths[:6000]
print(len(train_image_paths))
train_captions = []
img_name_vector = []
for image_path in train_image_paths:
caption_list = image_path_to_caption[image_path]
train_captions.extend(caption_list)
img_name_vector.extend([image_path] * len(caption_list))
print(train_captions[0])
Image.open(img_name_vector[0])
Explanation: Optional: limit the size of the training set
To speed up training for this tutorial, you'll use a subset of 30,000 captions and their corresponding images to train your model. Choosing to use more data would result in improved captioning quality.
End of explanation
def load_image(image_path):
img = tf.io.read_file(image_path)
img = tf.io.decode_jpeg(img, channels=3)
img = tf.keras.layers.Resizing(299, 299)(img)
img = tf.keras.applications.inception_v3.preprocess_input(img)
return img, image_path
Explanation: Preprocess the images using InceptionV3
Next, you will use InceptionV3 (which is pretrained on Imagenet) to classify each image. You will extract features from the last convolutional layer.
First, you will convert the images into InceptionV3's expected format by:
* Resizing the image to 299px by 299px
* Preprocess the images using the preprocess_input method to normalize the image so that it contains pixels in the range of -1 to 1, which matches the format of the images used to train InceptionV3.
End of explanation
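The normalization step can be checked without TensorFlow: `preprocess_input` for InceptionV3 maps pixel values from [0, 255] into [-1, 1]. A minimal pure-Python sketch of just that scaling (resizing is omitted, and the helper name is illustrative):

```python
def scale_like_inception(pixels):
    # Map pixel values in [0, 255] to floats in [-1, 1], mirroring what
    # tf.keras.applications.inception_v3.preprocess_input does per channel.
    return [p / 127.5 - 1.0 for p in pixels]

scaled = scale_like_inception([0, 127, 255])
# the endpoints land exactly on -1.0 and 1.0; mid-gray maps close to 0.0
```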
image_model = tf.keras.applications.InceptionV3(include_top=False,
weights='imagenet')
new_input = image_model.input
hidden_layer = image_model.layers[-1].output
image_features_extract_model = tf.keras.Model(new_input, hidden_layer)
Explanation: Initialize InceptionV3 and load the pretrained Imagenet weights
Now you'll create a tf.keras model where the output layer is the last convolutional layer in the InceptionV3 architecture. The shape of the output of this layer is 8x8x2048. You use the last convolutional layer because you are using attention in this example. You don't perform this initialization during training because it could become a bottleneck.
You forward each image through the network and store the resulting vector in a dictionary (image_name --> feature_vector).
After all the images are passed through the network, you save the dictionary to disk.
End of explanation
# Get unique images
encode_train = sorted(set(img_name_vector))
# Feel free to change batch_size according to your system configuration
image_dataset = tf.data.Dataset.from_tensor_slices(encode_train)
image_dataset = image_dataset.map(
load_image, num_parallel_calls=tf.data.AUTOTUNE).batch(16)
for img, path in image_dataset:
batch_features = image_features_extract_model(img)
batch_features = tf.reshape(batch_features,
(batch_features.shape[0], -1, batch_features.shape[3]))
for bf, p in zip(batch_features, path):
path_of_feature = p.numpy().decode("utf-8")
np.save(path_of_feature, bf.numpy())
Explanation: Caching the features extracted from InceptionV3
You will pre-process each image with InceptionV3 and cache the output to disk. Caching the output in RAM would be faster but also memory intensive, requiring 8 * 8 * 2048 floats per image. At the time of writing, this exceeds the memory limitations of Colab (currently 12GB of memory).
Performance could be improved with a more sophisticated caching strategy (for example, by sharding the images to reduce random access disk I/O), but that would require more code.
The caching will take about 10 minutes to run in Colab with a GPU. If you'd like to see a progress bar, you can:
Install tqdm:
!pip install tqdm
Import tqdm:
from tqdm import tqdm
Change the following line:
for img, path in image_dataset:
to:
for img, path in tqdm(image_dataset):
End of explanation
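The same compute-once, read-many caching pattern can be sketched with only the standard library, using `pickle` in place of `np.save` (the file layout and helper names here are illustrative):

```python
import os
import pickle
import tempfile

def cache_features(image_path, features, cache_dir):
    # Persist one feature vector, keyed by the image file's basename --
    # the same idea as np.save(path_of_feature, bf.numpy()) above.
    target = os.path.join(cache_dir, os.path.basename(image_path) + ".pkl")
    with open(target, "wb") as f:
        pickle.dump(features, f)

def load_features(image_path, cache_dir):
    target = os.path.join(cache_dir, os.path.basename(image_path) + ".pkl")
    with open(target, "rb") as f:
        return pickle.load(f)

cache_dir = tempfile.mkdtemp()
cache_features("COCO_train2014_000000000009.jpg", [0.1, 0.2, 0.3], cache_dir)
restored = load_features("COCO_train2014_000000000009.jpg", cache_dir)
```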
caption_dataset = tf.data.Dataset.from_tensor_slices(train_captions)
# Override TextVectorization's default standardization so that the "<" and
# ">" characters survive, preserving the <start> and <end> tokens.
def standardize(inputs):
inputs = tf.strings.lower(inputs)
return tf.strings.regex_replace(inputs,
r"[!\"#$%&\(\)\*\+.,\-/:;=?@\[\\\]^_`{|}~]", "")
# Max word count for a caption.
max_length = 50
# Use the top 5000 words for a vocabulary.
vocabulary_size = 5000
tokenizer = tf.keras.layers.TextVectorization(
max_tokens=vocabulary_size,
standardize=standardize,
output_sequence_length=max_length)
# Learn the vocabulary from the caption data.
tokenizer.adapt(caption_dataset)
# Create the tokenized vectors
cap_vector = caption_dataset.map(lambda x: tokenizer(x))
# Create mappings for words to indices and indices to words.
word_to_index = tf.keras.layers.StringLookup(
mask_token="",
vocabulary=tokenizer.get_vocabulary())
index_to_word = tf.keras.layers.StringLookup(
mask_token="",
vocabulary=tokenizer.get_vocabulary(),
invert=True)
Explanation: Preprocess and tokenize the captions
You will transform the text captions into integer sequences using the TextVectorization layer, with the following steps:
Use adapt to iterate over all captions, split the captions into words, and compute a vocabulary of the top 5,000 words (to save memory).
Tokenize all captions by mapping each word to its index in the vocabulary. All output sequences will be padded to length 50.
Create word-to-index and index-to-word mappings to display results.
End of explanation
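The three steps can be imitated with a simplified whitespace tokenizer (this sketch skips the custom `standardize` function; index 0 is padding and index 1 is the out-of-vocabulary token, matching TextVectorization's convention):

```python
from collections import Counter

def build_vocab(captions, max_tokens):
    # Keep the most frequent words; reserve 0 for padding, 1 for unknowns.
    counts = Counter(w for cap in captions for w in cap.lower().split())
    words = ["", "[UNK]"] + [w for w, _ in counts.most_common(max_tokens - 2)]
    return {w: i for i, w in enumerate(words)}

def tokenize(caption, word_to_idx, seq_len):
    # Map each word to its index (1 if unknown), then pad/truncate to seq_len.
    ids = [word_to_idx.get(w, 1) for w in caption.lower().split()]
    return (ids + [0] * seq_len)[:seq_len]

captions = ["<start> a dog runs <end>", "<start> a cat sleeps <end>"]
vocab = build_vocab(captions, max_tokens=20)
ids = tokenize("<start> a dog <end>", vocab, seq_len=6)
```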
img_to_cap_vector = collections.defaultdict(list)
for img, cap in zip(img_name_vector, cap_vector):
img_to_cap_vector[img].append(cap)
# Create training and validation sets using a random 80-20 split.
img_keys = list(img_to_cap_vector.keys())
random.shuffle(img_keys)
slice_index = int(len(img_keys)*0.8)
img_name_train_keys, img_name_val_keys = img_keys[:slice_index], img_keys[slice_index:]
img_name_train = []
cap_train = []
for imgt in img_name_train_keys:
capt_len = len(img_to_cap_vector[imgt])
img_name_train.extend([imgt] * capt_len)
cap_train.extend(img_to_cap_vector[imgt])
img_name_val = []
cap_val = []
for imgv in img_name_val_keys:
capv_len = len(img_to_cap_vector[imgv])
img_name_val.extend([imgv] * capv_len)
cap_val.extend(img_to_cap_vector[imgv])
len(img_name_train), len(cap_train), len(img_name_val), len(cap_val)
Explanation: Split the data into training and testing
End of explanation
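The key point of the split above is that it is done over image keys, not over individual captions, so all captions for a given image land on the same side of the split. A small self-contained sketch of that idea:

```python
import random

def split_by_image(img_to_caps, train_frac=0.8, seed=0):
    # Shuffle the image keys, cut at train_frac, then expand each key
    # back into its (image, caption) pairs -- no image straddles the split.
    keys = sorted(img_to_caps)
    random.Random(seed).shuffle(keys)
    cut = int(len(keys) * train_frac)
    train = [(k, c) for k in keys[:cut] for c in img_to_caps[k]]
    val = [(k, c) for k in keys[cut:] for c in img_to_caps[k]]
    return train, val

data = {"img_%02d.jpg" % i: ["cap a", "cap b"] for i in range(10)}
train, val = split_by_image(data)
```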
# Feel free to change these parameters according to your system's configuration
BATCH_SIZE = 64
BUFFER_SIZE = 1000
embedding_dim = 256
units = 512
num_steps = len(img_name_train) // BATCH_SIZE
# Shape of the vector extracted from InceptionV3 is (64, 2048)
# These two variables represent that vector shape
features_shape = 2048
attention_features_shape = 64
# Load the numpy files
def map_func(img_name, cap):
img_tensor = np.load(img_name.decode('utf-8')+'.npy')
return img_tensor, cap
dataset = tf.data.Dataset.from_tensor_slices((img_name_train, cap_train))
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item1, item2: tf.numpy_function(
map_func, [item1, item2], [tf.float32, tf.int64]),
num_parallel_calls=tf.data.AUTOTUNE)
# Shuffle and batch
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
dataset = dataset.prefetch(buffer_size=tf.data.AUTOTUNE)
Explanation: Create a tf.data dataset for training
Your images and captions are ready! Next, let's create a tf.data dataset to use for training your model.
End of explanation
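Note that `shuffle(BUFFER_SIZE)` does not shuffle the whole dataset at once; it keeps a buffer of BUFFER_SIZE elements and samples from it as new elements stream in. A rough pure-Python approximation of `shuffle(...).batch(...)`:

```python
import random

def shuffled_batches(examples, batch_size, buffer_size, seed=0):
    # Keep a buffer; once it is full, emit a random element for each new
    # arrival, then flush the remainder and group the stream into batches.
    rng = random.Random(seed)
    buffer, stream = [], []
    for ex in examples:
        buffer.append(ex)
        if len(buffer) >= buffer_size:
            stream.append(buffer.pop(rng.randrange(len(buffer))))
    rng.shuffle(buffer)
    stream.extend(buffer)
    return [stream[i:i + batch_size]
            for i in range(0, len(stream), batch_size)]

batches = shuffled_batches(list(range(10)), batch_size=4, buffer_size=5)
```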
class BahdanauAttention(tf.keras.Model):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, features, hidden):
# features(CNN_encoder output) shape == (batch_size, 64, embedding_dim)
# hidden shape == (batch_size, hidden_size)
# hidden_with_time_axis shape == (batch_size, 1, hidden_size)
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# attention_hidden_layer shape == (batch_size, 64, units)
attention_hidden_layer = (tf.nn.tanh(self.W1(features) +
self.W2(hidden_with_time_axis)))
# score shape == (batch_size, 64, 1)
# This gives you an unnormalized score for each image feature.
score = self.V(attention_hidden_layer)
# attention_weights shape == (batch_size, 64, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * features
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
class CNN_Encoder(tf.keras.Model):
# Since the features have already been extracted and saved to disk,
# this encoder simply passes them through a fully connected layer.
def __init__(self, embedding_dim):
super(CNN_Encoder, self).__init__()
# shape after fc == (batch_size, 64, embedding_dim)
self.fc = tf.keras.layers.Dense(embedding_dim)
def call(self, x):
x = self.fc(x)
x = tf.nn.relu(x)
return x
class RNN_Decoder(tf.keras.Model):
def __init__(self, embedding_dim, units, vocab_size):
super(RNN_Decoder, self).__init__()
self.units = units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc1 = tf.keras.layers.Dense(self.units)
self.fc2 = tf.keras.layers.Dense(vocab_size)
self.attention = BahdanauAttention(self.units)
def call(self, x, features, hidden):
# defining attention as a separate model
context_vector, attention_weights = self.attention(features, hidden)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# shape == (batch_size, max_length, hidden_size)
x = self.fc1(output)
# x shape == (batch_size * max_length, hidden_size)
x = tf.reshape(x, (-1, x.shape[2]))
# output shape == (batch_size * max_length, vocab)
x = self.fc2(x)
return x, state, attention_weights
def reset_state(self, batch_size):
return tf.zeros((batch_size, self.units))
encoder = CNN_Encoder(embedding_dim)
decoder = RNN_Decoder(embedding_dim, units, tokenizer.vocabulary_size())
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
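The mask in loss_function zeroes out the loss at padded positions, where the target token id is 0, so the model is never penalized for predictions past the end of a caption. A pure-Python sketch of that masking for a single sequence:

```python
def masked_mean_loss(target_ids, per_token_loss):
    # Zero the loss wherever the target is padding (token id 0); as in
    # loss_function above, the mean is still taken over all positions.
    masked = [loss if tok != 0 else 0.0
              for tok, loss in zip(target_ids, per_token_loss)]
    return sum(masked) / len(masked)

# two real tokens followed by two padding positions
loss = masked_mean_loss([5, 9, 0, 0], [1.0, 3.0, 7.0, 7.0])
```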
Explanation: Model
Fun fact: the decoder below is identical to the one in the example for Neural Machine Translation with Attention.
The model architecture is inspired by the Show, Attend and Tell paper.
In this example, you extract the features from the lower convolutional layer of InceptionV3 giving us a vector of shape (8, 8, 2048).
You squash that to a shape of (64, 2048).
This vector is then passed through the CNN Encoder (which consists of a single Fully connected layer).
The RNN (here GRU) attends over the image to predict the next word.
End of explanation
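The core of the attention step -- a softmax over the spatial positions followed by a weighted sum of their feature vectors -- can be sketched without the learned W1/W2/V layers (the scores below are made-up stand-ins for V's output):

```python
import math

def attention_pool(features, scores):
    # Normalize the unnormalized scores with a softmax, then form the
    # context vector as the weighted sum of per-position feature vectors.
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(features[0])
    context = [sum(w * f[d] for w, f in zip(weights, features))
               for d in range(dim)]
    return context, weights

features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 positions, 2-dim features
context, weights = attention_pool(features, [0.1, 0.1, 5.0])
```

Because the third position scores much higher, the attention weights concentrate there and the context vector ends up close to its feature vector.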
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(encoder=encoder,
decoder=decoder,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
start_epoch = 0
if ckpt_manager.latest_checkpoint:
start_epoch = int(ckpt_manager.latest_checkpoint.split('-')[-1])
# restoring the latest checkpoint in checkpoint_path
ckpt.restore(ckpt_manager.latest_checkpoint)
Explanation: Checkpoint
End of explanation
# Defined in a separate cell so that re-running the training cell
# does not reset the loss_plot array.
loss_plot = []
@tf.function
def train_step(img_tensor, target):
loss = 0
# initializing the hidden state for each batch
# because the captions are not related from image to image
hidden = decoder.reset_state(batch_size=target.shape[0])
dec_input = tf.expand_dims([word_to_index('<start>')] * target.shape[0], 1)
with tf.GradientTape() as tape:
features = encoder(img_tensor)
for i in range(1, target.shape[1]):
# passing the features through the decoder
predictions, hidden, _ = decoder(dec_input, features, hidden)
loss += loss_function(target[:, i], predictions)
# using teacher forcing
dec_input = tf.expand_dims(target[:, i], 1)
total_loss = (loss / int(target.shape[1]))
trainable_variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, trainable_variables)
optimizer.apply_gradients(zip(gradients, trainable_variables))
return loss, total_loss
EPOCHS = 20
for epoch in range(start_epoch, EPOCHS):
start = time.time()
total_loss = 0
for (batch, (img_tensor, target)) in enumerate(dataset):
batch_loss, t_loss = train_step(img_tensor, target)
total_loss += t_loss
if batch % 100 == 0:
average_batch_loss = batch_loss.numpy()/int(target.shape[1])
print(f'Epoch {epoch+1} Batch {batch} Loss {average_batch_loss:.4f}')
# storing the epoch end loss value to plot later
loss_plot.append(total_loss / num_steps)
if epoch % 5 == 0:
ckpt_manager.save()
print(f'Epoch {epoch+1} Loss {total_loss/num_steps:.6f}')
print(f'Time taken for 1 epoch {time.time()-start:.2f} sec\n')
plt.plot(loss_plot)
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Loss Plot')
plt.show()
Explanation: Training
You extract the features stored in the respective .npy files and then pass those features through the encoder.
The encoder output, the hidden state (initialized to 0), and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply them to the model's trainable variables via the optimizer.
End of explanation
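Teacher forcing amounts to feeding the decoder the ground-truth token from the previous step rather than its own prediction; in other words, the decoder's input sequence is the target shifted right by one:

```python
def teacher_forced_inputs(target_ids, start_id):
    # At step i the decoder input is the true token from step i-1;
    # the first input is the <start> token.
    return [start_id] + target_ids[:-1]

target = [7, 12, 3, 2]                       # e.g. "a dog runs <end>"
inputs = teacher_forced_inputs(target, start_id=1)
# the decoder reads [1, 7, 12, 3] and is trained to predict [7, 12, 3, 2]
```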
def evaluate(image):
attention_plot = np.zeros((max_length, attention_features_shape))
hidden = decoder.reset_state(batch_size=1)
temp_input = tf.expand_dims(load_image(image)[0], 0)
img_tensor_val = image_features_extract_model(temp_input)
img_tensor_val = tf.reshape(img_tensor_val, (img_tensor_val.shape[0],
-1,
img_tensor_val.shape[3]))
features = encoder(img_tensor_val)
dec_input = tf.expand_dims([word_to_index('<start>')], 0)
result = []
for i in range(max_length):
predictions, hidden, attention_weights = decoder(dec_input,
features,
hidden)
attention_plot[i] = tf.reshape(attention_weights, (-1, )).numpy()
predicted_id = tf.random.categorical(predictions, 1)[0][0].numpy()
predicted_word = tf.compat.as_text(index_to_word(predicted_id).numpy())
result.append(predicted_word)
if predicted_word == '<end>':
return result, attention_plot
dec_input = tf.expand_dims([predicted_id], 0)
attention_plot = attention_plot[:len(result), :]
return result, attention_plot
def plot_attention(image, result, attention_plot):
temp_image = np.array(Image.open(image))
fig = plt.figure(figsize=(10, 10))
len_result = len(result)
for i in range(len_result):
temp_att = np.resize(attention_plot[i], (8, 8))
grid_size = max(int(np.ceil(len_result/2)), 2)
ax = fig.add_subplot(grid_size, grid_size, i+1)
ax.set_title(result[i])
img = ax.imshow(temp_image)
ax.imshow(temp_att, cmap='gray', alpha=0.6, extent=img.get_extent())
plt.tight_layout()
plt.show()
# captions on the validation set
rid = np.random.randint(0, len(img_name_val))
image = img_name_val[rid]
real_caption = ' '.join([tf.compat.as_text(index_to_word(i).numpy())
for i in cap_val[rid] if i not in [0]])
result, attention_plot = evaluate(image)
print('Real Caption:', real_caption)
print('Prediction Caption:', ' '.join(result))
plot_attention(image, result, attention_plot)
Explanation: Caption!
The evaluate function is similar to the training loop, except you don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
End of explanation
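The shape of that loop can be shown with a stand-in decoder: feed the previous prediction back in, and stop at the end token or at max_length. Here `step_fn` is a toy replacement for the real decoder call:

```python
def decode_until_end(step_fn, start_token, end_token, max_length):
    # Repeatedly feed the model's own last prediction back in, and stop
    # as soon as the end token is produced -- the structure of evaluate().
    result, token = [], start_token
    for _ in range(max_length):
        token = step_fn(token)
        result.append(token)
        if token == end_token:
            break
    return result

# toy "decoder" that deterministically predicts the next integer
out = decode_until_end(lambda t: t + 1, start_token=0, end_token=3,
                       max_length=50)
```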
image_url = 'https://tensorflow.org/images/surf.jpg'
image_extension = image_url[-4:]
image_path = tf.keras.utils.get_file('image'+image_extension, origin=image_url)
result, attention_plot = evaluate(image_path)
print('Prediction Caption:', ' '.join(result))
plot_attention(image_path, result, attention_plot)
# opening the image
Image.open(image_path)
Explanation: Try it on your own images
For fun, below you're provided a method you can use to caption your own images with the model you've just trained. Keep in mind, it was trained on a relatively small amount of data, and your images may be different from the training data (so be prepared for weird results!)
End of explanation |
7,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing Functions
One of the core principles of any programming language is, "Don't Repeat Yourself". If you have an action that should occur many times, you can define that action once and then call that code whenever you need to carry out that action.
We are already repeating ourselves in our code, so this is a good time to introduce simple functions. Functions mean less work for us as programmers, and effective use of functions results in code that is less error-prone.
Previous
Step1: This code will not run, but it shows how functions are used in general.
Defining a function
Give the keyword def, which tells Python that you are about to define a function.
Give your function a name. A variable name tells you what kind of value the variable contains; a function name should tell you what the function does.
Give names for each value the function needs in order to do its work.
These are basically variable names, but they are only used in the function.
They can be different names than what you use in the rest of your program.
These are called the function's arguments.
Make sure the function definition line ends with a colon.
Inside the function, write whatever code you need to make the function do its work.
Using your function
To call your function, write its name followed by parentheses.
Inside the parentheses, give the values you want the function to work with.
These can be variables such as current_name and current_age, or they can be actual values such as 'eric' and 5.
top
Basic Examples
For a simple first example, we will look at a program that compliments people. Let's look at the example, and then try to understand the code. First we will look at a version of this program as we would have written it earlier, with no functions.
Step2: Functions take repeated code, put it in one place, and then you call that code when you want to use it. Here's what the same program looks like with a function.
Step3: In our original code, each pair of print statements was run three times, and the only difference was the name of the person being thanked. When you see repetition like this, you can usually make your program more efficient by defining a function.
The keyword def tells Python that we are about to define a function. We give our function a name, thank_you() in this case. A variable's name should tell us what kind of information it holds; a function's name should tell us what the function does. We then put parentheses. Inside these parentheses we create variable names for any variable the function will need to be given in order to do its job. In this case the function will need a name to include in the thank you message. The variable name will hold the value that is passed into the function thank_you().
To use a function we give the function's name, and then put any values the function needs in order to do its work. In this case we call the function three times, each time passing it a different name.
A common error
A function must be defined before you use it in your program. For example, putting the function at the end of the program would not work.
Step4: On the first line we ask Python to run the function thank_you(), but Python does not yet know how to do this function. We define our functions at the beginning of our programs, and then we can use them when we need to.
A second example
When we introduced the different methods for sorting a list, our code got very repetitive. It takes two lines of code to print a list using a for loop, so these two lines are repeated whenever you want to print out the contents of a list. This is the perfect opportunity to use a function, so let's see how the code looks with a function.
First, let's see the code we had without a function
Step5: Here's what the same code looks like, using a function to print out the list
Step6: This is much cleaner code. We have an action we want to take, which is to show the students in our list along with a message. We give this action a name, show_students().
This function needs two pieces of information to do its work, the list of students and a message to display. Inside the function, the code for printing the message and looping through the list is exactly as it was in the non-function code.
Now the rest of our program is cleaner, because it gets to focus on the things we are changing in the list, rather than having code for printing the list. We define the list, then we sort it and call our function to print the list. We sort it again, and then call the printing function a second time, with a different message. This is much more readable code.
Advantages of using functions
You might be able to see some advantages of using functions, through this example
Step7: You can think of functions as a way to "teach" Python some new behavior. In this case, we taught Python how to create a list of students using hyphens; now we can tell Python to do this with our students whenever we want to.
Returning a Value
Each function you create can return a value. This can be in addition to the primary work the function does, or it can be the function's main job. The following function takes in a number, and returns the corresponding word for that number
Step8: It's helpful sometimes to see programs that don't quite work as they are supposed to, and then see how those programs can be improved. In this case, there are no Python errors; all of the code has proper Python syntax. But there is a logical error, in the first line of the output.
We want to either not include 0 in the range we send to the function, or have the function return something other than None when it receives a value that it doesn't know. Let's teach our function the word 'zero', but let's also add an else clause that returns a more informative message for numbers that are not in the if-chain.
Step9: If you use a return statement in one of your functions, keep in mind that the function stops executing as soon as it hits a return statement. For example, we can add a line to the get_number_word() function that will never execute, because it comes after the function has returned a value | Python Code:
# Let's define a function.
def function_name(argument_1, argument_2):
# Do whatever we want this function to do,
# using argument_1 and argument_2
# Use function_name to call the function.
function_name(value_1, value_2)
Explanation: Introducing Functions
One of the core principles of any programming language is, "Don't Repeat Yourself". If you have an action that should occur many times, you can define that action once and then call that code whenever you need to carry out that action.
We are already repeating ourselves in our code, so this is a good time to introduce simple functions. Functions mean less work for us as programmers, and effective use of functions results in code that is less error-prone.
Previous: Lists and Tuples |
Home |
Next: If Statements
Contents
What are functions?
General Syntax
Examples
Returning a Value
Exercises
Challenges
What are functions?
Functions are a set of actions that we group together, and give a name to. You have already used a number of functions from the core Python language, such as string.title() and list.sort(). We can define our own functions, which allows us to "teach" Python new behavior.
General Syntax
A general function looks something like this:
End of explanation
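For comparison, here is a runnable version of the same template, with the placeholder names filled in (the example names are invented for illustration):

```python
def describe_pet(animal, name):
    # Do the function's work using both arguments.
    return "I have a %s named %s." % (animal, name)

# Call the function with two values.
sentence = describe_pet('dog', 'Willie')
print(sentence)
```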
print("You are doing good work, Adriana!")
print("Thank you very much for your efforts on this project.")
print("\nYou are doing good work, Billy!")
print("Thank you very much for your efforts on this project.")
print("\nYou are doing good work, Caroline!")
print("Thank you very much for your efforts on this project.")
Explanation: This code will not run, but it shows how functions are used in general.
Defining a function
Give the keyword def, which tells Python that you are about to define a function.
Give your function a name. A variable name tells you what kind of value the variable contains; a function name should tell you what the function does.
Give names for each value the function needs in order to do its work.
These are basically variable names, but they are only used in the function.
They can be different names than what you use in the rest of your program.
These are called the function's arguments.
Make sure the function definition line ends with a colon.
Inside the function, write whatever code you need to make the function do its work.
Using your function
To call your function, write its name followed by parentheses.
Inside the parentheses, give the values you want the function to work with.
These can be variables such as current_name and current_age, or they can be actual values such as 'eric' and 5.
top
Basic Examples
For a simple first example, we will look at a program that compliments people. Let's look at the example, and then try to understand the code. First we will look at a version of this program as we would have written it earlier, with no functions.
End of explanation
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
thank_you('Adriana')
thank_you('Billy')
thank_you('Caroline')
Explanation: Functions take repeated code, put it in one place, and then you call that code when you want to use it. Here's what the same program looks like with a function.
End of explanation
thank_you('Adriana')
thank_you('Billy')
thank_you('Caroline')
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
Explanation: In our original code, each pair of print statements was run three times, and the only difference was the name of the person being thanked. When you see repetition like this, you can usually make your program more efficient by defining a function.
The keyword def tells Python that we are about to define a function. We give our function a name, thank_you() in this case. A variable's name should tell us what kind of information it holds; a function's name should tell us what the function does. We then put parentheses. Inside these parentheses we create variable names for any variable the function will need to be given in order to do its job. In this case the function will need a name to include in the thank you message. The variable name will hold the value that is passed into the function thank_you().
To use a function we give the function's name, and then put any values the function needs in order to do its work. In this case we call the function three times, each time passing it a different name.
A common error
A function must be defined before you use it in your program. For example, putting the function at the end of the program would not work.
End of explanation
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
print(student.title())
# Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
print(student.title())
Explanation: On the first line we ask Python to run the function thank_you(), but Python does not yet know how to do this function. We define our functions at the beginning of our programs, and then we can use them when we need to.
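You can see this failure directly: calling a function before its def line has run raises a NameError. Wrapping the call in try/except just makes the error visible without stopping the program:

```python
try:
    say_hello()        # not defined yet, so this raises NameError
except NameError as e:
    error_message = str(e)

def say_hello():
    print("Hello!")

say_hello()            # works now that the definition has been executed
```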
A second example
When we introduced the different methods for sorting a list, our code got very repetitive. It takes two lines of code to print a list using a for loop, so these two lines are repeated whenever you want to print out the contents of a list. This is the perfect opportunity to use a function, so let's see how the code looks with a function.
First, let's see the code we had without a function:
End of explanation
def show_students(students, message):
# Print out a message, and then the list of students
print(message)
for student in students:
print(student.title())
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
show_students(students, "Our students are currently in alphabetical order.")
# Put students in reverse alphabetical order.
students.sort(reverse=True)
show_students(students, "\nOur students are now in reverse alphabetical order.")
Explanation: Here's what the same code looks like, using a function to print out the list:
End of explanation
def show_students(students, message):
# Print out a message, and then the list of students
print(message)
for student in students:
print("- " + student.title())
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
show_students(students, "Our students are currently in alphabetical order.")
# Put students in reverse alphabetical order.
students.sort(reverse=True)
show_students(students, "\nOur students are now in reverse alphabetical order.")
Explanation: This is much cleaner code. We have an action we want to take, which is to show the students in our list along with a message. We give this action a name, show_students().
This function needs two pieces of information to do its work, the list of students and a message to display. Inside the function, the code for printing the message and looping through the list is exactly as it was in the non-function code.
Now the rest of our program is cleaner, because it gets to focus on the things we are changing in the list, rather than having code for printing the list. We define the list, then we sort it and call our function to print the list. We sort it again, and then call the printing function a second time, with a different message. This is much more readable code.
Advantages of using functions
You might be able to see some advantages of using functions, through this example:
We write a set of instructions once. We save some work in this simple example, and we save even more work in larger programs.
When our function works, we don't have to worry about that code anymore. Every time you repeat code in your program, you introduce an opportunity to make a mistake. Writing a function means there is one place to fix mistakes, and when those bugs are fixed, we can be confident that this function will continue to work correctly.
We can modify our function's behavior, and that change takes effect every time the function is called. This is much better than deciding we need some new behavior, and then having to change code in many different places in our program.
For a quick example, let's say we decide our printed output would look better with some form of a bulleted list. Without functions, we'd have to change each print statement. With a function, we change just the print statement in the function:
End of explanation
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
# ...
# Let's try out our function.
for current_number in range(0,4):
number_word = get_number_word(current_number)
print(current_number, number_word)
Explanation: You can think of functions as a way to "teach" Python some new behavior. In this case, we taught Python how to create a list of students using hyphens; now we can tell Python to do this with our students whenever we want to.
Returning a Value
Each function you create can return a value. This can be in addition to the primary work the function does, or it can be the function's main job. The following function takes in a number, and returns the corresponding word for that number:
End of explanation
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 0:
return 'zero'
elif number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
else:
return "I'm sorry, I don't know that number."
# Let's try out our function.
for current_number in range(0,6):
number_word = get_number_word(current_number)
print(current_number, number_word)
Explanation: It's sometimes helpful to see programs that don't quite work as they are supposed to, and then see how those programs can be improved. In this case, there are no Python errors; all of the code has proper Python syntax. But there is a logical error in the first line of the output.
We want to either not include 0 in the range we send to the function, or have the function return something other than None when it receives a value that it doesn't know. Let's teach our function the word 'zero', but let's also add an else clause that returns a more informative message for numbers that are not in the if-chain.
End of explanation
###highlight=[16,17,18]
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 0:
return 'zero'
elif number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
else:
return "I'm sorry, I don't know that number."
# This line will never execute, because the function has already
# returned a value and stopped executing.
print("This message will never be printed.")
# Let's try out our function.
for current_number in range(0,6):
number_word = get_number_word(current_number)
print(current_number, number_word)
Explanation: If you use a return statement in one of your functions, keep in mind that the function stops executing as soon as it hits a return statement. For example, we can add a line to the get_number_word() function that will never execute, because it comes after the function has returned a value:
End of explanation |
7,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'orb' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Dataset Parameters
Let's add an orbit dataset to the Bundle (see also the orb API docs).
Step3: compute_times / compute_phases
See the Compute Times & Phases tutorial.
Step4: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to dynamics and the ORB dataset
Step5: dynamics_method
Step6: The dynamics_method parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices
Step7: The ltte parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.
See the ltte tutorial as well as the Systemic Velocity Example Script for an example of how 'ltte' and 'vgamma' (systemic velocity) interplay.
Synthetics
Step8: Plotting
By default, orb datasets plot as 'vs' vx 'us' (plane of sky coordinates). Notice the y-scale here with inclination set to 90.
Step9: As always, you have access to any of the arrays for either axes, so if you want to plot 'vus' vs 'times'
Step10: We can also plot the orbit in 3D. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: 'orb' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
b.add_dataset('orb')
print(b.get_dataset(kind='orb'))
Explanation: Dataset Parameters
Let's add an orbit dataset to the Bundle (see also the orb API docs).
End of explanation
print(b.get_parameter(qualifier='compute_times'))
print(b.get_parameter(qualifier='compute_phases', context='dataset'))
print(b.get_parameter(qualifier='phases_t0'))
Explanation: compute_times / compute_phases
See the Compute Times & Phases tutorial.
End of explanation
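As background on how times relate to phases, here is a rough plain-numpy sketch of phase-folding relative to a reference epoch t0 (illustrative only; phoebe performs the actual conversion internally, and its exact conventions may differ):

```python
import numpy as np

# Illustrative phase-folding, independent of the phoebe API.
# The times, period, and t0 below are made-up values.
def fold(times, period, t0=0.0):
    return ((np.asarray(times) - t0) / period) % 1.0

print(fold([0.0, 0.25, 1.5, 3.75], period=1.0))  # [0.   0.25 0.5  0.75]
```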
print(b.get_compute())
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to dynamics and the ORB dataset
End of explanation
print(b.get_parameter(qualifier='dynamics_method'))
Explanation: dynamics_method
End of explanation
print(b.get_parameter(qualifier='ltte'))
Explanation: The dynamics_method parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices:
* keplerian (default): Use Kepler's laws to determine positions. If the system has more than two components, then each orbit is treated independently and nested (i.e., there are no dynamical/tidal effects - the inner orbit is treated as a single point mass in the outer orbit).
* nbody: Use an n-body integrator to determine positions. Here the initial conditions (positions and velocities) are still defined by the orbit's Keplerian parameters at 't0@system'. Closed orbits and orbital stability are not guaranteed and ejections can occur.
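For the keplerian case it can help to recall the scaling involved: the orbital period follows from the semi-major axis and total mass via Kepler's third law, P = 2*pi*sqrt(a^3 / (G*M)). A quick plain-numpy sanity check, independent of phoebe (the constants are standard values and the Earth-like orbit is just an illustration):

```python
import numpy as np

G = 6.67430e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
AU = 1.495978707e11    # astronomical unit, m

def kepler_period(a_m, m_total_kg):
    """Orbital period in seconds from Kepler's third law."""
    return 2.0 * np.pi * np.sqrt(a_m**3 / (G * m_total_kg))

P_days = kepler_period(AU, M_sun) / 86400.0
print("P = %.1f days" % P_days)   # close to 365 days for an Earth-like orbit
```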
ltte
End of explanation
b.set_value_all('compute_times', phoebe.linspace(0,3,201))
b.run_compute()
print(b.filter(context='model').twigs)
print(b.get_parameter(qualifier='times', component='primary', kind='orb', context='model'))
print(b.get_parameter(qualifier='us', component='primary', kind='orb', context='model'))
print(b.get_parameter(qualifier='vus', component='primary', kind='orb', context='model'))
Explanation: The ltte parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.
See the ltte tutorial as well as the Systemic Velocity Example Script for an example of how 'ltte' and 'vgamma' (systemic velocity) interplay.
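For scale, the maximum Roemer delay is roughly the light-crossing time of the orbit. A back-of-the-envelope sketch in plain Python (not part of the phoebe API; the 1 AU semi-major axis is an arbitrary illustrative value):

```python
# Order-of-magnitude Roemer delay: the light-crossing time of one
# semi-major axis.
AU = 1.495978707e11     # meters
c = 2.99792458e8        # m/s
delay = AU / c
print("max delay ~ %.0f s" % delay)   # about 499 s for a 1 AU orbit
```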
Synthetics
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Plotting
By default, orb datasets plot as 'vs' vs. 'us' (plane-of-sky coordinates). Notice the y-scale here with inclination set to 90.
End of explanation
afig, mplfig = b.plot(x='times', y='vus', show=True)
Explanation: As always, you have access to any of the arrays for either axes, so if you want to plot 'vus' vs 'times'
End of explanation
afig, mplfig = b.plot(projection='3d', xlim=(-4,4), ylim=(-4,4), zlim=(-4,4), show=True)
Explanation: We can also plot the orbit in 3D.
End of explanation |
7,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo Simulations (Section 8.3 in Text)
Monte Carlo simulations are used to calculate probabilities using random numbers when the probabilities are difficult (or impossible) to calculate by hand or have to be done many times.
Basic idea
Step1: Q. How would you estimate the time it takes for a photon to travel from the core to the surface?
Step2: This is a 3D random walk (if you take ASTR3730 you'll see a derivation of this).
http
Step3: Let's compute tau by using a Monte Carlo experiment.
Area of enclosed circe in a square vs that area of the square is
Step4: Q. Does "positions" contain the final position of each particle, or the entire trajectory of each particle?
Step5: Q. How can we get the average position of all particles?
Step6: Q. And the average traveled distance for all particles?
Step7: $\sqrt{\frac{4N_s}{\tau}}$ is the theoretical expectation value for the separation for a large number of tests run, in 1D. (different values for higher dimensions).
Step8: Q. What should this histogram look like? Should it be centrally peaked? If so, at what value? How wide?
Step9: VECTORIZATION of Implementation
NOTE the difference in arguments!
Inclusive right limit
random.randint(1, 2)
Exclusive right limit
np.random.randint(1, 3)
Step10: Files
Basic file operation works via the open() function
Step11: Use so called context manager | Python Code:
from IPython.display import Image
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/b/b4/The_Sun_by_the_Atmospheric_Imaging_Assembly_of_NASA%27s_Solar_Dynamics_Observatory_-_20100819.jpg/251px-The_Sun_by_the_Atmospheric_Imaging_Assembly_of_NASA%27s_Solar_Dynamics_Observatory_-_20100819.jpg')
Explanation: Monte Carlo Simulations (Section 8.3 in Text)
Monte Carlo simulations are used to calculate probabilities using random numbers when the probabilities are difficult (or impossible) to calculate by hand or have to be done many times.
Basic idea: Perform N experiments (numerically), where the outcome is random (or at least unknown), with some event occurring M times, or with a probability M/N each time. If the experiment is performed many times, M/N yields an estimate of the probability.
There are many examples in astrophysics:
Scattering of a photon by atoms or molecules
Detection of a weak signal (say a galaxy) in the
presence of noise (say sky noise and electronics noise)
Observation of a particular power law in galaxy number
counts given a model power law in the presence of noise
Markov Chain Monte Carlo for determining models
that best describe data
Q. Have you encountered any other examples?
End of explanation
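As a minimal concrete illustration of the M/N idea (an invented example, not one of the astrophysics cases above): estimate the probability that two fair dice sum to 7, which is exactly 1/6.

```python
import numpy as np

np.random.seed(0)                  # reproducible experiment
N = 200000                         # number of trials
d1 = np.random.randint(1, 7, N)    # upper limit is exclusive: values 1..6
d2 = np.random.randint(1, 7, N)
M = np.sum(d1 + d2 == 7)           # number of "successes"
print("estimate:", M / float(N), " exact:", 1.0 / 6)
```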
Image(url='http://upload.wikimedia.org/wikipedia/commons/thumb/d/d4/Sun_poster.svg/500px-Sun_poster.svg.png')
Explanation: Q. How would you estimate the time it takes for a photon to travel from the core to the surface?
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pylab as pl
Explanation: This is a 3D random walk (if you take ASTR3730 you'll see a derivation of this).
http://en.wikipedia.org/wiki/Random_walk has a good intro for some of the maths involved.
Random Walk in One Dimension
For simplicity, we will assume the same distance between scatterings
(and set that distance, or the "mean free path", equal to 1).
Q. What do you think the typical distance a particle will have traveled (in net distance from its starting point) after a large number of steps ($N_s$)?
In fact, for large $N_s$ the root-mean-square distance traveled will be $\sqrt{N_s}$ (the mean absolute distance is slightly smaller, $\sqrt{2N_s/\pi}$).
Q. Put another way, on average, what is the final position of a particle undergoing a 1D random walk with respect to its starting point?
End of explanation
N = 10000
radius = 250
tau = 2*np.pi
x = np.random.randint(-radius, radius, N)
y = np.random.randint(-radius, radius, N)
distances = np.sqrt(x**2 + y**2)
phi = np.arange(0, tau, 0.01)
pl.figure(figsize=(5,5))
pl.scatter(x[np.argwhere(distances <= radius)],
y[np.argwhere(distances <= radius)],
color='k', s=1);
pl.scatter(x[np.argwhere(distances > radius)],
y[np.argwhere(distances > radius)],
color='r', s=1);
pl.plot(radius * np.cos(phi), radius * np.sin(phi), color='g', linewidth=2);
print("tau is", 8.0 * np.sum(distances < radius) / float(N))
tauArray = np.zeros(N)
for num in np.arange(N):
xArray = np.random.randint(-radius, radius, num + 1)
yArray = np.random.randint(-radius, radius, num + 1)
tauArray[num] = 8.0 * np.sum(np.sqrt(xArray**2 + yArray**2) < radius) / float(num + 1)
pl.plot(np.arange(N), tauArray);
pl.axhline(y=tau, lw=2, color='r')
pl.ylim(tau - 0.5, tau + 0.5);
import random
# Number of particles
Np = 100
# Number of steps (per particle)
Ns = 50000
# All particles start at x = 0
positions = np.zeros(Np)
distances = np.zeros(Np)
# A (randomly drawn) 1 will move the particle to the left
# and a 2 will move it to the right
Left = 1; Right = 2
# Step Ns times for each particle "p"
for p in range(Np):
for step in range(Ns):
# Integer random number generator
direction = random.randint(1, 2)
# returns a random integer x such that 1 <= x <= 2
# (effectively a coin-flip here)
if direction == Left:
positions[p] -= 1 # Move left
elif direction == Right:
positions[p] += 1 # Move right
Explanation: Let's compute tau by using a Monte Carlo experiment.
The area of a circle inscribed in a square, relative to the area of the square, is:
$$ \frac{\frac{1}{2}\tau r^2}{\left(2r\right)^2} = \frac{\tau}{8}$$
This means that if we could somehow estimate the actual ratio of these areas, we could multiply it by 8 to get the value of $\tau$!
End of explanation
print("Positions")
print(positions)
Explanation: Q. Does "positions" contain the final position of each particle, or the entire trajectory of each particle?
End of explanation
print("Average Position", positions.mean())
Explanation: Q. How can we get the average position of all particles?
End of explanation
print("Avg Separation", abs(positions).mean())
print("Expectation %g" % np.sqrt(4*Ns/tau))
Explanation: Q. And the average traveled distance for all particles?
End of explanation
# Standard deviation
positions.std()
Explanation: $\sqrt{\frac{4N_s}{\tau}}$ is the theoretical expectation value for the separation for a large number of tests run, in 1D. (different values for higher dimensions).
End of explanation
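We can check this expectation value directly with a quick vectorized simulation (a sketch; the particle and step counts are arbitrary):

```python
import numpy as np

np.random.seed(1)
Np, Ns = 4000, 1000                                       # arbitrary sizes
steps = 2 * np.random.randint(0, 2, size=(Ns, Np)) - 1    # each step is -1 or +1
final = steps.sum(axis=0)                                 # final 1-D positions
measured = np.abs(final).mean()
expected = np.sqrt(2.0 * Ns / np.pi)                      # = sqrt(4*Ns/tau)
print(measured, expected)
```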
n, bins, patches = pl.hist(positions, 20, facecolor='g')
pl.xlabel('Final Position')
pl.ylabel('Frequency')
pl.title('1-D Random Walk Distance')
pl.grid(True)
Explanation: Q. What should this histogram look like? Should it be centrally peaked? If so, at what value? How wide?
End of explanation
Np = 100 # Number of particles
Ns = 50000 # Number of steps (per particle)
# Draw the move random number steps all at once:
moves = np.random.randint(1, 3, size=Np*Ns)
# FOR np.random.randint THIS RUNS FROM 1 TO 2, INTEGERS ONLY
# Q. What's happening here?
moves = 2 * moves - 3
# Create a 2-D array of moves so that moves[i, j]
# is the "i"th step of particle j:
moves.shape = (Ns, Np)
# Create an array of initial starting positions for each particle
positions = np.zeros(Np)
for step in range(Ns):
# Select the moves values for the current step:
positions += moves[step, :]
# Updates positions for all particles in this step
# This is vectorized: I'm not looping over the particles
# Histogram the results
n, bins, patches = pl.hist(positions, bins=np.arange(-500, 500, 50), facecolor='b')
pl.xlabel('Final Position')
pl.ylabel('Frequency')
pl.title('1-D Random Walk Distance')
pl.grid(True)
print("Average Position", np.mean(positions))
print("Avg Separation ", np.mean(abs(positions)))
print("Expectation %g" % np.sqrt(4*Ns/tau))
Explanation: VECTORIZATION of Implementation
NOTE the difference in arguments!
Inclusive right limit
random.randint(1, 2)
Exclusive right limit
np.random.randint(1, 3)
End of explanation
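A quick empirical check of the two conventions:

```python
import random
import numpy as np

stdlib_vals = {random.randint(1, 2) for _ in range(1000)}    # inclusive: {1, 2}
numpy_vals = set(np.random.randint(1, 3, size=1000))         # high exclusive: {1, 2}
print(stdlib_vals, numpy_vals)                                # 3 is never drawn by numpy
```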
f = open('afile')  # raises FileNotFoundError if 'afile' does not exist yet
f = open('afile', 'w')
type(f)
f.write('testing\n')
f.close()
cat afile
f = open('afile') # default mode: read
f.read()
f.write('more tests\n')  # raises io.UnsupportedOperation: the file was opened read-only
f.close()
f = open('afile', 'w')
f.read()  # raises io.UnsupportedOperation: not readable (the file was opened for writing)
f.close()
f = open('afile', 'a') # append!
cat afile
Explanation: Files
Basic file operations work via the open() function:
End of explanation
with open('afile', 'w') as f:
f.write('testing\nmore tests\n')
cat afile
with open('afile', 'r') as f:
data = f.read()
data
with open('afile', 'r') as f:
data = f.readlines()
data
s = 'mystring '
s.strip('m ')
data = [item.strip() for item in data]
data
row = ' '.join(['1', '2', '3'])
with open('afile', 'a') as f:  # reopen: the previous handle was closed when its with-block exited
    f.write(row + '\n')
Explanation: Use the so-called context manager:
End of explanation |
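Putting the pieces together: write with a context manager, read back, and clean up the lines. This sketch uses tempfile so it does not depend on 'afile' already existing.

```python
import os
import tempfile

# Round trip: write lines with a context manager, read them back, strip newlines.
path = os.path.join(tempfile.mkdtemp(), 'afile')
with open(path, 'w') as f:
    f.write('testing\nmore tests\n')
    f.write(' '.join(['1', '2', '3']) + '\n')

with open(path) as f:
    data = [line.strip() for line in f]

print(data)  # ['testing', 'more tests', '1 2 3']
```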
7,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Boilerplate for graph visualization
Step5: Load the data
Run 00_download_data.ipynb if you haven't already
Step6: Create a simple classifier with low-level TF Ops
Step7: We can run this graph by feeding in batches of examples using a feed_dict. The keys of the feed_dict are placeholders we've defined previously.
The first argument of session.run is the tensor that we're computing. Only parts of the graph required to produce this value will be executed.
Step8: No learning yet but we get the losses per batch.
We need to add an optimizer to the graph.
Step9: Loss going down, Accuracy going up! \o/
Notice how batch loss differs between batches.
Model wrapped in a custom estimator
In TensorFlow, we can make it easier to experiment with different models when we separately define a model_fn and an input_fn.
Step10: Custom model, simplified with tf.layers
Instead of doing the matrix multiplications and everything ourselves, we can use tf.layers to simplify the definition.
Step11: Model using canned estimators
Instead of defining our own DNN classifier, TensorFlow supplies a number of canned estimators that can save a lot of work.
Step12: Using Convolutions | Python Code:
# This is for graph visualization.
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add()
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = "<stripped %d bytes>" % size
    return strip_def

def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph' + str(np.random.rand()))

    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
    display(HTML(iframe))
Explanation: Boilerplate for graph visualization
End of explanation
DATA_DIR = '../data/'
data_filename = os.path.join(DATA_DIR, "zoo.npz")
data = np.load(open(data_filename))
train_data = data['arr_0']
train_labels = data['arr_1']
test_data = data['arr_2']
test_labels = data['arr_3']
del data
print("Data shapes: ", test_data.shape, test_labels.shape, train_data.shape, train_labels.shape)
Explanation: Load the data
Run 00_download_data.ipynb if you haven't already
End of explanation
tf.reset_default_graph()
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 10 classes
batch_size = 32
hidden1_units = 128
data_batch = tf.placeholder("float", shape=[None, input_dimension], name="data")
label_batch = tf.placeholder("float", shape=[None, output_dimension], name="labels")
weights_1 = tf.Variable(
tf.truncated_normal(
[input_dimension, hidden1_units],
stddev=1.0 / np.sqrt(float(input_dimension))),
name='weights_1')
# Task: Add Bias to first layer
# Task: Use Cross-Entropy instead of Squared Loss
# SOLUTION: Create biases variable.
biases_1 = tf.Variable(
tf.truncated_normal(
[hidden1_units],
stddev=1.0 / np.sqrt(float(hidden1_units))),
name='biases_1')
weights_2 = tf.Variable(
tf.truncated_normal(
[hidden1_units, output_dimension],
stddev=1.0 / np.sqrt(float(hidden1_units))),
name='weights_2')
# SOLUTION: Add the bias term to the first layer
wx_b = tf.add(tf.matmul(data_batch, weights_1), biases_1)
hidden_activations = tf.nn.relu(wx_b)
output_activations = tf.nn.tanh(tf.matmul(hidden_activations, weights_2))
# SOLUTION: Replace the l2 loss with softmax cross entropy.
with tf.name_scope("loss"):
# loss = tf.nn.l2_loss(label_batch - output_activations)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels=label_batch,
logits=output_activations))
show_graph(tf.get_default_graph().as_graph_def())
Explanation: Create a simple classifier with low-level TF Ops
End of explanation
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
for i in range(1000):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx + batch_size]
batch_loss = sess.run(
loss,
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:]
})
if (i + 1) % 100 == 0:
print("Loss at iteration {}: {}".format(i+1, batch_loss))
Explanation: We can run this graph by feeding in batches of examples using a feed_dict. The keys of the feed_dict are placeholders we've defined previously.
The first argument of session.run is the tensor that we're computing. Only parts of the graph required to produce this value will be executed.
End of explanation
# Task: Replace GradientDescentOptimizer with AdagradOptimizer and a 0.1 learning rate.
# learning_rate = 0.005
# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# SOLUTION: Replace GradientDescentOptimizer
learning_rate = 0.1
updates = tf.train.AdagradOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
n_epochs = 10 # how often do to go through the training data
max_steps = train_data.shape[0]*n_epochs // batch_size
for i in range(max_steps):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]
batch_loss, _ = sess.run(
[loss, updates],
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:]
})
if i % 200 == 0 or i == max_steps - 1:
random_indices = np.random.permutation(train_data.shape[0])
print("Batch-Loss at iteration {}: {}".format(i, batch_loss))
test_predictions = sess.run(
output_activations,
feed_dict = {
data_batch : test_data,
label_batch : test_labels
})
wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)
print("Accuracy on test: {}%".format(100*np.mean(wins)))
Explanation: No learning yet but we get the losses per batch.
We need to add an optimizer to the graph.
End of explanation
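What an optimizer update does, in miniature, is repeatedly nudge each parameter against the gradient of the loss. Here is a toy plain-Python version for a single weight (illustrative only; TensorFlow's optimizers add the equivalent update ops to the graph for us):

```python
# Toy gradient descent: minimize loss(w) = (w - 3)^2, whose derivative
# with respect to w is 2*(w - 3).
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2.0 * (w - 3.0)
    w -= learning_rate * grad
print(w)  # converges toward 3.0
```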
tf.reset_default_graph()
# Model parameters.
batch_size = 32
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
n_epochs = 10 # how often do to go through the training data
def input_fn(data, labels):
input_images = tf.constant(data, shape=data.shape, verify_shape=True, dtype=tf.float32)
input_labels = tf.constant(labels, shape=labels.shape, verify_shape=True, dtype=tf.float32)
image, label = tf.train.slice_input_producer(
[input_images, input_labels],
num_epochs=n_epochs)
dataset_dict = dict(images=image, labels=label)
batch_dict = tf.train.batch(
dataset_dict, batch_size, allow_smaller_final_batch=True)
batch_labels = batch_dict.pop('labels')
return batch_dict, batch_labels
def model_fn(features, targets, mode, params):
# 1. Configure the model via TensorFlow operations (same as above)
weights_1 = tf.Variable(
tf.truncated_normal(
[input_dimension, hidden1_units],
stddev=1.0 / np.sqrt(float(input_dimension))))
weights_2 = tf.Variable(
tf.truncated_normal(
[hidden1_units, output_dimension],
stddev=1.0 / np.sqrt(float(hidden1_units))))
hidden_activations = tf.nn.relu(tf.matmul(features['images'], weights_1))
output_activations = tf.matmul(hidden_activations, weights_2)
# 2. Define the loss function for training/evaluation
loss = tf.reduce_mean(tf.nn.l2_loss(targets - output_activations))
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer="SGD")
# 4. Generate predictions
predictions_dict = {
"classes": tf.argmax(input=output_activations, axis=1),
"probabilities": tf.nn.softmax(output_activations, name="softmax_tensor"),
"logits": output_activations,
}
# Optional: Define eval metric ops; here we add an accuracy metric.
is_correct = tf.equal(tf.argmax(input=targets, axis=1),
tf.argmax(input=output_activations, axis=1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))
eval_metric_ops = { "accuracy": accuracy}
# 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object
return tf.contrib.learn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
custom_model = tf.contrib.learn.Estimator(model_fn=model_fn)
# Train and evaluate the model.
def evaluate_model(model, input_fn):
for i in range(6):
max_steps = train_data.shape[0]*n_epochs // batch_size
model.fit(input_fn=lambda: input_fn(train_data, train_labels), steps=max_steps)
print(model.evaluate(input_fn=lambda: input_fn(test_data, test_labels),
steps=150))
evaluate_model(custom_model, input_fn)
Explanation: Loss going down, Accuracy going up! \o/
Notice how batch loss differs between batches.
Model wrapped in a custom estimator
In TensorFlow, we can make it easier to experiment with different models when we separately define a model_fn and an input_fn.
End of explanation
tf.reset_default_graph()
# Model parameters.
batch_size = 32
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
def layers_custom_model_fn(features, targets, mode, params):
# 1. Configure the model via TensorFlow operations (using tf.layers). Note how
# much simpler this is compared to defining the weight matrices and matrix
# multiplications by hand.
hidden_layer = tf.layers.dense(inputs=features['images'], units=hidden1_units, activation=tf.nn.relu)
output_layer = tf.layers.dense(inputs=hidden_layer, units=output_dimension, activation=tf.nn.relu)
# 2. Define the loss function for training/evaluation
loss = tf.losses.mean_squared_error(labels=targets, predictions=output_layer)
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer="SGD")
# 4. Generate predictions
predictions_dict = {
"classes": tf.argmax(input=output_layer, axis=1),
"probabilities": tf.nn.softmax(output_layer, name="softmax_tensor"),
"logits": output_layer,
}
# Define eval metric ops; we can also use a pre-defined function here.
accuracy = tf.metrics.accuracy(
labels=tf.argmax(input=targets, axis=1),
predictions=tf.argmax(input=output_layer, axis=1))
eval_metric_ops = {"accuracy": accuracy}
# 5. Return predictions/loss/train_op/eval_metric_ops in ModelFnOps object
return tf.contrib.learn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
layers_custom_model = tf.contrib.learn.Estimator(
model_fn=layers_custom_model_fn)
# Train and evaluate the model.
evaluate_model(layers_custom_model, input_fn)
Explanation: Custom model, simplified with tf.layers
Instead of doing the matrix multiplications and everything ourselves, we can use tf.layers to simplify the definition.
End of explanation
tf.reset_default_graph()
# Model parameters.
hidden1_units = 128
learning_rate = 0.005
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
# Our model can be defined using just three simple lines...
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
images_column = tf.contrib.layers.real_valued_column("images")
# Task: Use the DNNClassifier Estimator to create the model in 1 line.
# SOLUTION: DNNClassifier can be used to efficiently (in lines of code) create the model.
canned_model = tf.contrib.learn.DNNClassifier(
feature_columns=[images_column],
hidden_units=[hidden1_units],
n_classes=output_dimension,
activation_fn=tf.nn.relu,
optimizer=optimizer)
# Potential exercises: play with model parameters, e.g. add dropout
# We need to change the input_fn so that it returns integers representing the classes instead of one-hot vectors.
def class_input_fn(data, labels):
input_images = tf.constant(
data, shape=data.shape, verify_shape=True, dtype=tf.float32)
# The next two lines are different.
class_labels = np.argmax(labels, axis=1)
input_labels = tf.constant(
class_labels, shape=class_labels.shape, verify_shape=True, dtype=tf.int32)
image, label = tf.train.slice_input_producer(
[input_images, input_labels], num_epochs=n_epochs)
dataset_dict = dict(images=image, labels=label)
batch_dict = tf.train.batch(
dataset_dict, batch_size, allow_smaller_final_batch=True)
batch_labels = batch_dict.pop('labels')
return batch_dict, batch_labels
# Train and evaluate the model.
evaluate_model(canned_model, class_input_fn)
Explanation: Model using canned estimators
Instead of defining our own DNN classifier, TensorFlow supplies a number of canned estimators that can save a lot of work.
End of explanation
import tensorflow as tf
tf.reset_default_graph()
input_dimension = train_data.shape[1] # 784 = 28*28 pixels
output_dimension = train_labels.shape[1] # 6 classes
batch_size = 32
data_batch = tf.placeholder("float", shape=[None, input_dimension])
label_batch = tf.placeholder("float", shape=[None, output_dimension])
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
# Task: convert the batch_size x num_pixels (784) input to batch_size, height (28), width(28), channels
# SOLUTION: reshape the input. We only have a single color channel.
image_batch = tf.reshape(data_batch, [-1, 28, 28, 1])
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(image_batch, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
W_conv2 = weight_variable([5, 5, 32, 48])
b_conv2 = bias_variable([48])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
W_fc1 = weight_variable([7 * 7 * 48, 256])
b_fc1 = bias_variable([256])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*48])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Task: add dropout to fully connected layer. Add a variable to turn dropout off in eval.
# SOLUTION: add placeholder variable to deactivate dropout (keep_prob=1.0) in eval.
keep_prob = tf.placeholder(tf.float32)
# SOLUTION: add dropout to fully connected layer.
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
W_fc2 = weight_variable([256, output_dimension])
b_fc2 = bias_variable([output_dimension])
output_activations = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=label_batch,
logits=output_activations))
# Solution: Switch from GradientDescentOptimizer to AdamOptimizer
# learning_rate = 0.001
# updates = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
learning_rate = 0.001
updates = tf.train.AdamOptimizer(learning_rate).minimize(loss)
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
random_indices = np.random.permutation(train_data.shape[0])
n_epochs = 5 # how often to go through the training data
max_steps = train_data.shape[0]*n_epochs // batch_size
for i in range(max_steps):
batch_start_idx = (i % (train_data.shape[0] // batch_size)) * batch_size
batch_indices = random_indices[batch_start_idx:batch_start_idx+batch_size]
batch_loss, _ = sess.run(
[loss, updates],
feed_dict = {
data_batch : train_data[batch_indices,:],
label_batch : train_labels[batch_indices,:],
# SOLUTION: Dropout active during training
keep_prob : 0.5})
if i % 100 == 0 or i == max_steps - 1:
random_indices = np.random.permutation(train_data.shape[0])
print("Batch-Loss at iteration {}/{}: {}".format(i, max_steps-1, batch_loss))
test_predictions = sess.run(
output_activations,
feed_dict = {
data_batch : test_data,
label_batch : test_labels,
# SOLUTION: No dropout during eval
keep_prob : 1.0
})
wins = np.argmax(test_predictions, axis=1) == np.argmax(test_labels, axis=1)
print("Accuracy on test: {}%".format(100*np.mean(wins)))
Explanation: Using Convolutions
End of explanation |
7,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow
Step1: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
Step2: Writing and running programs in TensorFlow has the following steps
Step3: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step4: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
Step6: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step15: Expected Output
Step16: Change the index below and run the cell to visualize some examples in the dataset.
Step17: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
Step19: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise
Step21: Expected Output
Step23: Expected Output
Step25: Expected Output
Step27: Expected Output
Step28: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
Step29: Expected Output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
Explanation: TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
Initialize variables
Start your own session
Train algorithms
Implement a Neural Network
Programing frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
1 - Exploring the Tensorflow Library
To start, you will import the library:
End of explanation
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
Explanation: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
End of explanation
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
Explanation: Writing and running programs in TensorFlow has the following steps:
Create Tensors (variables) that are not yet executed/evaluated.
Write operations between those Tensors.
Initialize your Tensors.
Create a Session.
Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.
Now let us look at an easy example. Run the cell below:
End of explanation
sess = tf.Session()
print(sess.run(c))
Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
End of explanation
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
Explanation: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
End of explanation
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = np.random.randn(3, 1)
W = np.random.randn(4, 3)
b = np.random.randn(4, 1)
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
Explanation: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector.
Exercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
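As a quick shape sanity check before the graded TensorFlow version, here is a pure-NumPy sketch of the same computation (an illustration only, using the same seed and draw order as the exercise):

```python
import numpy as np

# Pure-NumPy sketch of Y = WX + b with the shapes from the exercise:
# W is (4, 3), X is (3, 1), b is (4, 1), so Y comes out (4, 1).
np.random.seed(1)
X = np.random.randn(3, 1)
W = np.random.randn(4, 3)
b = np.random.randn(4, 1)
Y = W @ X + b
print(Y.shape)  # (4, 1)
```

Because the draw order matches the solution below, this should reproduce the same numbers as the expected output of `linear_function()`.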
End of explanation
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
result -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name="x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict = {x: z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
Explanation: Expected Output :
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.
Exercise : Implement the sigmoid function below. You should use the following:
tf.placeholder(tf.float32, name = "...")
tf.sigmoid(...)
sess.run(..., feed_dict = {x: z})
Note that there are two typical ways to create and use sessions in tensorflow:
Method 1:
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
Method 2:
```python
with tf.Session() as sess:
# run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
# This takes care of closing the session for you :)
```
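As a sanity check on the expected outputs below, here is a NumPy sketch of the same function (independent of the TensorFlow exercise):

```python
import numpy as np

# NumPy sketch of what tf.sigmoid computes: sigmoid(z) = 1 / (1 + exp(-z)).
def np_sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(np_sigmoid(0))   # 0.5
print(np_sigmoid(12))  # approximately 0.999994
```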
End of explanation
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name="z")
y = tf.placeholder(tf.float32, name="y")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict={z: logits, y: labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
cost = cost(logits, np.array([0, 0, 1, 1]))
print ("cost = " + str(cost))
Explanation: Expected Output :
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
Exercise: Implement the cross entropy loss. The function you will use is:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)
Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
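To see what the one-liner computes per example, here is a NumPy sketch of the quantity inside the sum (an illustration only; mathematically this matches the element-wise value `tf.nn.sigmoid_cross_entropy_with_logits` returns before any averaging):

```python
import numpy as np

# NumPy sketch of the per-example sigmoid cross-entropy:
#   -(y * log(sigma(z)) + (1 - y) * log(1 - sigma(z)))
def np_sigmoid_xent(z, y):
    a = 1.0 / (1.0 + np.exp(-z))  # sigma(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

z = np.array([1.0, -1.0])
y = np.array([1.0, 0.0])
print(np_sigmoid_xent(z, y))  # both entries equal log(1 + exp(-1))
```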
End of explanation
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had label i, then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C, name='C')
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C=4)
print ("one_hot = " + str(one_hot))
Explanation: Expected Output :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
tf.one_hot(labels, depth, axis)
Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.
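For comparison, here is a sketch of those "few lines" in NumPy (an illustration only, with classes along the rows and examples along the columns, matching the expected output below):

```python
import numpy as np

# NumPy sketch of the one-hot conversion: entry (i, j) is 1 iff example j has label i.
def np_one_hot(labels, C):
    one_hot = np.zeros((C, labels.size))
    one_hot[labels, np.arange(labels.size)] = 1.0
    return one_hot

print(np_one_hot(np.array([1, 2, 3, 0, 2, 1]), 4))
```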
End of explanation
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
Explanation: Expected Output:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
Exercise: Implement the function below to take in a shape and to return an array (of the shape's dimension of ones).
tf.ones(shape)
End of explanation
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
Create the computation graph
Run the graph
Let's delve into the problem you'd like to solve!
2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
End of explanation
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: Change the index below and run the cell to visualize some examples in the dataset.
End of explanation
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten / 255.
X_test = X_test_flatten / 255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print("number of training examples = " + str(X_train.shape[1]))
print("number of test examples = " + str(X_test.shape[1]))
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))
Explanation: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible on the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, [n_x, None], name="X")
Y = tf.placeholder(tf.float32, [n_y, None], name="Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print("X = " + str(X))
print("Y = " + str(Y))
Explanation: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
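The flattening step above can be sketched on a tiny fake batch (shapes only — a hypothetical 5-image batch, not the SIGNS data):

```python
import numpy as np

# Each (64, 64, 3) image becomes one column of length 64*64*3 = 12288,
# so a batch of m images goes from (m, 64, 64, 3) to (12288, m).
fake_batch = np.zeros((5, 64, 64, 3))
flat = fake_batch.reshape(fake_batch.shape[0], -1).T
print(flat.shape)  # (12288, 5)
```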
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise: Implement the function below to create the placeholders in tensorflow.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25, 12288], initializer = tf.contrib.layers.xavier_initializer(seed=1))
b1 = tf.get_variable("b1", [25, 1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12, 25], initializer = tf.contrib.layers.xavier_initializer(seed=1))
b2 = tf.get_variable("b2", [12, 1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6, 12], initializer = tf.contrib.layers.xavier_initializer(seed=1))
b3 = tf.get_variable("b3", [6, 1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
Exercise: Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
```python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
```
Please use seed = 1 to make sure your results match ours.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3) # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
tf.add(...,...) to do an addition
tf.matmul(...,...) to do a matrix multiplication
tf.nn.relu(...) to apply the ReLU activation
Question: Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need a3!
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost -- Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
Explanation: Expected Output:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
2.4 Compute cost
As seen before, it is very easy to compute the cost using:
```python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
```
Question: Implement the cost function below.
- It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, tf.reduce_mean basically does the summation over the examples.
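A NumPy sketch of what that one-liner computes (an illustration only): a softmax over the class axis, a cross-entropy per example, then the mean over examples.

```python
import numpy as np

def np_softmax_xent_mean(logits, labels):
    # logits, labels: (num_examples, num_classes), labels one-hot
    shifted = logits - logits.max(axis=1, keepdims=True)         # for numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1).mean()              # mean over examples

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
print(np_softmax_xent_mean(logits, labels))
```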
End of explanation
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test labels, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost"; the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('epochs (per five)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All of the backpropagation and the parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
To make the optimization you would do:
python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs.
Note When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable).
2.6 - Building the model
Now, you will bring it all together!
Exercise: Implement the model. You will be calling the functions you had previously implemented.
End of explanation
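As an aside, the random_mini_batches helper used above comes from the course utilities and is not defined in this notebook. Its core idea — shuffle the sample indices, then slice them into fixed-size chunks — can be sketched in plain Python (the helper name below is illustrative):

```python
import random

def mini_batch_indices(m, batch_size, seed):
    """Shuffle the sample indices, then slice them into consecutive batches;
    the last batch is smaller when batch_size does not divide m."""
    rng = random.Random(seed)
    idx = list(range(m))
    rng.shuffle(idx)
    return [idx[k:k + batch_size] for k in range(0, m, batch_size)]

batches = mini_batch_indices(10, 4, seed=0)
# Every index appears exactly once across the batches, split as sizes [4, 4, 2]
```

Reseeding each epoch (as the training loop above does with `seed = seed + 1`) gives a different shuffle every pass through the data.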
parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
End of explanation
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64, 64)).reshape((1, 64 * 64 * 3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
Explanation: Expected Output:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
Insights:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting.
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation |
7,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistical inference
Here we will briefly cover multiple concepts of inferential statistics in an
introductory manner, and demonstrate how to use some MNE statistical functions.
Step1: Hypothesis testing
Null hypothesis
^^^^^^^^^^^^^^^
From Wikipedia <https
Step2: The data averaged over all subjects looks like this
Step3: In this case, a null hypothesis we could test for each voxel is
Step4: "Hat" variance adjustment
The "hat" technique regularizes the variance values used in the t-test
calculation
Step5: Non-parametric tests
Instead of assuming an underlying Gaussian distribution, we could instead
use a non-parametric resampling method. In the case of a paired t-test
between two conditions A and B, which is mathematically equivalent to a
one-sample t-test between the difference in the conditions A-B, under the
null hypothesis we have the principle of exchangeability. This means
that, if the null is true, we can exchange conditions and not change
the distribution of the test statistic.
When using a paired t-test, exchangeability thus means that we can flip the
signs of the difference between A and B. Therefore, we can construct the
null distribution values for each voxel by taking random subsets of
samples (subjects), flipping the sign of their difference, and recording the
absolute value of the resulting statistic (we record the absolute value
because we conduct a two-tailed test). The absolute value of the statistic
evaluated on the veridical data can then be compared to this distribution,
and the p-value is simply the proportion of null distribution values that
are smaller.
<div class="alert alert-danger"><h4>Warning</h4><p>In the case of a true one-sample t-test, i.e. analyzing a single
condition rather than the difference between two conditions,
it is not clear where/how exchangeability applies; see
`this FieldTrip discussion <ft_exch_>`_.</p></div>
In the case where n_permutations is large enough (or "all") so
that the complete set of unique resampling exchanges can be done
(which is $2^{N_{samp}}-1$ for a one-tailed and
$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the
veridical distribution), instead of randomly exchanging conditions
the null is formed from using all possible exchanges. This is known
as a permutation test (or exact test).
Step6: Multiple comparisons
So far, we have done no correction for multiple comparisons. This is
potentially problematic for these data because there are
$40 \cdot 40 = 1600$ tests being performed. If we use a threshold
p < 0.05 for each individual test, we would expect many voxels to be declared
significant even if there were no true effect. In other words, we would make
many type I errors (adapted from here <errors_>_)
Step7: To combat this problem, several methods exist. Typically these
provide control over either one of the following two measures
Step8: False discovery rate (FDR) correction
Typically FDR is performed with the Benjamini-Hochberg procedure, which
is less restrictive than Bonferroni correction for large numbers of
comparisons (fewer type II errors), but provides less strict control of type
I errors.
Step9: Non-parametric resampling test with a maximum statistic
Non-parametric resampling tests can also be used to correct for multiple
comparisons. In its simplest form, we again do permutations using
exchangeability under the null hypothesis, but this time we take the
maximum statistic across all voxels in each permutation to form the
null distribution. The p-value for each voxel from the veridical data
is then given by the proportion of null distribution values
that were smaller.
This method has two important features
Step10: Clustering
Each of the aforementioned multiple comparisons corrections have the
disadvantage of not fully incorporating the correlation structure of the
data, namely that points close to one another (e.g., in space or time) tend
to be correlated. However, by defining the adjacency/neighbor
structure in our data, we can use clustering to compensate.
To use this, we need to rethink our null hypothesis. Instead
of thinking about a null hypothesis about means per voxel (with one
independent test per voxel), we consider a null hypothesis about sizes
of clusters in our data, which could be stated like
Step11: In general the adjacency between voxels can be more complex, such as
those between sensors in 3D space, or time-varying activation at brain
vertices on a cortical surface. MNE provides several convenience functions
for computing adjacency matrices (see the
Statistics API <api_reference_statistics>).
Standard clustering
Here, since our data are on a grid, we can use adjacency=None to
trigger optimized grid-based code, and run the clustering algorithm.
Step12: "Hat" variance adjustment
This method can also be used in this context to correct for small
variances
Step13: Threshold-free cluster enhancement (TFCE)
TFCE eliminates the free parameter initial threshold value that
determines which points are included in clustering by approximating
a continuous integration across possible threshold values with a standard
Riemann sum <https
Step14: We can also combine TFCE and the "hat" correction
Step15: Visualize and compare methods
Let's take a look at these statistics. The top row shows each test statistic,
and the bottom shows p-values for various statistical tests, with the ones
with proper control over FWER or FDR with bold titles. | Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
from functools import partial
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa, analysis:ignore
import mne
from mne.stats import (ttest_1samp_no_p, bonferroni_correction, fdr_correction,
permutation_t_test, permutation_cluster_1samp_test)
print(__doc__)
Explanation: Statistical inference
Here we will briefly cover multiple concepts of inferential statistics in an
introductory manner, and demonstrate how to use some MNE statistical functions.
End of explanation
width = 40
n_subjects = 10
signal_mean = 100
signal_sd = 100
noise_sd = 0.01
gaussian_sd = 5
sigma = 1e-3 # sigma for the "hat" method
n_permutations = 'all' # run an exact test
n_src = width * width
# For each "subject", make a smoothed noisy signal with a centered peak
rng = np.random.RandomState(2)
X = noise_sd * rng.randn(n_subjects, width, width)
# Add a signal at the center
X[:, width // 2, width // 2] = signal_mean + rng.randn(n_subjects) * signal_sd
# Spatially smooth with a 2D Gaussian kernel
size = width // 2 - 1
gaussian = np.exp(-(np.arange(-size, size + 1) ** 2 / float(gaussian_sd ** 2)))
for si in range(X.shape[0]):
for ri in range(X.shape[1]):
X[si, ri, :] = np.convolve(X[si, ri, :], gaussian, 'same')
for ci in range(X.shape[2]):
X[si, :, ci] = np.convolve(X[si, :, ci], gaussian, 'same')
Explanation: Hypothesis testing
Null hypothesis
^^^^^^^^^^^^^^^
From Wikipedia <https://en.wikipedia.org/wiki/Null_hypothesis>__:
In inferential statistics, a general statement or default position that
there is no relationship between two measured phenomena, or no
association among groups.
We typically want to reject a null hypothesis with
some probability (e.g., p < 0.05). This probability is also called the
significance level $\alpha$.
To think about what this means, let's follow the illustrative example from
:footcite:RidgwayEtAl2012 and construct a toy dataset consisting of a
40 x 40 square with a "signal" present in the center with white noise added
and a Gaussian smoothing kernel applied.
End of explanation
fig, ax = plt.subplots()
ax.imshow(X.mean(0), cmap='inferno')
ax.set(xticks=[], yticks=[], title="Data averaged over subjects")
Explanation: The data averaged over all subjects looks like this:
End of explanation
titles = ['t']
out = stats.ttest_1samp(X, 0, axis=0)
ts = [out[0]]
ps = [out[1]]
mccs = [False] # these are not multiple-comparisons corrected
def plot_t_p(t, p, title, mcc, axes=None):
if axes is None:
fig = plt.figure(figsize=(6, 3))
axes = [fig.add_subplot(121, projection='3d'), fig.add_subplot(122)]
show = True
else:
show = False
p_lims = [0.1, 0.001]
t_lims = -stats.distributions.t.ppf(p_lims, n_subjects - 1)
p_lims = [-np.log10(p) for p in p_lims]
# t plot
x, y = np.mgrid[0:width, 0:width]
surf = axes[0].plot_surface(x, y, np.reshape(t, (width, width)),
rstride=1, cstride=1, linewidth=0,
vmin=t_lims[0], vmax=t_lims[1], cmap='viridis')
axes[0].set(xticks=[], yticks=[], zticks=[],
xlim=[0, width - 1], ylim=[0, width - 1])
axes[0].view_init(30, 15)
cbar = plt.colorbar(ax=axes[0], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=surf)
cbar.set_ticks(t_lims)
cbar.set_ticklabels(['%0.1f' % t_lim for t_lim in t_lims])
cbar.set_label('t-value')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if not show:
axes[0].set(title=title)
if mcc:
axes[0].title.set_weight('bold')
# p plot
use_p = -np.log10(np.reshape(np.maximum(p, 1e-5), (width, width)))
img = axes[1].imshow(use_p, cmap='inferno', vmin=p_lims[0], vmax=p_lims[1],
interpolation='nearest')
axes[1].set(xticks=[], yticks=[])
cbar = plt.colorbar(ax=axes[1], shrink=0.75, orientation='horizontal',
fraction=0.1, pad=0.025, mappable=img)
cbar.set_ticks(p_lims)
cbar.set_ticklabels(['%0.1f' % p_lim for p_lim in p_lims])
cbar.set_label(r'$-\log_{10}(p)$')
cbar.ax.get_xaxis().set_label_coords(0.5, -0.3)
if show:
text = fig.suptitle(title)
if mcc:
text.set_weight('bold')
plt.subplots_adjust(0, 0.05, 1, 0.9, wspace=0, hspace=0)
mne.viz.utils.plt_show()
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: In this case, a null hypothesis we could test for each voxel is:
There is no difference between the mean value and zero
($H_0 \colon \mu = 0$).
The alternative hypothesis, then, is that the voxel has a non-zero mean
($H_1 \colon \mu \neq 0$).
This is a two-tailed test because the mean could be less than
or greater than zero, whereas a one-tailed test would test only one of
these possibilities, i.e. $H_1 \colon \mu \geq 0$ or
$H_1 \colon \mu \leq 0$.
<div class="alert alert-info"><h4>Note</h4><p>Here we will refer to each spatial location as a "voxel".
In general, though, it could be any sort of data value,
including cortical vertex at a specific time, pixel in a
time-frequency decomposition, etc.</p></div>
Parametric tests
Let's start with a paired t-test, which is a standard test
for differences in paired samples. Mathematically, it is equivalent
to a 1-sample t-test on the difference between the samples in each condition.
The paired t-test is parametric
because it assumes that the underlying sample distribution is Gaussian, and
is only valid in this case. This happens to be satisfied by our toy dataset,
but is not always satisfied for neuroimaging data.
In the context of our toy dataset, which has many voxels
($40 \cdot 40 = 1600$), applying the paired t-test is called a
mass-univariate approach as it treats each voxel independently.
End of explanation
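The per-voxel statistic that stats.ttest_1samp computes can be written out directly. The sketch below is a plain-Python illustration (using the sample standard deviation with n - 1 degrees of freedom), not SciPy's implementation:

```python
import math

def one_sample_t(x, popmean=0.0):
    """t statistic for testing whether the mean of x differs from popmean."""
    n = len(x)
    mean = sum(x) / n
    # Sample variance with n - 1 degrees of freedom
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    return (mean - popmean) / math.sqrt(var / n)

t = one_sample_t([1.0, 2.0, 3.0, 4.0, 5.0])  # equals 3 * sqrt(2), about 4.243
```

For the paired case, the same function would simply be applied to the per-subject differences A - B.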
ts.append(ttest_1samp_no_p(X, sigma=sigma))
ps.append(stats.distributions.t.sf(np.abs(ts[-1]), len(X) - 1) * 2)
titles.append(r'$\mathrm{t_{hat}}$')
mccs.append(False)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: "Hat" variance adjustment
The "hat" technique regularizes the variance values used in the t-test
calculation :footcite:RidgwayEtAl2012 to compensate for implausibly small
variances.
End of explanation
# Here we have to do a bit of gymnastics to get our function to do
# a permutation test without correcting for multiple comparisons:
X.shape = (n_subjects, n_src) # flatten the array for simplicity
titles.append('Permutation')
ts.append(np.zeros(width * width))
ps.append(np.zeros(width * width))
mccs.append(False)
for ii in range(n_src):
ts[-1][ii], ps[-1][ii] = permutation_t_test(X[:, [ii]], verbose=False)[:2]
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Non-parametric tests
Instead of assuming an underlying Gaussian distribution, we could instead
use a non-parametric resampling method. In the case of a paired t-test
between two conditions A and B, which is mathematically equivalent to a
one-sample t-test between the difference in the conditions A-B, under the
null hypothesis we have the principle of exchangeability. This means
that, if the null is true, we can exchange conditions and not change
the distribution of the test statistic.
When using a paired t-test, exchangeability thus means that we can flip the
signs of the difference between A and B. Therefore, we can construct the
null distribution values for each voxel by taking random subsets of
samples (subjects), flipping the sign of their difference, and recording the
absolute value of the resulting statistic (we record the absolute value
because we conduct a two-tailed test). The absolute value of the statistic
evaluated on the veridical data can then be compared to this distribution,
and the p-value is simply the proportion of null distribution values that
are smaller.
<div class="alert alert-danger"><h4>Warning</h4><p>In the case of a true one-sample t-test, i.e. analyzing a single
condition rather than the difference between two conditions,
it is not clear where/how exchangeability applies; see
`this FieldTrip discussion <ft_exch_>`_.</p></div>
In the case where n_permutations is large enough (or "all") so
that the complete set of unique resampling exchanges can be done
(which is $2^{N_{samp}}-1$ for a one-tailed and
$2^{N_{samp}-1}-1$ for a two-tailed test, not counting the
veridical distribution), instead of randomly exchanging conditions
the null is formed from using all possible exchanges. This is known
as a permutation test (or exact test).
End of explanation
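For a single voxel, the exact sign-flipping test just described can be sketched in plain Python. This is an illustrative re-implementation of the idea (with the identity assignment included in the null set for simplicity), not MNE's vectorized code:

```python
import itertools

def exact_sign_flip_p(diffs):
    """Two-tailed exact p-value for a one-sample test on paired differences.

    Under the null each difference is equally likely to carry either sign,
    so the null distribution of |mean| is built from all 2**n sign
    assignments."""
    n = len(diffs)
    observed = abs(sum(diffs) / n)
    null = [abs(sum(s * d for s, d in zip(signs, diffs)) / n)
            for signs in itertools.product((1, -1), repeat=n)]
    # Proportion of null values at least as extreme as the observed one
    return sum(v >= observed for v in null) / len(null)

p = exact_sign_flip_p([1.2, 0.8, 1.5, 0.9, 1.1, 1.3])
print(p)  # → 0.03125: only the all-plus and all-minus assignments are as extreme
```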
N = np.arange(1, 80)
alpha = 0.05
p_type_I = 1 - (1 - alpha) ** N
fig, ax = plt.subplots(figsize=(4, 3))
ax.scatter(N, p_type_I, 3)
ax.set(xlim=N[[0, -1]], ylim=[0, 1], xlabel=r'$N_{\mathrm{test}}$',
ylabel=u'Probability of at least\none type I error')
ax.grid(True)
fig.tight_layout()
fig.show()
Explanation: Multiple comparisons
So far, we have done no correction for multiple comparisons. This is
potentially problematic for these data because there are
$40 \cdot 40 = 1600$ tests being performed. If we use a threshold
p < 0.05 for each individual test, we would expect many voxels to be declared
significant even if there were no true effect. In other words, we would make
many type I errors (adapted from here <errors_>_):
.. rst-class:: skinnytable
+----------+--------+------------------+------------------+
|                   | Null hypothesis                     |
|                   +------------------+------------------+
|                   | True             | False            |
+==========+========+==================+==================+
|          |        | Type I error     | Correct          |
|          | Yes    | False positive   | True positive    |
+ Reject   +--------+------------------+------------------+
|          |        | Correct          | Type II error    |
|          | No     | True Negative    | False negative   |
+----------+--------+------------------+------------------+
To see why, consider a standard $\alpha = 0.05$.
For a single test, our probability of making a type I error is 0.05.
The probability of making at least one type I error in
$N_{\mathrm{test}}$ independent tests is then given by
$1 - (1 - \alpha)^{N_{\mathrm{test}}}$:
End of explanation
titles.append('Bonferroni')
ts.append(ts[-1])
ps.append(bonferroni_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: To combat this problem, several methods exist. Typically these
provide control over either one of the following two measures:
Familywise error rate (FWER) <fwer_>_
The probability of making one or more type I errors:
.. math::
\mathrm{P}(N_{\mathrm{type\ I}} \geq 1 \mid H_0)
False discovery rate (FDR) <fdr_>_
The expected proportion of rejected null hypotheses that are
actually true:
.. math::
\mathrm{E}(\frac{N_{\mathrm{type\ I}}}{N_{\mathrm{reject}}}
\mid N_{\mathrm{reject}} > 0) \cdot
\mathrm{P}(N_{\mathrm{reject}} > 0 \mid H_0)
We cover some techniques that control FWER and FDR below.
Bonferroni correction
Perhaps the simplest way to deal with multiple comparisons, Bonferroni
correction <https://en.wikipedia.org/wiki/Bonferroni_correction>__
conservatively multiplies the p-values by the number of comparisons to
control the FWER.
End of explanation
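The correction itself is simple arithmetic — multiply each p-value by the number of tests and cap at 1. A plain-Python sketch of the idea (an illustration, not MNE's bonferroni_correction):

```python
def bonferroni(pvals):
    """Scale each p-value by the number of tests, capping at 1
    so the results remain valid probabilities."""
    n = len(pvals)
    return [min(p * n, 1.0) for p in pvals]

corrected = bonferroni([0.01, 0.04, 0.30])
# With three tests, each p-value is tripled (and capped at 1)
```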
titles.append('FDR')
ts.append(ts[-1])
ps.append(fdr_correction(ps[0])[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: False discovery rate (FDR) correction
Typically FDR is performed with the Benjamini-Hochberg procedure, which
is less restrictive than Bonferroni correction for large numbers of
comparisons (fewer type II errors), but provides less strict control of type
I errors.
End of explanation
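The Benjamini-Hochberg adjustment can also be sketched in plain Python. This is an illustrative re-implementation of the standard procedure (sort ascending, scale the i-th smallest p-value by n/i, enforce monotonicity from the largest rank down), not MNE's fdr_correction:

```python
def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values (plain-Python sketch)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):  # largest rank first
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

adjusted = fdr_bh([0.01, 0.02, 0.03, 0.50])
# The three small p-values each adjust to about 0.04; the large one stays 0.5
```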
titles.append(r'$\mathbf{Perm_{max}}$')
out = permutation_t_test(X, verbose=False)[:2]
ts.append(out[0])
ps.append(out[1])
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Non-parametric resampling test with a maximum statistic
Non-parametric resampling tests can also be used to correct for multiple
comparisons. In its simplest form, we again do permutations using
exchangeability under the null hypothesis, but this time we take the
maximum statistic across all voxels in each permutation to form the
null distribution. The p-value for each voxel from the veridical data
is then given by the proportion of null distribution values
that were smaller.
This method has two important features:
1. It controls FWER.
2. It is non-parametric. Even though our initial test statistic
   (here a 1-sample t-test) is parametric, the null
   distribution for the null hypothesis rejection (the mean value across
   subjects is indistinguishable from zero) is obtained by permutations.
   This means that it makes no assumptions of Gaussianity
   (which do hold for this example, but do not in general for some types
   of processed neuroimaging data).
End of explanation
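On a tiny dataset, the max-statistic procedure can be carried out exactly in plain Python. The sketch below uses |mean| as the per-voxel statistic for brevity (rather than a t-value) and is an illustration of the algorithm, not MNE's implementation:

```python
import itertools

def max_stat_pvals(data):
    """Exact max-statistic permutation p-values.

    data: one inner list per subject, one value per voxel. For every sign
    assignment the maximum |mean| across voxels is recorded; each voxel's
    p-value is the proportion of null maxima at least as large as its
    observed |mean|. Taking the maximum is what provides FWER control."""
    n_sub, n_vox = len(data), len(data[0])

    def abs_means(signs):
        return [abs(sum(subj[v] * s for subj, s in zip(data, signs)) / n_sub)
                for v in range(n_vox)]

    observed = abs_means([1] * n_sub)
    null_max = [max(abs_means(signs))
                for signs in itertools.product((1, -1), repeat=n_sub)]
    return [sum(m >= o for m in null_max) / len(null_max) for o in observed]

# Voxel 0 carries a signal; voxel 1 is noise around zero
data = [[1.0, 0.1], [1.2, -0.2], [0.9, 0.05], [1.1, -0.1], [1.3, 0.15]]
pvals = max_stat_pvals(data)
print(pvals)  # → [0.0625, 1.0]
```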
from sklearn.feature_extraction.image import grid_to_graph # noqa: E402
mini_adjacency = grid_to_graph(3, 3).toarray()
assert mini_adjacency.shape == (9, 9)
print(mini_adjacency[0])
Explanation: Clustering
Each of the aforementioned multiple comparisons corrections have the
disadvantage of not fully incorporating the correlation structure of the
data, namely that points close to one another (e.g., in space or time) tend
to be correlated. However, by defining the adjacency/neighbor
structure in our data, we can use clustering to compensate.
To use this, we need to rethink our null hypothesis. Instead
of thinking about a null hypothesis about means per voxel (with one
independent test per voxel), we consider a null hypothesis about sizes
of clusters in our data, which could be stated like:
The distribution of spatial cluster sizes observed in two experimental
conditions is drawn from the same probability distribution.
Here we only have a single condition and we contrast to zero, which can
be thought of as:
The distribution of spatial cluster sizes is independent of the sign
of the data.
In this case, we again do permutations with a maximum statistic, but, under
each permutation, we:
1. Compute the test statistic for each voxel individually.
2. Threshold the test statistic values.
3. Cluster voxels that exceed this threshold (with the same sign) based on
   adjacency.
4. Retain the size of the largest cluster (measured, e.g., by a simple voxel
   count, or by the sum of voxel t-values within the cluster) to build the
   null distribution.
After doing these permutations, the cluster sizes in our veridical data
are compared to this null distribution. The p-value associated with each
cluster is again given by the proportion of smaller null distribution
values. This can then be subjected to a standard p-value threshold
(e.g., p < 0.05) to reject the null hypothesis (i.e., find an effect of
interest).
This reframing to consider cluster sizes rather than individual means
maintains the advantages of the standard non-parametric permutation
test -- namely controlling FWER and making no assumptions of parametric
data distribution.
Critically, though, it also accounts for the correlation structure in the
data -- which in this toy case is spatial but in general can be
multidimensional (e.g., spatio-temporal) -- because the null distribution
will be derived from data in a way that preserves these correlations.
.. sidebar:: Effect size
For a nice description of how to compute the effect size obtained
in a cluster test, see this
`FieldTrip mailing list discussion <ft_cluster_effect_size_>`_.
However, there is a drawback. If a cluster significantly deviates from
the null, no further inference on the cluster (e.g., peak location) can be
made, as the entire cluster as a whole is used to reject the null.
Moreover, because the test statistic concerns the full data, the null
hypothesis (and our rejection of it) refers to the structure of the full
data. For more information, see also the comprehensive
FieldTrip tutorial <ft_cluster_>_.
Defining the adjacency matrix
First we need to define our adjacency (sometimes called "neighbors") matrix.
This is a square array (or sparse matrix) of shape (n_src, n_src) that
contains zeros and ones to define which spatial points are neighbors, i.e.,
which voxels are adjacent to each other. In our case this
is quite simple, as our data are aligned on a rectangular grid.
Let's pretend that our data were smaller -- a 3 x 3 grid. Thinking about
each voxel as being connected to the other voxels it touches, we would
need a 9 x 9 adjacency matrix. The first row of this matrix contains the
voxels in the flattened data that the first voxel touches. Since it touches
the second element in the first row and the first element in the second row
(and is also a neighbor to itself), this would be::
[1, 1, 0, 1, 0, 0, 0, 0, 0]
:mod:sklearn.feature_extraction provides a convenient function for this:
End of explanation
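For a small grid, the same neighbor structure is easy to build by hand. The sketch below (4-connectivity plus self-connections, with an illustrative helper name) reproduces the first row shown above:

```python
def grid_adjacency(rows, cols):
    """Adjacency matrix (including self-connections) for a rows x cols grid
    with 4-connectivity, mirroring the structure described in the text."""
    n = rows * cols
    adj = [[0] * n for _ in range(n)]
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    adj[i][rr * cols + cc] = 1
    return adj

adj = grid_adjacency(3, 3)
print(adj[0])  # → [1, 1, 0, 1, 0, 0, 0, 0, 0]
```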
titles.append('Clustering')
# Reshape data to what is equivalent to (n_samples, n_space, n_time)
X.shape = (n_subjects, width, width)
# Compute threshold from t distribution (this is also the default)
threshold = stats.distributions.t.ppf(1 - alpha, n_subjects - 1)
t_clust, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None,
n_permutations=n_permutations, out_type='mask')
# Put the cluster data in a viewable format
p_clust = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_clust[cl] = p
ts.append(t_clust)
ps.append(p_clust)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: In general the adjacency between voxels can be more complex, such as
those between sensors in 3D space, or time-varying activation at brain
vertices on a cortical surface. MNE provides several convenience functions
for computing adjacency matrices (see the
Statistics API <api_reference_statistics>).
Standard clustering
Here, since our data are on a grid, we can use adjacency=None to
trigger optimized grid-based code, and run the clustering algorithm.
End of explanation
titles.append(r'$\mathbf{C_{hat}}$')
stat_fun_hat = partial(ttest_1samp_no_p, sigma=sigma)
t_hat, clusters, p_values, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
p_hat = np.ones((width, width))
for cl, p in zip(clusters, p_values):
p_hat[cl] = p
ts.append(t_hat)
ps.append(p_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: "Hat" variance adjustment
This method can also be used in this context to correct for small
variances :footcite:RidgwayEtAl2012:
End of explanation
titles.append(r'$\mathbf{C_{TFCE}}$')
threshold_tfce = dict(start=0, step=0.2)
t_tfce, _, p_tfce, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None,
n_permutations=n_permutations, out_type='mask')
ts.append(t_tfce)
ps.append(p_tfce)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: Threshold-free cluster enhancement (TFCE)
TFCE eliminates the free parameter initial threshold value that
determines which points are included in clustering by approximating
a continuous integration across possible threshold values with a standard
Riemann sum <https://en.wikipedia.org/wiki/Riemann_sum>__
:footcite:SmithNichols2009.
This requires giving a starting threshold start and a step
size step, which in MNE is supplied as a dict.
The smaller the step and closer to 0 the start value,
the better the approximation, but the longer it takes.
A significant advantage of TFCE is that, rather than modifying the
statistical null hypothesis under test (from one about individual voxels
to one about the distribution of clusters in the data), it modifies the data
under test while still controlling for multiple comparisons.
The statistical test is then done at the level of individual voxels rather
than clusters. This allows for evaluation of each point
independently for significance rather than only as cluster groups.
End of explanation
titles.append(r'$\mathbf{C_{hat,TFCE}}$')
t_tfce_hat, _, p_tfce_hat, H0 = permutation_cluster_1samp_test(
X, n_jobs=1, threshold=threshold_tfce, adjacency=None, out_type='mask',
n_permutations=n_permutations, stat_fun=stat_fun_hat, buffer_size=None)
ts.append(t_tfce_hat)
ps.append(p_tfce_hat)
mccs.append(True)
plot_t_p(ts[-1], ps[-1], titles[-1], mccs[-1])
Explanation: We can also combine TFCE and the "hat" correction:
End of explanation
fig = plt.figure(facecolor='w', figsize=(14, 3))
assert len(ts) == len(titles) == len(ps)
for ii in range(len(ts)):
ax = [fig.add_subplot(2, 10, ii + 1, projection='3d'),
fig.add_subplot(2, 10, 11 + ii)]
plot_t_p(ts[ii], ps[ii], titles[ii], mccs[ii], ax)
fig.tight_layout(pad=0, w_pad=0.05, h_pad=0.1)
plt.show()
Explanation: Visualize and compare methods
Let's take a look at these statistics. The top row shows each test statistic,
and the bottom shows p-values for various statistical tests, with the ones
with proper control over FWER or FDR with bold titles.
End of explanation |
7,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Package
Step1: Another utility class
Step2: Scaling features to a range
Example
Step3: MaxAbsScaler works in a very similar fashion, but scales in a way that the training data lies within the range [-1,1] by dividing through the largest maximum value in each feature. It is used for data that is already centered at zero or sparse data.
Scaling sparse data
MaxAbsScaler and maxabs_scale were specifically designed for scaling sparse data, and are the recommend way to go about this.
...
Scaling data with outliers
If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. You can use robust_scale and RobustScaler as drop-in replacements instead.
...
Centering kernel matrices
...
Normalization
Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.
The function normalize provides a quick and easy way to perform this operation on a single array-like dataset, either using the l1 or l2 norms
Step4: Sparse input
normalize and Normalizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Binarization
Feature binarization is the process of thresholding numerical features to get boolean values.
...
As for the Normalizer, the utility class Binarizer is meant to be used in the early stages of sklearn.pipeline.Pipeline.
Step5: The preprocessing module provides a companion function binarize to be used when the transformer API is not necessary.
binarize and Binarizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Encoding categorical features
Integer representation cannot be used directly with scikit-learn estimators, as these expect continuous input, and would interpret the categories as being ordered, which is often not desired.
One possibility to convert categorical features to features that can be used with scikit-learn estimators is to use a one-of-K or one-hot encoding, which is implemented in OneHotEncoder. This estimator transforms each categorical feature with m possible values into m binary features, with only one active. | Python Code:
from sklearn import preprocessing
import numpy as np
X = np.array([[1.,-1.,2.],
[2., 0.,0.],
[0., 1.,-1.]])
X_scaled = preprocessing.scale(X)
X_scaled
X_scaled.mean(axis = 0)
X_scaled.std(axis=0)
Explanation: Package: sklearn.preprocessing
change raw feature vectors into a representation that is more suitable for the downstream estimators
Standardization, or mean removal and variance scaling
Standardization of datasets is a common requirement for many machine learning estimators implemented in the scikit. They might behave badly if the individual features do not more or less look like standard normally distributed data:Gaussian with zero mean and unit variance
The function scale provides a quick and easy way to perform this operation on a single array-like dataset:
End of explanation
scaler = preprocessing.StandardScaler().fit(X)
scaler
scaler.mean_
scaler.scale_
scaler.transform(X)
scaler.transform([[-1.,1.,0.]])
Explanation: Another utility class: StandardScaler, that implements the Transformer API to compute the mean and standard deviation on a training dataset so as to be able to later reapply the same transformation on the testing set.
End of explanation
X_train = np.array([[1., -1., 2.],
[2., 0., 0.],
[0., 1.,-1.]])
min_max_scaler = preprocessing.MinMaxScaler()
X_train_minmax = min_max_scaler.fit_transform(X_train)
X_train_minmax
X_test = np.array([[-3., -1., 4.]])
X_test_minmax = min_max_scaler.transform(X_test)
X_test_minmax
min_max_scaler.scale_
min_max_scaler.min_
Explanation: Scaling features to a range
Example: scaling feature to lie between a given minimum and maximum value, often between zero and one. Or the maximum absolute value of each feature is scaled to unit size
Use MinMaxScaler or MaxAbsScaler.
End of explanation
X = [[ 1., -1., 2.],
[ 2., 0., 0.],
[ 0., 1.,-1.]]
X_normalized = preprocessing.normalize(X,norm='l2')
X_normalized
Explanation: MaxAbsScaler works in a very similar fashion, but scales the data so that the training set lies within the range [-1,1], by dividing by the largest absolute value in each feature. It is used for data that is already centered at zero or sparse data.
Scaling sparse data
MaxAbsScaler and maxabs_scale were specifically designed for scaling sparse data, and are the recommended way to go about this.
...
Scaling data with outliers
If your data contains many outliers, scaling using the mean and variance of the data is likely to not work very well. You can use robust_scale and RobustScaler as drop-in replacements instead.
...
Centering kernel matrices
...
Normalization
Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.
The function normalize provides a quick and easy way to perform this operation on a single array-like dataset, either using the l1 or l2 norms:
End of explanation
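Before moving on, the robust and max-abs scaling variants mentioned above can be sketched with plain NumPy. This is only an illustrative approximation of what RobustScaler and MaxAbsScaler compute with their default settings, applied to the same toy matrix:

```python
import numpy as np

X = np.array([[1., -1., 2.],
              [2., 0., 0.],
              [0., 1., -1.]])

# Robust scaling: centre each column on its median and divide by its IQR,
# which is roughly what RobustScaler does with default settings.
median = np.median(X, axis=0)
q75, q25 = np.percentile(X, [75, 25], axis=0)
X_robust = (X - median) / (q75 - q25)

# Max-abs scaling: divide each column by its largest absolute value,
# so every feature ends up within [-1, 1].
X_maxabs = X / np.abs(X).max(axis=0)
print(X_robust)
print(X_maxabs)
```

For real pipelines the scikit-learn classes are preferable, since they store the statistics learned on the training set and can reapply them to test data.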
X = [[ 1., -1., 2.],
[ 2., 0., 0.],
[ 0., 1.,-1.]]
binarizer = preprocessing.Binarizer().fit(X) # fit does nothing
binarizer
binarizer.transform(X)
binarizer = preprocessing.Binarizer(threshold=1.1)
binarizer.transform(X)
Explanation: Sparse input
normalize and Normalizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Binarization
Feature binarization is the process of thresholding numerical features to get boolean values.
...
As for the Normalizer, the utility class Binarizer is meant to be used in the early stages of sklearn.pipeline.Pipeline.
End of explanation
enc = preprocessing.OneHotEncoder()
enc.fit([[0,0,3],[1,1,0],[0,2,1],[1,0,2]])
enc.transform([[0,1,3]]).toarray()
Explanation: The preprocessing module provides a companion function binarize to be used when the transformer API is not necessary.
binarize and Binarizer accept both dense array-like and sparse matrices from scipy.sparse as input.
Encoding categorical features
Integer representation cannot be used directly with scikit-learn estimators, as these expect continuous input, and would interpret the categories as being ordered, which is often not desired.
One possibility to convert categorical features to features that can be used with scikit-learn estimators is to use a one-of-K or one-hot encoding, which is implemented in OneHotEncoder. This estimator transforms each categorical feature with m possible values into m binary features, with only one active.
End of explanation |
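To make the one-of-K idea concrete without scikit-learn, here is a minimal sketch; the function name and category set are made up for illustration:

```python
# Each of the m possible values of a categorical feature becomes one
# binary column, with exactly one column active per sample.
def one_hot(value, categories):
    return [1 if value == c else 0 for c in categories]

categories = ['red', 'green', 'blue']  # hypothetical category set
print(one_hot('green', categories))    # -> [0, 1, 0]
```

OneHotEncoder does the same expansion, but additionally learns the category sets from the training data and returns a (possibly sparse) matrix.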
7,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SQL Exercises
Teacher
We want to create a query that passes the names of all teachers for each course and each student to a program.
In the later printout, however, the form only has room for 2 teacher names.
If there is only one teacher, write the name into the first returned column and fill the second column with blanks or NULL.
With exactly two teachers, return the teacher names in ascending order.
With more than two teachers, the output should show the first teacher name in the first column; in the following column the word "More" should appear.
The source table looks as follows.
CREATE TABLE Register
(course_nbr INTEGER NOT NULL,
student_name CHAR(10) NOT NULL,
teacher_name CHAR(10) NOT NULL,
..);
Step1: Solution 1
Step2: Solution 2
Uses a CASE construct to check the number of teachers found
Step3: Solution 3
Free seats in the restaurant
Imagine you run a large restaurant with 1000 seats.
You want an overview of the free seats, i.e. of the free blocks between occupied seats.
Bsp | Python Code:
%load_ext sql
%sql mysql://steinam:steinam@localhost/celko
%%sql
select * from Register;
Explanation: SQL Exercises
Teacher
We want to create a query that passes the names of all teachers for each course and each student to a program.
In the later printout, however, the form only has room for 2 teacher names.
If there is only one teacher, write the name into the first returned column and fill the second column with blanks or NULL.
With exactly two teachers, return the teacher names in ascending order.
With more than two teachers, the output should show the first teacher name in the first column; in the following column the word "More" should appear.
The source table looks as follows.
CREATE TABLE Register
(course_nbr INTEGER NOT NULL,
student_name CHAR(10) NOT NULL,
teacher_name CHAR(10) NOT NULL,
..);
End of explanation
%%sql
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1, NULL
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) = 1
UNION
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1,
MAX(R1.teacher_name) as Teacher_2
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) = 2
UNION
SELECT R1.course_nbr, R1.student_name,
MIN(R1.teacher_name) as Teacher_1, '--More--' as Teacher_2
FROM Register AS R1
GROUP BY R1.course_nbr, R1.student_name
HAVING COUNT(*) > 2;
Explanation: Solution 1
End of explanation
%%sql
SELECT course_nbr, student_name, MIN(teacher_name) as Teacher_1,
CASE COUNT(*) WHEN 1 THEN NULL
WHEN 2 THEN MAX(teacher_name)
ELSE '--More--' END as Teacher_2
FROM Register
GROUP BY course_nbr, student_name;
%%sql
-- andere Syntax, evtl verständlicher
SELECT course_nbr, student_name, MIN(teacher_name) as Teacher_1,
CASE WHEN COUNT(*) = 1 THEN NULL
WHEN COUNT(*) = 2 THEN MAX(teacher_name)
ELSE '--More--' END as Teacher_2
FROM Register
GROUP BY course_nbr, student_name;
Explanation: Solution 2
Uses a CASE construct to check the number of teachers found
End of explanation
%%sql
create table seats
(seat integer)
insert into seats(seat) values(0);
insert into seats(seat) values(1001);
insert into seats(seat) values(101);
CREATE VIEW Firstseat (seat)
AS SELECT (seat + 1)
FROM seats
WHERE (seat + 1) NOT IN
(SELECT seat FROM seats)
AND (seat + 1) < 1001;
CREATE VIEW Lastseat (seat)
AS SELECT (seat - 1)
FROM seats
WHERE (seat - 1) NOT IN
(SELECT seat FROM seats)
AND (seat - 1) > 0;
-- nutzt die beiden Views
SELECT F1.seat AS start, L1.seat AS finish,
((L1.seat - F1.seat) + 1) AS available
FROM Firstseat F1, Lastseat L1
WHERE L1.seat = (SELECT MIN(L2.seat)
FROM Lastseat AS L2
WHERE F1.seat <= L2.seat)
order by start;
-- braucht keinen view
SELECT (R1.seat + 1) AS start,
(MIN(R2.seat) - 1) AS finish,
abs((R1.seat + 1) - (MIN(R2.seat))) as free
FROM seats AS R1
INNER JOIN
seats AS R2
ON R2.seat > R1.seat
GROUP BY R1.seat
HAVING (R1.seat + 1) < MIN(R2.seat);
Explanation: Solution 3
Free seats in the restaurant
Imagine you run a large restaurant with 1000 seats.
You want an overview of the free seats, i.e. of the free blocks between occupied seats.
Example:
If seat 101 is occupied, a query would look as follows.
| Start | Finish | Free |
|--------|--------|-------|
| 1 | 100 | 100 |
| 102 | 1000 | 899 |
Write a routine in your favourite programming language to solve the problem.
Task
Write one SQL statement each that finds the first and the last seats of a gap.
In the example above these would be 1 and 102 for the first free seats, and 100 and 1000 for the last free seats.
End of explanation |
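As a complement to the SQL solutions, the requested routine can also be sketched in Python; the function name free_blocks and its signature are hypothetical, not part of the exercise:

```python
def free_blocks(occupied, total_seats=1000):
    """Return (start, finish, length) tuples for every run of free seats."""
    taken = sorted(set(occupied))
    blocks = []
    prev = 0  # sentinel seat just before seat 1
    for seat in taken + [total_seats + 1]:  # sentinel just after the last seat
        if seat - prev > 1:
            blocks.append((prev + 1, seat - 1, seat - prev - 1))
        prev = seat
    return blocks

print(free_blocks([101]))  # -> [(1, 100, 100), (102, 1000, 899)]
```

The two sentinels play the same role as the derived "first seat of a gap" and "last seat of a gap" views in the SQL solution.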
7,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost, using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found
Our work will be organized in the following order
Step1: Set columns 'Well Name' and 'Formation' to be category
Step2: Check distribution of classes in whole dataset
Step3: Check distribution of classes in each well
Step4: We can see that classes are very imbalanced in each well
Step5: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metrics for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
Step6: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
Step7: Before proceeding further, we define a function which will help us create XGBoost models and perform cross-validation.
Step8: General Approach for Parameter Tuning
We are going to perform the steps as follows
Step9: Step 2
Step10: Step 3
Step11: Step 5
Step12: Step 6
Step13: Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
Step14: Use final model to predict the given test data set | Python Code:
%matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score, roc_auc_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold, cross_val_score, LeavePGroupsOut
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
#use a fixed seed for reproducibility
seed = 123
np.random.seed(seed)
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head(10)
Explanation: In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost, using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found
Our work will be organized in the following order:
•Background
•Exploratory Data Analysis
•Data Preparation and Model Selection
•Final Results
Background
The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1.Nonmarine sandstone
2.Nonmarine coarse siltstone
3.Nonmarine fine siltstone
4.Marine siltstone and shale
5.Mudstone (limestone)
6.Wackestone (limestone)
7.Dolomite
8.Packstone-grainstone (limestone)
9.Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies/ Label/ Adjacent Facies
1 SS 2
2 CSiS 1,3
3 FSiS 2
4 SiSh 5
5 MS 4,6
6 WS 5,7
7 D 6,8
8 PS 6,7,9
9 BS 7,8
Exploratory Data Analysis
After the background introduction, we start by importing the pandas library for some basic data analysis and manipulation. matplotlib and seaborn are imported for data visualization.
End of explanation
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
Explanation: Set columns 'Well Name' and 'Formation' to be category
End of explanation
plt.figure(figsize=(5,5))
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
Explanation: Check distribution of classes in whole dataset
End of explanation
wells = training_data['Well Name'].unique()
plt.figure(figsize=(15,9))
for index, w in enumerate(wells):
ax = plt.subplot(2,5,index+1)
facies_counts = pd.Series(np.zeros(9), index=range(1,10))
facies_counts = facies_counts.add(training_data[training_data['Well Name']==w]['Facies'].value_counts().sort_index())
#facies_counts.replace(np.nan,0)
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title=w)
ax.set_ylim(0,160)
Explanation: Check distribution of classes in each well
End of explanation
plt.figure(figsize=(5,5))
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
Explanation: We can see that classes are very imbalanced in each well
End of explanation
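One quick way to put a number on the imbalance noted above is the ratio between the most and least frequent class; the label counts below are made up purely for illustration:

```python
from collections import Counter

# Hypothetical facies label counts for one well.
labels = ['SS'] * 50 + ['CSiS'] * 200 + ['D'] * 10
counts = Counter(labels)
imbalance_ratio = max(counts.values()) / min(counts.values())
print(imbalance_ratio)  # -> 20.0
```

Ratios this large suggest using stratified splits (as done below with StratifiedKFold) so every fold sees all classes.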
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
features = ['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']
Explanation: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metrics for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
Explanation: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
End of explanation
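To make the two metrics concrete, here is a self-contained sketch applying the same logic to a toy 3-class confusion matrix; the matrix, the function names and the adjacency list are invented for illustration:

```python
import numpy as np

def toy_accuracy(conf):
    # Fraction of predictions on the diagonal of the confusion matrix.
    return np.trace(conf) / conf.sum()

def toy_accuracy_adjacent(conf, adjacent):
    # Also count predictions landing in a class adjacent to the true one.
    correct = np.trace(conf)
    for i in range(conf.shape[0]):
        for j in adjacent[i]:
            correct += conf[i][j]
    return correct / conf.sum()

conf_toy = np.array([[5, 1, 0],
                     [2, 6, 0],
                     [0, 1, 5]])
toy_adjacent = [[1], [0, 2], [1]]
print(toy_accuracy(conf_toy))                         # -> 0.8
print(toy_accuracy_adjacent(conf_toy, toy_adjacent))  # -> 1.0
```

The adjacent variant is more forgiving because neighboring facies gradually blend into one another, so confusing a class with its neighbor is a mild error.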
skf = StratifiedKFold(n_splits=5)
cv = skf.split(X_train, Y_train)
def modelfit(alg, Xtrain, Ytrain, useTrainCV=True, cv_fold=skf):
#Fit the algorithm on the data
alg.fit(Xtrain, Ytrain,eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(Xtrain)
#dtrain_predprob = alg.predict_proba(Xtrain)[:,1]
    #Print model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(Ytrain,dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(Ytrain,dtrain_prediction,average='micro'))
#Perform cross-validation:
if useTrainCV:
cv_score = cross_val_score(alg, Xtrain, Ytrain, cv=cv_fold, scoring='f1_micro')
print ("CV Score : Mean - %.7g | Std - %.7g | Min - %.7g | Max - %.7g" %
(np.mean(cv_score), np.std(cv_score), np.min(cv_score), np.max(cv_score)))
    #Print Feature Importance
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
Explanation: Before proceeding further, we define a function which will help us create XGBoost models and perform cross-validation.
End of explanation
xgb1= XGBClassifier(
learning_rate=0.05,
objective = 'multi:softmax',
nthread = 4,
seed = seed
)
xgb1
modelfit(xgb1, X_train, Y_train)
Explanation: General Approach for Parameter Tuning
We are going to perform the steps as follows:
1.Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems.
2.Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
3.Tune tree-based parameters(max_depth, min_child_weight, gamma, subsample, colsample_bytree) for the decided learning rate and number of trees.
4.Tune regularization parameters(lambda, alpha) for xgboost which can help reduce model complexity and enhance performance.
5.Lower the learning rate and decide the optimal parameters.
Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
In order to decide on boosting parameters, we need to set some initial values of other parameters. Let's take the following values:
1.max_depth = 5
2.min_child_weight = 1
3.gamma = 0
4.subsample, colsample_bytree = 0.8 : This is a commonly used start value.
5.scale_pos_weight = 1
Please note that all the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. The function defined above will do it for us.
End of explanation
param_test1={
'n_estimators':range(20, 100, 10)
}
gs1 = GridSearchCV(xgb1,param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs1.fit(X_train, Y_train)
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
gs1.best_estimator_
param_test2={
'max_depth':range(5,16,2),
'min_child_weight':range(1,15,2)
}
gs2 = GridSearchCV(gs1.best_estimator_,param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs2.fit(X_train, Y_train)
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
modelfit(gs2.best_estimator_, X_train, Y_train)
Explanation: Step 2: Tune max_depth and min_child_weight
End of explanation
param_test3={
'gamma':[0,.05,.1,.15,.2,.3,.4],
'subsample':[0.6,.7,.75,.8,.85,.9],
'colsample_bytree':[i/10.0 for i in range(4,10)]
}
gs3 = GridSearchCV(gs2.best_estimator_,param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs3.fit(X_train, Y_train)
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
gs3.best_estimator_
modelfit(gs3.best_estimator_,X_train,Y_train)
Explanation: Step 3: Tune gamma
End of explanation
param_test4={
'reg_alpha':[0, 1e-5, 1e-2, 0.1, 0.2],
'reg_lambda':[0, .25,.5,.75,.1]
}
gs4 = GridSearchCV(gs3.best_estimator_,param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs4.fit(X_train, Y_train)
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
modelfit(gs4.best_estimator_,X_train, Y_train)
gs4.best_estimator_
param_test5={
'reg_alpha':[.15,0.2,.25,.3,.4],
}
gs5 = GridSearchCV(gs4.best_estimator_,param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=skf)
gs5.fit(X_train, Y_train)
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
modelfit(gs5.best_estimator_, X_train, Y_train)
gs5.best_estimator_
Explanation: Step 5: Tuning Regularization Parameters
End of explanation
xgb4 = XGBClassifier(
learning_rate = 0.025,
n_estimators=120,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb4,X_train, Y_train)
xgb5 = XGBClassifier(
learning_rate = 0.00625,
n_estimators=480,
max_depth=7,
min_child_weight=7,
gamma = 0.05,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.2,
reg_lambda =0.75,
objective='multi:softmax',
nthread =4,
seed = seed,
)
modelfit(xgb5,X_train, Y_train)
Explanation: Step 6: Reducing Learning Rate
End of explanation
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
X_train_nowell = X_train.drop(['Well Name'], axis=1)
Y_train = data['Facies' ] - 1
# Final recommended model based on the extensive parameters search
model_final = gs5.best_estimator_
model_final.fit( X_train_nowell , Y_train , eval_metric = 'merror' )
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
    # Train the model on this iteration's training wells (re-fitting here
    # avoids evaluating on a well the model has already seen)
    model_final.fit(train_X, train_Y, eval_metric='merror')
# Predict on the test set
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
Explanation: Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
End of explanation
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Re-fit on the full training set, then predict facies of unclassified data
model_final.fit(X_train_nowell, Y_train, eval_metric='merror')
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction4.csv')
test_data[test_data['Well Name']=='STUART'].head()
test_data[test_data['Well Name']=='CRAWFORD'].head()
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
make_facies_log_plot(
test_data[test_data['Well Name'] == 'STUART'],
facies_colors)
Explanation: Use final model to predict the given test data set
End of explanation |
7,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <h1 style="text-align
Step2: Once the Naive Bayes Classifier has been trained with the train() method, we can use it to classify new elements | Python Code:
from collections import Counter, defaultdict
import numpy as np
class NaiveBaseClass:
def calculate_relative_occurences(self, list1):
no_examples = len(list1)
ro_dict = dict(Counter(list1))
for key in ro_dict.keys():
ro_dict[key] = ro_dict[key] / float(no_examples)
return ro_dict
    def get_max_value_key(self, d1):
        # Return the key with the largest value; in Python 3 dict views
        # have no .index method, so use max() with a key function.
        return max(d1, key=d1.get)
def initialize_nb_dict(self):
self.nb_dict = {}
for label in self.labels:
self.nb_dict[label] = defaultdict(list)
class NaiveBayes(NaiveBaseClass):
    """
    Naive Bayes Classifier method:
    It is trained with a 2D-array X (dimensions m,n) and a 1D array Y (dimension 1,n).
    X should have one column per feature (total n) and one row per training example (total m).
    After training a hash table is filled with the class probabilities per feature.
    We start with an empty hash table nb_dict, which has the form:
    nb_dict = {
        'class1': {
            'feature1': [],
            'feature2': [],
            (...)
            'featuren': []
        },
        'class2': {
            'feature1': [],
            'feature2': [],
            (...)
            'featuren': []
        }
    }
    """
def train(self, X, Y):
self.labels = np.unique(Y)
no_rows, no_cols = np.shape(X)
        self.initialize_nb_dict()
self.class_probabilities = self.calculate_relative_occurences(Y)
#iterate over all classes
for label in self.labels:
#first we get a list of indices per class, so we can take a subset X_ of the matrix X, containing data of only that class.
row_indices = np.where(Y == label)[0]
X_ = X[row_indices, :]
#in this subset, we iterate over all the columns/features, and add all values of each feature to the hash table nb_dict
no_rows_, no_cols_ = np.shape(X_)
for jj in range(0,no_cols_):
                self.nb_dict[label][jj] += list(X_[:,jj])
#Now we have a Hash table containing all occurences of feature values, per feature, per class
#We need to transform this Hash table to a Hash table with relative feature value occurences per class
for label in self.labels:
for jj in range(0,no_cols):
                self.nb_dict[label][jj] = self.calculate_relative_occurences(self.nb_dict[label][jj])
Explanation: <h1 style="text-align: left;">Introduction:</h1>
<a href="https://en.wikipedia.org/wiki/Machine_learning" target="_blank">Machine Learning</a> is a vast area of Computer Science that is concerned with designing algorithms which form good models of the world around us (the data coming from the world around us).
Within Machine Learning many tasks are - or can be reformulated as - classification tasks.
In classification tasks we are trying to produce a model which can give the correlation between the input data $X$ and the class $C$ each input belongs to. This model is formed with the feature-values of the input-data. For example, the dataset contains datapoints belonging to the classes Apples, Pears and Oranges and based on the features of the datapoints (weight, color, size etc) we are trying to predict the class.
We need some amount of training data to train the Classifier, i.e. form a correct model of the data. We can then use the trained Classifier to classify new data. If the training dataset is chosen correctly, the Classifier should predict the class probabilities of the new data with a similar accuracy (as it does for the training examples).
After construction, such a Classifier could for example tell us that document containing the words "Bose-Einstein condensate" should be categorized as a Physics article, while documents containing the words "Arbitrage" and "Hedging" should be categorized as a Finance article.
Another Classifier (whose dataset is illustrated below) could tell whether or not a person makes <a href="https://archive.ics.uci.edu/ml/datasets/Adult" target="_blank">more than 50K</a>, based on features such as Age, Education, Marital Status, Occupation etc.
As we can see, there is an input dataset $X$ which corresponds to an output $Y$. The dataset $X$ contains $m$ input examples $x^{(1)}, x^{(2)}, .. , x^{(m)}$, and each input example has $n$ feature values $x_1, x_2, ..., x_n$ (here $n\ =\ 7$).
There are three popular Classifiers within Machine Learning, which use three different mathematical approaches to classify data;
- Naive Bayes, which uses a statistical (Bayesian) approach,
- Logistic Regression, which uses a functional approach and
- Support Vector Machines, which uses a geometrical approach.
Previously we have already looked at <a href="http://ataspinar.com/2016/05/07/regression-logistic-regression-and-maximum-entropy-part-2-code-examples/" target="_blank">Logistic Regression</a>. Here we will see the theory behind the Naive Bayes Classifier together with its implementation in Python.
<h1 style="text-align: left;"><a name="ch2"></a>2. Naive Bayes Classification:</h1>
Naive Bayes classifiers are trying to classify data from a Statistical point of view.
The starting point is that the probability that datapoint $x^{i}$ belongs to class $C\ =\ c_j$ is given by the <a href="https://en.wikipedia.org/wiki/Posterior_probability">posterior probability</a> $P(C\ |\ x^{i})$. Here $x^{i}$ refers to an entry in the test set, consisting of $n$ features: $x_1, x_2, ..., x_n$.
Using Bayes' rule, this posterior probability can be rewritten as:
$ P(C=c_j\ |\ x^{i}) = \frac{P(x^{i}\ |\ C=c_j) \cdot P(C=c_j)}{P(x^{i})} $
<br>
Since the marginal probability $P(x^{i})$ does not depend on the classes, it can be disregarded and the equation becomes:
$ P(C=c_j\ |\ x^{i}) = P(x^{i}\ |\ C=c_j) \cdot P(C=c_j) $
<br>
The training example $x^{(i)} $ belongs to the class $c_j$ which maximizes this probability, so:
$ C_{NB} = argmax\ P(x^{(i)}|C=c_j) \cdot P(C=c_j) $
$ C_{NB} = argmax\ P(x_1, x_2, .., x_n | C=c_j) \cdot P(C=c_j) $
<br>
Assuming <a href="https://en.wikipedia.org/wiki/Conditional_independence">conditional independence</a> of the features $ x_k$, this equation simplifies to:
$ C_{NB} = argmax\ P(x_1|C) \cdot P(x_2|C) \cdots P(x_n|C) \cdot P(C) $
$ C_{NB} = argmax\ P(C) \cdot \prod_i P(x_i|C) $
<br>
Here $P(x_i | C)$ is the conditional probability of observing feature value $x_i$ given class $C$.
This probability can simply be calculated as the relative frequency of each value of feature $i$ within a class.
This should become clearer if we look at the '50K income' example above:
<br>
First, we select all of the entries belonging to one class:
<br>
<br>
Then we calculate the relative frequency of the values of each feature (per class):
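A minimal sketch of that per-class frequency computation — the helper calculate_relative_occurences referenced in the code above is not shown in this fragment, so the real implementation may differ:

```python
from collections import Counter

def calculate_relative_occurences(values):
    # map each distinct value to its relative frequency within the list
    counts = Counter(values)
    total = float(len(values))
    return {value: count / total for value, count in counts.items()}
```

For example, `calculate_relative_occurences(['US', 'US', 'MX', 'CA'])` returns `{'US': 0.5, 'MX': 0.25, 'CA': 0.25}`.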
New entries can be classified by multiplying the probabilities of each feature per class:<br>
if a new entry has for the three features illustrated above, the following values: <br>
+ native-country: United-states,
+ hours-per-week: 40,
+ occupation: Exec-managerial.
Then based on these features, the class probabilities will be:<br>
$P( C = C_{>50K})\ =\ (1/3) \cdot (2/3) \cdot (1/3) = (2/27) $<br>
$P( C = C_{<=50K})\ =\ (2/3) \cdot (2/3) \cdot (2/3) = (8/27) $<br>
The predicted class for this new entry therefore would be '<=50K'.
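The arithmetic of this toy example can be checked directly with exact fractions (not part of the original notebook; as in the text, the class priors are left out, i.e. assumed equal):

```python
from fractions import Fraction

# per-class probabilities of the three observed feature values
# (native-country, hours-per-week, occupation), read off the tables above
feature_probs = {
    '>50K':  [Fraction(1, 3), Fraction(2, 3), Fraction(1, 3)],
    '<=50K': [Fraction(2, 3), Fraction(2, 3), Fraction(2, 3)],
}

scores = {}
for label, probs in feature_probs.items():
    score = Fraction(1)
    for p in probs:
        score *= p
    scores[label] = score

predicted = max(scores, key=scores.get)
print(scores)     # {'>50K': Fraction(2, 27), '<=50K': Fraction(8, 27)}
print(predicted)  # <=50K
```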
In practice we of course have much more features, and thousands/millions of training examples, but the way Naive Bayes classification works remains the same.
So we need to make a Hash table, containing the feature probabilities.
Once such a Hash table is made, new entries can be classified by multiplying the probabilities of each feature value, per class.
The code to train a Naive Bayes Classifier looks as follows.
End of explanation
def classify_single_elem(self, X_elem):
Y_dict = {}
#First we determine the class-probability of each class, and then we determine the class with the highest probability
for label in self.labels:
class_probability = self.class_probabilities[label]
for ii in range(0,len(X_elem)):
relative_feature_values = self.nb_dict[label][ii]
if X_elem[ii] in relative_feature_values.keys():
class_probability *= relative_feature_values[X_elem[ii]]
else:
#unseen feature value: this sets the whole class probability to zero (Laplace smoothing would avoid this)
class_probability *= 0
Y_dict[label] = class_probability
return self.get_max_value_key(Y_dict)
Explanation: Once the Naive Bayes Classifier has been trained with the train() method, we can use it to classify new elements:
End of explanation |
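One caveat of classify_single_elem above: a feature value never seen for a class multiplies that class probability by zero, wiping out all other evidence. A common remedy — not implemented in this notebook — is Laplace (add-one) smoothing, sketched here:

```python
def smoothed_probability(value, value_counts, total, n_distinct_values, alpha=1.0):
    # add-alpha (Laplace) smoothing: unseen values get a small nonzero probability
    count = value_counts.get(value, 0)
    return (count + alpha) / (total + alpha * n_distinct_values)
```

With counts `{'US': 3, 'MX': 1}` (total 4) over 3 distinct values, an unseen `'CA'` gets (0 + 1)/(4 + 3) = 1/7 instead of 0.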
7,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One of the newest features of BigBang is the ability to analyze git info for each project. For now, we mostly just look at commits over time. We can also analyze individual committers to run cohort visualization.
First, make sure that you've collected git and mail data. For now, we are looking at scipy, but you can analyze any git repo you'd like by loading its info. Below, we load the mail and git data into data tables.
Step1: The code below graphs the commits per day and commits per week for scipy. As we will see later, consolidating commits into larger time periods allows for smoother graphs. As you can see, the weekly graph is slightly smoother. We will find some more ways to smooth these lines.
Step2: With some convolution, the two jagged graphs make much more sense. This plots commits per week and emails per week. The fact that we have the git and mail data for the same project lets us analyze the relationship between emails and commits. We can look at whether or not weeks where there is a lot of emailing are followed by weeks of many commits. We can even go down to the individual level and analyze each committer/emailer with questions like "Is a person less likely to commit if they email a lot?"
Step3: This is the top 20 (or fewer) committers to a project. An interesting question to answer for the future would be whether or not these committers are more likely to be in the same cohort.
Step4: Below, one can see cohort visualization of commits. Each cohort is a group of committers that started working on the project around the same time. The first cohort is the first 1/5th of people to start committing, the second cohort is the second 1/5th of people to start committing, and so on. For Scipy, the first cohort of committers tends to dominate, while the second has recently taken some more charge.
url = "http://mail.python.org/pipermail/scipy-dev/"
arx = Archive(url,archive_dir="../archives")
repo = repo_loader.get_repo("bigbang")
full_info = repo.commit_data;
act = arx.data.groupby("Date").size();
act = act.resample("D").sum()
act = act[act.index.year <= 2014]
act_week = act.resample("W").sum()
print((full_info["Parent Commit"]))
Explanation: One of the newest features of BigBang is the ability to analyze git info for each project. For now, we mostly just look at commits over time. We can also analyze individual committers to run cohort visualization.
First, make sure that you've collected git and mail data. For now, we are looking at scipy, but you can analyze any git repo you'd like by loading its info. Below, we load the mail and git data into data tables.
End of explanation
fig = plt.figure(figsize=(10, 7.5));
commits_per_day = repo.commits_per_day()
commits_per_week = repo.commits_per_week()
commits_per_day.plot()
fig = plt.figure(figsize=(10, 7.5));
commits_per_week.plot()
Explanation: The code below graphs the commits per day and commits per week for scipy. As we will see later, consolidating commits into larger time periods allows for smoother graphs. As you can see, the weekly graph is slightly smoother. We will find some more ways to smooth these lines.
End of explanation
fig = plt.figure(figsize=(10, 7.5));
simp = 5
convolution_array = [1.0/(simp) for n in range(simp)];
c_array = np.convolve(commits_per_week, convolution_array, "same")
e_array = np.convolve(act_week, convolution_array, "same");
plt.plot(act_week.index, e_array) # emails per week (blue)
plt.plot(commits_per_week.index, c_array) # commits per week (green)
fig.axes[0].xaxis_date()
Explanation: With some convolution, the two jagged graphs make much more sense. This plots commits per week and emails per week. The fact that we have the git and mail data for the same project lets us analyze the relationship between emails and commits. We can look at whether or not weeks where there is a lot of emailing are followed by weeks of many commits. We can even go down to the individual level and analyze each committer/emailer with questions like "Is a person less likely to commit if they email a lot?"
End of explanation
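The question above — do email-heavy weeks precede commit-heavy weeks? — can be probed with a simple lagged correlation. BigBang itself does not provide this helper; the following is a pure-Python sketch (with the notebook's data you would pass e_array and c_array):

```python
def pearson(a, b):
    # plain Pearson correlation coefficient between two equal-length series
    n = len(a)
    mean_a, mean_b = sum(a) / float(n), sum(b) / float(n)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    std_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (std_a * std_b)

def lagged_correlation(emails, commits, max_lag=8):
    # correlation between emails[t] and commits[t + lag], for each lag
    return {lag: pearson(emails[:len(emails) - lag], commits[lag:])
            for lag in range(max_lag + 1)}
```

If commits echo emails some weeks later, the maximum of the returned dictionary lands at that lag.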
plt.figure(figsize=(10, 7.5));
df = repo.by_committer();
if (len(df > 20)):
df = df[len(df)-20:]
df.plot(kind="bar")
Explanation: This is the top 20 (or fewer) committers to a project. An interesting question to answer for the future would be whether or not these committers are more likely to be in the same cohort.
End of explanation
n = 5
import numpy as np
def first_commit_fn(df):
if (len(df) < 1):
return;
else:
return df
dataFrame = full_info
commits_by_time = dataFrame.groupby(["Committer Name", dataFrame['Time'].map(lambda x: x.toordinal()/100)], sort=True).size();
time = dataFrame.groupby(dataFrame['Time'].map(lambda x: x.toordinal()/100)).size().sort_values();
first_commits = dataFrame.groupby("Committer Name").min().sort_values("Time");
commits_by_time = (commits_by_time.reindex(index = time.index.values, level=1, fill_value=0))
cohorts = np.array_split(first_commits, n);
convolution_array = [.1,.1,.1,.1,.1,.1,.1,.1,.1,.1];
cohort_activity = [(commits_by_time.loc[cohort.index.values].sum(None, False, 1, False)).reindex(index = time.index.values) for cohort in cohorts];
for i in range(len(cohort_activity)):
cohort_activity[i] = np.convolve(cohort_activity[i], convolution_array)
to_graph = pd.DataFrame(cohort_activity).transpose()
to_graph.plot(kind="bar",stacked=True, linewidth=0)
byCommitter = repo.by_committer();
totalCohortCommits = [];
for cohort in cohorts:
cohortPeople = byCommitter.reindex(cohort.index);
totalCohortCommits.append(cohortPeople.sum())
commitsPerCohort = pd.DataFrame(totalCohortCommits);
commitsPerCohort.transpose().plot(kind="bar")
Explanation: Below, one can see cohort visualization of commits. Each cohort is a group of committers that started working on the project around the same time. The first cohort is the first 1/5th of people to start committing, the second cohort is the second 1/5th of people to start committing, and so on. For Scipy, the first cohort of committers tends to dominate, while the second has recently taken some more charge.
End of explanation |
7,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initializing filters on known motifs
In the scenario where data is scarce, it is often useful to initialize the filters of the first convolutional layer to some known position weight matrices (PWMs). That way, the model already starts with a parameter configuration much closer to the 'right' one.
Concise provides access to three PWM databases
Step1: Let's choose PUM2 PWM (RBP in Human)
Step2: Visualization - PWM class
The PWM class provides a method plotPWM to visualize the PWM.
Step3: We can select the PWM with id 129.
Step4: Initialize the conv filters with PWM values
Step5: ci.PSSMKernelInitializer will set the filters of the first convolutional layer to the values of the position-specific scoring matrix (PSSM)
Step6: Test-set performance
Step7: Filter visualization | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
# RBP PWM's
from concise.data import attract
dfa = attract.get_metadata()
dfa
# TF PWM's
from concise.data import encode
dfe = encode.get_metadata()
dfe
# TF PWM's
from concise.data import hocomoco
dfh = hocomoco.get_metadata()
dfh
Explanation: Initializing filters on known motifs
In the scenario where data is scarce, it is often useful to initialize the filters of the first convolutional layer to some known position weight matrices (PWMs). That way, the model already starts with a parameter configuration much closer to the 'right' one.
Concise provides access to three PWM databases:
transcription factors from ENCODE (2067 PWMs)
transcription factors from HOCOMOCO v10 (640 PWMs)
RNA-binding proteins from ATtrACT (1583 PWMs)
Find the motif of interest
Each PWM database is provided as a module under concise.data. It provides two functions:
concise.data.<db>.get_metadata() - returns a pandas.DataFrame with metadata information about each PWM
concise.data.<db>.get_pwm_list() - given a list of PWM ids, return a list with concise.utils.pwm.PWM instances
Metadata tables
End of explanation
dfa_pum2 = dfa[dfa.Gene_name.str.match("PUM2") & \
dfa.Organism.str.match("Homo_sapiens")]
dfa_pum2
Explanation: Let's choose PUM2 PWM (RBP in Human):
End of explanation
# Visualize the PUM2 Motifs from different experiments
from concise.utils.pwm import PWM
dfa_pum2_uniq = dfa_pum2[["Experiment_description", "PWM_id"]].drop_duplicates()
pwm_list = attract.get_pwm_list(dfa_pum2_uniq.PWM_id)
for i, pwm in enumerate(pwm_list):
print("PWM_id:", pwm.name, "; Experiment_description:", dfa_pum2_uniq.Experiment_description.iloc[i])
pwm.plotPWM(figsize=(3,1))
Explanation: Visualization - PWM class
The PWM class provides a method plotPWM to visualize the PWM.
End of explanation
pwm_list = [pwm for pwm in pwm_list if pwm.name == "129"]
pwm_list
Explanation: We can select the PWM with id 129.
End of explanation
import concise.layers as cl
import keras.layers as kl
import concise.initializers as ci
import concise.regularizers as cr
from keras.callbacks import EarlyStopping
from concise.preprocessing import encodeDNA
from keras.models import Model, load_model
from keras.optimizers import Adam
# get the data
def load(split="train", st=None):
dt = pd.read_csv("../data/RBP/PUM2_{0}.csv".format(split))
# DNA/RNA sequence
xseq = encodeDNA(dt.seq) # list of sequences -> np.ndarray
# response variable
y = dt.binding_site.values.reshape((-1, 1)).astype("float")
return {"seq": xseq}, y
train, valid, test = load("train"), load("valid"), load("test")
# deduce sequence length
seq_length = train[0]["seq"].shape[1]
# define the model
def model(train, filters=1, kernel_size=9, pwm_list=None, lr=0.001):
seq_length = train[0]["seq"].shape[1]
if pwm_list is None:
kinit = "glorot_uniform"
binit = "zeros"
else:
kinit = ci.PSSMKernelInitializer(pwm_list, add_noise_before_Pwm2Pssm=True)
binit = "zeros"
# sequence
in_dna = cl.InputDNA(seq_length=seq_length, name="seq")
x = cl.ConvDNA(filters=filters,
kernel_size=kernel_size,
activation="relu",
kernel_initializer=kinit,
bias_initializer=binit,
name="conv1")(in_dna)
x = kl.AveragePooling1D(pool_size=4)(x)
x = kl.Flatten()(x)
x = kl.Dense(units=1)(x)
m = Model(in_dna, x)
m.compile(Adam(lr=lr), loss="binary_crossentropy", metrics=["acc"])
return m
Explanation: Initialize the conv filters with PWM values
End of explanation
# create two models: with and without PWM initialization
m_rand_init = model(train, filters=3, pwm_list=None) # random initialization
m_pwm_init = model(train, filters=3, pwm_list=pwm_list) # motif initialization
print("Random initialization:")
m_rand_init.get_layer("conv1").plot_weights(figsize=(3, 5));
print("Known PWM initialization:")
m_pwm_init.get_layer("conv1").plot_weights(figsize=(3, 5));
# train the models
m_rand_init.fit(train[0], train[1], epochs=50, validation_data=valid,
verbose=0,
callbacks=[EarlyStopping(patience=5)])
m_pwm_init.fit(train[0], train[1], epochs=50, validation_data=valid,
verbose=0,
callbacks=[EarlyStopping(patience=5)]);
Explanation: ci.PSSMKernelInitializer will set the filters of the first convolutional layer to the values of the position-specific scoring matrix (PSSM):
$$ pssm_{ij} = log \frac{pwm_{ij}}{b_j} \;,$$
where $b_j$ is the background probability of observing base $j$.
We add Gaussian noise to each individual filter. Let's visualize the filters:
End of explanation
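The PWM-to-PSSM transform above is just an element-wise log-ratio. A minimal sketch, assuming a uniform background $b_j = 0.25$ over A/C/G/T (the actual initializer additionally adds the noise mentioned above):

```python
import math

def pwm_to_pssm(pwm, background=(0.25, 0.25, 0.25, 0.25)):
    # pwm: list of per-position [p_A, p_C, p_G, p_T] probabilities
    # pssm_ij = log(pwm_ij / b_j)
    return [[math.log(p / b) for p, b in zip(row, background)]
            for row in pwm]
```

A uniform position maps to an all-zero score row; enriched bases get positive scores and depleted bases negative ones.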
import concise.eval_metrics as cem
# performance on the test-set
# Random initialization
print("Random initialization auPR:", cem.auprc(test[1], m_rand_init.predict(test[0])))
# PWM initialization
print("Known PWM initialization auPR:", cem.auprc(test[1], m_pwm_init.predict(test[0])))
Explanation: Test-set performance
End of explanation
m_rand_init.get_layer("conv1").plot_weights(plot_type="motif_pwm_info", figsize=(3, 5));
m_pwm_init.get_layer("conv1").plot_weights(plot_type="motif_pwm_info", figsize=(3, 5));
Explanation: Filter visualization
End of explanation |
7,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization with mystic
Step2: mystic
Step4: Diagnostic tools
Callbacks
Step6: NOTE IPython does not handle shell prompt interactive programs well, so the above should be run from a command prompt. An IPython-safe version is below.
Step8: Monitors
Step9: Solution trajectory and model plotting
Step11: Solver "tuning" and extension
Solver class interface
Step12: Algorithm configurability
Termination conditions
Step14: Solver population
EXERCISE
Step17: Range (i.e. 'box') constraints
Symbolic constraints interface
Step19: Penalty functions
Step22: "Operators" that directly constrain search space
Step25: Special cases
Integer and mixed integer programming
Step27: EXERCISE
Step30: Linear and quadratic constraints | Python Code:
%matplotlib inline
Explanation: Optimization with mystic
End of explanation
Example:
- Minimize Rosenbrock's Function with Nelder-Mead.
- Plot of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using retall
# Nelder-Mead solver
from mystic.solvers import fmin
# Rosenbrock function
from mystic.models import rosen
# tools
import pylab
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
# use Nelder-Mead to minimize the Rosenbrock function
solution = fmin(rosen, x0, disp=0, retall=1)
allvecs = solution[-1]
# plot the parameter trajectories
pylab.plot([i[0] for i in allvecs])
pylab.plot([i[1] for i in allvecs])
pylab.plot([i[2] for i in allvecs])
# draw the plot
pylab.title("Rosenbrock parameter convergence")
pylab.xlabel("Nelder-Mead solver iterations")
pylab.ylabel("parameter value")
pylab.legend(["x", "y", "z"])
pylab.show()
Explanation: mystic: approximates the scipy.optimize interface
End of explanation
Example:
- Minimize Rosenbrock's Function with Nelder-Mead.
- Dynamic plot of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using callback
- solver interactivity
# Nelder-Mead solver
from mystic.solvers import fmin
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.tools import getch
import pylab
pylab.ion()
# draw the plot
def plot_frame():
pylab.title("Rosenbrock parameter convergence")
pylab.xlabel("Nelder-Mead solver iterations")
pylab.ylabel("parameter value")
pylab.draw()
return
iter = 0
step, xval, yval, zval = [], [], [], []
# plot the parameter trajectories
def plot_params(params):
global iter, step, xval, yval, zval
step.append(iter)
xval.append(params[0])
yval.append(params[1])
zval.append(params[2])
pylab.plot(step,xval,'b-')
pylab.plot(step,yval,'g-')
pylab.plot(step,zval,'r-')
pylab.legend(["x", "y", "z"])
pylab.draw()
iter += 1
return
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
# suggest that the user interacts with the solver
print "NOTE: while solver is running, press 'Ctrl-C' in console window"
getch()
plot_frame()
# use Nelder-Mead to minimize the Rosenbrock function
solution = fmin(rosen, x0, disp=1, callback=plot_params, handler=True)
print solution
# don't exit until user is ready
getch()
Explanation: Diagnostic tools
Callbacks
End of explanation
Example:
- Minimize Rosenbrock's Function with Powell's method.
- Dynamic print of parameter convergence to function minimum.
Demonstrates:
- standard models
- minimal solver interface
- parameter trajectories using callback
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
iter = 0
# plot the parameter trajectories
def print_params(params):
global iter
from numpy import asarray
print "Generation %d has best fit parameters: %s" % (iter,asarray(params))
iter += 1
return
if __name__ == '__main__':
# initial guess
x0 = [0.8,1.2,0.7]
print_params(x0)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, disp=1, callback=print_params, handler=False)
print solution
Explanation: NOTE IPython does not handle shell prompt interactive programs well, so the above should be run from a command prompt. An IPython-safe version is below.
End of explanation
Example:
- Minimize Rosenbrock's Function with Powell's method.
Demonstrates:
- standard models
- minimal solver interface
- customized monitors
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.monitors import VerboseLoggingMonitor
if __name__ == '__main__':
print "Powell's Method"
print "==============="
# initial guess
x0 = [1.5, 1.5, 0.7]
# configure monitor
stepmon = VerboseLoggingMonitor(1,1)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, itermon=stepmon)
print solution
import mystic
mystic.log_reader('log.txt')
Explanation: Monitors
End of explanation
import mystic
mystic.model_plotter(mystic.models.rosen, 'log.txt', depth=True, scale=1, bounds="-2:2:.1, -2:2:.1, 1")
Explanation: Solution trajectory and model plotting
End of explanation
Example:
- Solve 8th-order Chebyshev polynomial coefficients with DE.
- Callable plot of fitting to Chebyshev polynomial.
- Monitor Chi-Squared for Chebyshev polynomial.
Demonstrates:
- standard models
- expanded solver interface
- built-in random initial guess
- customized monitors and termination conditions
- customized DE mutation strategies
- use of monitor to retrieve results information
# Differential Evolution solver
from mystic.solvers import DifferentialEvolutionSolver2
# Chebyshev polynomial and cost function
from mystic.models.poly import chebyshev8, chebyshev8cost
from mystic.models.poly import chebyshev8coeffs
# tools
from mystic.termination import VTR
from mystic.strategy import Best1Exp
from mystic.monitors import VerboseMonitor
from mystic.tools import getch, random_seed
from mystic.math import poly1d
import pylab
pylab.ion()
# draw the plot
def plot_exact():
pylab.title("fitting 8th-order Chebyshev polynomial coefficients")
pylab.xlabel("x")
pylab.ylabel("f(x)")
import numpy
x = numpy.arange(-1.2, 1.2001, 0.01)
exact = chebyshev8(x)
pylab.plot(x,exact,'b-')
pylab.legend(["Exact"])
pylab.axis([-1.4,1.4,-2,8],'k-')
pylab.draw()
return
# plot the polynomial
def plot_solution(params,style='y-'):
import numpy
x = numpy.arange(-1.2, 1.2001, 0.01)
f = poly1d(params)
y = f(x)
pylab.plot(x,y,style)
pylab.legend(["Exact","Fitted"])
pylab.axis([-1.4,1.4,-2,8],'k-')
pylab.draw()
return
if __name__ == '__main__':
print "Differential Evolution"
print "======================"
# set range for random initial guess
ndim = 9
x0 = [(-100,100)]*ndim
random_seed(123)
# draw frame and exact coefficients
plot_exact()
# configure monitor
stepmon = VerboseMonitor(50)
# use DE to solve 8th-order Chebyshev coefficients
npop = 10*ndim
solver = DifferentialEvolutionSolver2(ndim,npop)
solver.SetRandomInitialPoints(min=[-100]*ndim, max=[100]*ndim)
solver.SetGenerationMonitor(stepmon)
solver.enable_signal_handler()
solver.Solve(chebyshev8cost, termination=VTR(0.01), strategy=Best1Exp, \
CrossProbability=1.0, ScalingFactor=0.9, \
sigint_callback=plot_solution)
solution = solver.Solution()
# use monitor to retrieve results information
iterations = len(stepmon)
cost = stepmon.y[-1]
print "Generation %d has best Chi-Squared: %f" % (iterations, cost)
# use pretty print for polynomials
print poly1d(solution)
# compare solution with actual 8th-order Chebyshev coefficients
print "\nActual Coefficients:\n %s\n" % poly1d(chebyshev8coeffs)
# plot solution versus exact coefficients
plot_solution(solution)
from mystic.solvers import DifferentialEvolutionSolver
print "\n".join([i for i in dir(DifferentialEvolutionSolver) if not i.startswith('_')])
Explanation: Solver "tuning" and extension
Solver class interface
End of explanation
from mystic.termination import VTR, ChangeOverGeneration, And, Or
stop = Or(And(VTR(), ChangeOverGeneration()), VTR(1e-8))
from mystic.models import rosen
from mystic.monitors import VerboseMonitor
from mystic.solvers import DifferentialEvolutionSolver
solver = DifferentialEvolutionSolver(3,40)
solver.SetRandomInitialPoints([-10,-10,-10],[10,10,10])
solver.SetGenerationMonitor(VerboseMonitor(10))
solver.SetTermination(stop)
solver.SetObjective(rosen)
solver.SetStrictRanges([-10,-10,-10],[10,10,10])
solver.SetEvaluationLimits(generations=600)
solver.Solve()
print solver.bestSolution
Explanation: Algorithm configurability
Termination conditions
End of explanation
from mystic.constraints import *
from mystic.penalty import quadratic_equality
from mystic.coupler import inner
from mystic.math import almostEqual
from mystic.tools import random_seed
random_seed(213)
def test_penalize():
from mystic.math.measures import mean, spread
def mean_constraint(x, target):
return mean(x) - target
def range_constraint(x, target):
return spread(x) - target
@quadratic_equality(condition=range_constraint, kwds={'target':5.0})
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
def penalty(x):
return 0.0
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
from numpy import array
x = array([1,2,3,4,5])
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_solve():
from mystic.math.measures import mean
def mean_constraint(x, target):
return mean(x) - target
def parameter_constraint(x):
return x[-1] - x[0]
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
@quadratic_equality(condition=parameter_constraint)
def penalty(x):
return 0.0
x = solve(penalty, guess=[2,3,1])
assert round(mean_constraint(x, 5.0)) == 0.0
assert round(parameter_constraint(x)) == 0.0
assert issolution(penalty, x)
def test_solve_constraint():
from mystic.math.measures import mean
@with_mean(1.0)
def constraint(x):
x[-1] = x[0]
return x
x = solve(constraint, guess=[2,3,1])
assert almostEqual(mean(x), 1.0, tol=1e-15)
assert x[-1] == x[0]
assert issolution(constraint, x)
def test_as_constraint():
from mystic.math.measures import mean, spread
def mean_constraint(x, target):
return mean(x) - target
def range_constraint(x, target):
return spread(x) - target
@quadratic_equality(condition=range_constraint, kwds={'target':5.0})
@quadratic_equality(condition=mean_constraint, kwds={'target':5.0})
def penalty(x):
return 0.0
ndim = 3
constraints = as_constraint(penalty, solver='fmin')
#XXX: this is expensive to evaluate, as there are nested optimizations
from numpy import arange
x = arange(ndim)
_x = constraints(x)
assert round(mean(_x)) == 5.0
assert round(spread(_x)) == 5.0
assert round(penalty(_x)) == 0.0
def cost(x):
return abs(sum(x) - 5.0)
npop = ndim*3
from mystic.solvers import diffev
y = diffev(cost, x, npop, constraints=constraints, disp=False, gtol=10)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 5.0*(ndim-1)
def test_as_penalty():
from mystic.math.measures import mean, spread
@with_spread(5.0)
@with_mean(5.0)
def constraint(x):
return x
penalty = as_penalty(constraint)
from numpy import array
x = array([1,2,3,4,5])
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(spread(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_with_penalty():
from mystic.math.measures import mean, spread
@with_penalty(quadratic_equality, kwds={'target':5.0})
def penalty(x, target):
return mean(x) - target
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin
from numpy import array
x = array([1,2,3,4,5])
y = fmin(cost, x, penalty=penalty, disp=False)
assert round(mean(y)) == 5.0
assert round(cost(y)) == 4*(5.0)
def test_with_mean():
from mystic.math.measures import mean, impose_mean
@with_mean(5.0)
def mean_of_squared(x):
return [i**2 for i in x]
from numpy import array
x = array([1,2,3,4,5])
y = impose_mean(5, [i**2 for i in x])
assert mean(y) == 5.0
assert mean_of_squared(x) == y
def test_with_mean_spread():
from mystic.math.measures import mean, spread, impose_mean, impose_spread
@with_spread(50.0)
@with_mean(5.0)
def constrained_squared(x):
return [i**2 for i in x]
from numpy import array
x = array([1,2,3,4,5])
y = impose_spread(50.0, impose_mean(5.0,[i**2 for i in x]))
assert almostEqual(mean(y), 5.0, tol=1e-15)
assert almostEqual(spread(y), 50.0, tol=1e-15)
assert constrained_squared(x) == y
def test_constrained_solve():
from mystic.math.measures import mean, spread
@with_spread(5.0)
@with_mean(5.0)
def constraints(x):
return x
def cost(x):
return abs(sum(x) - 5.0)
from mystic.solvers import fmin_powell
from numpy import array
x = array([1,2,3,4,5])
y = fmin_powell(cost, x, constraints=constraints, disp=False)
assert almostEqual(mean(y), 5.0, tol=1e-15)
assert almostEqual(spread(y), 5.0, tol=1e-15)
assert almostEqual(cost(y), 4*(5.0), tol=1e-6)
if __name__ == '__main__':
test_penalize()
test_solve()
test_solve_constraint()
test_as_constraint()
test_as_penalty()
test_with_penalty()
test_with_mean()
test_with_mean_spread()
test_constrained_solve()
Example:
- Minimize Rosenbrock's Function with Powell's method.
Demonstrates:
- standard models
- minimal solver interface
- parameter constraints solver and constraints factory decorator
- statistical parameter constraints
- customized monitors
# Powell's Directional solver
from mystic.solvers import fmin_powell
# Rosenbrock function
from mystic.models import rosen
# tools
from mystic.monitors import VerboseMonitor
from mystic.math.measures import mean, impose_mean
from mystic.math import almostEqual
if __name__ == '__main__':
print "Powell's Method"
print "==============="
# initial guess
x0 = [0.8,1.2,0.7]
# use the mean constraints factory decorator
from mystic.constraints import with_mean
# define constraints function
@with_mean(1.0)
def constraints(x):
# constrain the last x_i to be the same value as the first x_i
x[-1] = x[0]
return x
# configure monitor
stepmon = VerboseMonitor(1)
# use Powell's method to minimize the Rosenbrock function
solution = fmin_powell(rosen, x0, constraints=constraints, itermon=stepmon)
print solution
Explanation: Solver population
EXERCISE: Use mystic to find the minimum for the peaks test function, with the bounds specified by the mystic.models.peaks documentation.
EXERCISE: Use mystic to do a fit to the noisy data in the scipy.optimize.curve_fit example (the least squares fit).
Functional constraints
PENALTY: $\psi(x) = f(x) + k*p(x)$
CONSTRAINT: $\psi(x) = f(c(x)) = f(x')$
End of explanation
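The distinction above can be made concrete with a toy problem — minimize $f(x) = \sum_i x_i^2$ subject to $\sum_i x_i = 1$. A penalty adds a weighted violation term to the cost, while a constraint maps each candidate onto the feasible set before evaluation. This is a plain-Python sketch of the two couplings, not mystic's API:

```python
def f(x):
    return sum(xi ** 2 for xi in x)

def p(x):
    # violation of the equality constraint sum(x) == 1
    return (sum(x) - 1.0) ** 2

def penalized(x, k=100.0):
    # PENALTY: psi(x) = f(x) + k*p(x) -- infeasible x is allowed but costed
    return f(x) + k * p(x)

def c(x):
    # CONSTRAINT: project x onto the feasible set (subtract the mean excess)
    shift = (sum(x) - 1.0) / len(x)
    return [xi - shift for xi in x]

def constrained(x):
    # psi(x) = f(c(x)) -- the solver only ever "sees" feasible points
    return f(c(x))
```

In mystic the same split appears as the penalty= versus constraints= keywords used in the examples above.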
%%file spring.py
"a Tension-Compression String"
def objective(x):
x0,x1,x2 = x
return x0**2 * x1 * (x2 + 2)
bounds = [(0,100)]*3
# with penalty='penalty' applied, solution is:
xs = [0.05168906, 0.35671773, 11.28896619]
ys = 0.01266523
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
1.0 - (x1**3 * x2)/(71785*x0**4) <= 0.0
(4*x1**2 - x0*x1)/(12566*x0**3 * (x1 - x0)) + 1./(5108*x0**2) - 1.0 <= 0.0
1.0 - 140.45*x0/(x2 * x1**2) <= 0.0
(x0 + x1)/1.5 - 1.0 <= 0.0
"""
pf = generate_penalty(generate_conditions(equations), k=1e12)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf, npop=40,
gtol=500, disp=True, full_output=True)
print(result[0])
equations = """
1.0 - (x1**3 * x2)/(71785*x0**4) <= 0.0
(4*x1**2 - x0*x1)/(12566*x0**3 * (x1 - x0)) + 1./(5108*x0**2) - 1.0 <= 0.0
1.0 - 140.45*x0/(x2 * x1**2) <= 0.0
(x0 + x1)/1.5 - 1.0 <= 0.0
"""
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
ineql, eql = generate_conditions(equations)
print("CONVERTED SYMBOLIC TO SINGLE CONSTRAINTS FUNCTIONS")
print(ineql)
print(eql)
print("\nTHE INDIVIDUAL INEQUALITIES")
for f in ineql:
    print(f.__doc__)
print("\nGENERATED THE PENALTY FUNCTION FOR ALL CONSTRAINTS")
pf = generate_penalty((ineql, eql))
print(pf.__doc__)
x = [-0.1, 0.5, 11.0]
print("\nPENALTY FOR {}: {}".format(x, pf(x)))
Explanation: Range (i.e. 'box') constraints
Symbolic constraints interface
End of explanation
equations = """
1.0 - (x1**3 * x2)/(71785*x0**4) <= 0.0
(4*x1**2 - x0*x1)/(12566*x0**3 * (x1 - x0)) + 1./(5108*x0**2) - 1.0 <= 0.0
1.0 - 140.45*x0/(x2 * x1**2) <= 0.0
(x0 + x1)/1.5 - 1.0 <= 0.0
"""
"a Tension-Compression String"
from spring import objective, bounds, xs, ys
from mystic.constraints import as_constraint
from mystic.penalty import quadratic_inequality
def penalty1(x): # <= 0.0
return 1.0 - (x[1]**3 * x[2])/(71785*x[0]**4)
def penalty2(x): # <= 0.0
return (4*x[1]**2 - x[0]*x[1])/(12566*x[0]**3 * (x[1] - x[0])) + 1./(5108*x[0]**2) - 1.0
def penalty3(x): # <= 0.0
return 1.0 - 140.45*x[0]/(x[2] * x[1]**2)
def penalty4(x): # <= 0.0
return (x[0] + x[1])/1.5 - 1.0
@quadratic_inequality(penalty1, k=1e12)
@quadratic_inequality(penalty2, k=1e12)
@quadratic_inequality(penalty3, k=1e12)
@quadratic_inequality(penalty4, k=1e12)
def penalty(x):
return 0.0
solver = as_constraint(penalty)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=penalty, npop=40,
gtol=500, disp=True, full_output=True)
print(result[0])
Explanation: Penalty functions
End of explanation
"""
Crypto problem in Google CP Solver.
Prolog benchmark problem
'''
Name : crypto.pl
Original Source: P. Van Hentenryck's book
Adapted by : Daniel Diaz - INRIA France
Date : September 1992
'''
"""
def objective(x):
return 0.0
nletters = 26
bounds = [(1,nletters)]*nletters
# with penalty='penalty' applied, solution is:
# A B C D E F G H I J K L M N O P Q
xs = [ 5, 13, 9, 16, 20, 4, 24, 21, 25, 17, 23, 2, 8, 12, 10, 19, 7, \
# R S T U V W X Y Z
11, 15, 3, 1, 26, 6, 22, 14, 18]
ys = 0.0
# constraints
equations = """
B + A + L + L + E + T - 45 == 0
C + E + L + L + O - 43 == 0
C + O + N + C + E + R + T - 74 == 0
F + L + U + T + E - 30 == 0
F + U + G + U + E - 50 == 0
G + L + E + E - 66 == 0
J + A + Z + Z - 58 == 0
L + Y + R + E - 47 == 0
O + B + O + E - 53 == 0
O + P + E + R + A - 65 == 0
P + O + L + K + A - 59 == 0
Q + U + A + R + T + E + T - 50 == 0
S + A + X + O + P + H + O + N + E - 134 == 0
S + C + A + L + E - 51 == 0
S + O + L + O - 37 == 0
S + O + N + G - 61 == 0
S + O + P + R + A + N + O - 82 == 0
T + H + E + M + E - 72 == 0
V + I + O + L + I + N - 100 == 0
W + A + L + T + Z - 34 == 0
"""
var = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
# Let's say we know the vowels.
bounds[0] = (5,5) # A
bounds[4] = (20,20) # E
bounds[8] = (25,25) # I
bounds[14] = (10,10) # O
bounds[20] = (1,1) # U
from mystic.constraints import unique, near_integers, has_unique
from mystic.symbolic import generate_penalty, generate_conditions
pf = generate_penalty(generate_conditions(equations,var),k=1)
from mystic.constraints import as_constraint
cf = as_constraint(pf)
from mystic.penalty import quadratic_equality
@quadratic_equality(near_integers)
@quadratic_equality(has_unique)
def penalty(x):
return pf(x)
from numpy import round, hstack, clip
def constraint(x):
x = round(x).astype(int) # force round and convert type to int
x = clip(x, 1,nletters) #XXX: hack to impose bounds
x = unique(x, range(1,nletters+1))
return x
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
from mystic.monitors import Monitor, VerboseMonitor
mon = VerboseMonitor(10)
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf,
constraints=constraint, npop=52, ftol=1e-8, gtol=1000,
disp=True, full_output=True, cross=0.1, scale=0.9, itermon=mon)
print(result[0])
Explanation: "Operators" that directly constrain search space
End of explanation
"""
Eq 10 in Google CP Solver.
Standard benchmark problem.
"""
def objective(x):
return 0.0
bounds = [(0,10)]*7
# with penalty='penalty' applied, solution is:
xs = [6., 0., 8., 4., 9., 3., 9.]
ys = 0.0
# constraints
equations = """
98527*x0 + 34588*x1 + 5872*x2 + 59422*x4 + 65159*x6 - 1547604 - 30704*x3 - 29649*x5 == 0.0
98957*x1 + 83634*x2 + 69966*x3 + 62038*x4 + 37164*x5 + 85413*x6 - 1823553 - 93989*x0 == 0.0
900032 + 10949*x0 + 77761*x1 + 67052*x4 - 80197*x2 - 61944*x3 - 92964*x5 - 44550*x6 == 0.0
73947*x0 + 84391*x2 + 81310*x4 - 1164380 - 96253*x1 - 44247*x3 - 70582*x5 - 33054*x6 == 0.0
13057*x2 + 42253*x3 + 77527*x4 + 96552*x6 - 1185471 - 60152*x0 - 21103*x1 - 97932*x5 == 0.0
1394152 + 66920*x0 + 55679*x3 - 64234*x1 - 65337*x2 - 45581*x4 - 67707*x5 - 98038*x6 == 0.0
68550*x0 + 27886*x1 + 31716*x2 + 73597*x3 + 38835*x6 - 279091 - 88963*x4 - 76391*x5 == 0.0
76132*x1 + 71860*x2 + 22770*x3 + 68211*x4 + 78587*x5 - 480923 - 48224*x0 - 82817*x6 == 0.0
519878 + 94198*x1 + 87234*x2 + 37498*x3 - 71583*x0 - 25728*x4 - 25495*x5 - 70023*x6 == 0.0
361921 + 78693*x0 + 38592*x4 + 38478*x5 - 94129*x1 - 43188*x2 - 82528*x3 - 69025*x6 == 0.0
"""
from mystic.symbolic import generate_penalty, generate_conditions
pf = generate_penalty(generate_conditions(equations))
from numpy import round as npround
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf,
constraints=npround, npop=40, gtol=50, disp=True, full_output=True)
print(result[0])
Explanation: Special cases
Integer and mixed integer programming
End of explanation
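The constraints=npround trick above works because a constraints function maps each candidate vector onto the feasible set before the objective is evaluated; rounding maps it onto the integer lattice. A sketch of the operator on its own (independent of mystic):

```python
import numpy as np

# a continuous candidate vector proposed by the solver
candidate = np.array([5.7, 0.2, 8.4, 3.6])

# the constraints operator snaps every component to the nearest integer
feasible = np.round(candidate)
print(feasible)  # -> [6. 0. 8. 4.]
```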
"Pressure Vessel Design"
def objective(x):
x0,x1,x2,x3 = x
return 0.6224*x0*x2*x3 + 1.7781*x1*x2**2 + 3.1661*x0**2*x3 + 19.84*x0**2*x2
bounds = [(0,1e6)]*4
# with penalty='penalty' applied, solution is:
xs = [0.72759093, 0.35964857, 37.69901188, 240.0]
ys = 5804.3762083
from mystic.symbolic import generate_constraint, generate_solvers, solve
from mystic.symbolic import generate_penalty, generate_conditions
equations = """
-x0 + 0.0193*x2 <= 0.0
-x1 + 0.00954*x2 <= 0.0
-pi*x2**2*x3 - (4/3.)*pi*x2**3 + 1296000.0 <= 0.0
x3 - 240.0 <= 0.0
"""
pf = generate_penalty(generate_conditions(equations), k=1e12)
if __name__ == '__main__':
from mystic.solvers import diffev2
from mystic.math import almostEqual
result = diffev2(objective, x0=bounds, bounds=bounds, penalty=pf, npop=40, gtol=500,
disp=True, full_output=True)
print(result[0])
Explanation: EXERCISE: Convert the following "Pressure Vessel Design" code to use explicit penalty functions and not symbolic constraints.
End of explanation
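A sketch of one way to start this exercise, mirroring the explicit-penalty pattern from the spring example above, written here in plain Python so the penalty logic is visible (the multiplier k=1e12 and the quadratic form are assumptions carried over from that example; only the first constraint is shown):

```python
def objective(x):
    x0, x1, x2, x3 = x
    return 0.6224*x0*x2*x3 + 1.7781*x1*x2**2 + 3.1661*x0**2*x3 + 19.84*x0**2*x2

def g1(x):
    # first symbolic constraint, violated when positive: -x0 + 0.0193*x2 <= 0.0
    return -x[0] + 0.0193 * x[2]

k = 1e12

def penalized(x):
    v = g1(x)
    return objective(x) + (k * v**2 if v > 0 else 0.0)

feasible   = [1.0, 1.0, 10.0, 10.0]  # g1 = -1 + 0.193 < 0, no charge
infeasible = [0.1, 1.0, 10.0, 10.0]  # g1 = -0.1 + 0.193 > 0, charged
print(penalized(feasible) == objective(feasible))
print(penalized(infeasible) > objective(infeasible))
```

The remaining three constraints would each get their own violation function and quadratic charge, just as the spring example stacks quadratic_inequality decorators.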
"""
Minimize: f = 2*x[0] + 1*x[1]
Subject to: -1*x[0] + 1*x[1] <= 1
1*x[0] + 1*x[1] >= 2
1*x[1] >= 0
1*x[0] - 2*x[1] <= 4
where: -inf <= x[0] <= inf
"""
def objective(x):
x0,x1 = x
return 2*x0 + x1
equations = """
-x0 + x1 - 1.0 <= 0.0
-x0 - x1 + 2.0 <= 0.0
x0 - 2*x1 - 4.0 <= 0.0
"""
bounds = [(None, None),(0.0, None)]
# with penalty='penalty' applied, solution is:
xs = [0.5, 1.5]
ys = 2.5
from mystic.symbolic import generate_conditions, generate_penalty
pf = generate_penalty(generate_conditions(equations), k=1e3)
if __name__ == '__main__':
from mystic.solvers import fmin_powell
from mystic.math import almostEqual
result = fmin_powell(objective, x0=[0.0,0.0], bounds=bounds,
penalty=pf, disp=True, full_output=True, gtol=3)
print(result[0])
Explanation: Linear and quadratic constraints
End of explanation |
7,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple regression using TensorFlow gradient descent
Step1: Input
We generate a sample of degree 5
Step2: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 5
We build the degree-5 coefficient matrix
Step3: Solution 1
Step4: We plot the error curve per iteration
Solution 2
Step5: Results | Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
%matplotlib inline
import sys
sys.path.append('/home/pedro/git/ElCuadernillo/ElCuadernillo/20160220_TensorFlowRegresionMultiple')
import gradient_descent_tensorflow as gdt
Explanation: Multiple regression using TensorFlow gradient descent
End of explanation
grado=4
tamano=100000
x,y,coeficentes=gdt.generar_muestra(grado,tamano)
print("Coefficients: ", coeficentes)
plt.plot(x,y,'.')
Explanation: Input
We generate a sample of degree 5
End of explanation
train_x = gdt.generar_matriz_coeficientes(x, grado) # matrix A
train_y = np.reshape(y, (y.shape[0], -1)) # column vector
Explanation: Problem
Compute the coefficients that best fit the sample, knowing it is of degree 5
We build the degree-5 coefficient matrix
End of explanation
pesos_gd,ecm,t_c_gd=gdt.regression_gradient_descent(train_x,train_y,diff_error_parada=1e-4)
Explanation: Solution 1: via gradient descent
The fit is computed by minimizing the MSE with GradientDescent
End of explanation
pesos_sgd,ecm,t_c_sgd=gdt.regression_stochastic_gradient_descent(train_x,train_y,1,diff_error_parada=1e-4)
Explanation: We plot the error curve per iteration
Solution 2: via stochastic gradient descent
Much faster for large volumes of data
End of explanation
plt=gdt.grafica_resultados(coeficentes,pesos_gd,pesos_sgd,t_c_gd,t_c_sgd)
plt.show()
(0.339/(100*382))/(8.45/(100000*618))
(100000*618)/(100*382)
0.339/382
0.014158576051779935/0.0008874345549738221
Explanation: Results:
End of explanation |
7,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is the best strategy for guessing?
By Miguel Escalona
Step1: Guess Who!
The game of Guess Who consists of guessing the character your opponent has selected before he/she guesses yours.
The game dynamics are
Step2: 1. Loading the data
For loading the data we will use pandas' read_csv function. Pandas has a long list of data-loading functions. More information in the API documentation.
Step3: 2. How many characters do we have with each feature?
Step4: Question: how many people have a big mouth?
Step5: and how many of these are men?
Step6: 3. Separating the target from the features
Step7: 4. Encoding categorical variables
Step8: 5. Training a decision tree
Step9: 5.1 Obtaining the weight of each feature
Step10: 6. Visualizing the tree (requires graphviz)
If you don't have it installed, you can run
conda install graphviz
in a terminal
Step11: 7. Time to play!
Step12: 8. It gets better!!!
let's try another classifier from sklearn
Step13: Now we create a new character, with whatever features we want...
Step14: we can modify the features of our character by calling the function modifica_feature_de_personaje
Step15: Let's compare what each classifier says
Step16: Let's look at the random forest probabilities | Python Code:
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (15.0, 6.0)
import numpy as np
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
from IPython.display import Image
Explanation: What is the best strategy for guessing?
By Miguel Escalona
End of explanation
Image('data/guess_who_board.jpg', width=700)
Explanation: Guess Who!
The game of Guess Who consists of guessing the character your opponent has selected before he/she guesses yours.
The game dynamics are:
* Each player picks a character at random
* Taking turns, each player asks yes/no questions and tries to guess the opponent's character.
* Valid questions are based on the characters' appearance and should be easy to answer.
* Example of a valid question: Does he have black hair?
* Example of an invalid question: Does he look like an ex-convict?
Next, we load the board with the characters.
End of explanation
# Load the pandas module
# Write your code here to load the data (use read_csv); call the data df
df =
df.head()
Explanation: 1. Loading the data
For loading the data we will use pandas' read_csv function. Pandas has a long list of data-loading functions. More information in the API documentation.
End of explanation
# Separate the variable types
categorical_var = 'color de cabello'
binary_vars = list(set(df.keys()) - set([categorical_var, 'NOMBRE']))
# *** Write your code here ***
# For the boolean variables we compute the sum
# *** Write your code here ***
# For the categorical variables, we look at the frequency of each category
Explanation: 2. How many characters do we have with each feature?
End of explanation
# *** Write your code here ***
Explanation: Question: how many people have a big mouth?
End of explanation
# *** Write your code here ***
Explanation: and how many of these are men?
End of explanation
labels = df['NOMBRE']
del df['NOMBRE']
df.head()
# inspect the target
Explanation: 3. Separating the target from the features
End of explanation
from sklearn.feature_extraction import DictVectorizer
vectorizer = DictVectorizer(sparse=False)
ab=vectorizer.fit_transform(df.to_dict('records'))
dft = pd.DataFrame(ab, columns=vectorizer.get_feature_names())
dft.head().T
Explanation: 4. Encoding categorical variables
End of explanation
from sklearn.tree import DecisionTreeClassifier
clasificador = DecisionTreeClassifier(criterion='entropy', random_state=42)
clasificador.fit(dft, labels)
Explanation: 5. Training a decision tree
End of explanation
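Why 'entropy' as the split criterion? In Guess Who terms, a question is most informative when it splits the remaining characters as evenly as possible. A standalone sketch of the idea (the counts here are made up for illustration, not taken from the real board):

```python
import math

def entropy(counts):
    # Shannon entropy (in bits) of splitting the candidates into groups
    total = float(sum(counts))
    return -sum((c / total) * math.log(c / total, 2) for c in counts if c)

print(entropy([12, 12]))  # an even yes/no split of 24 characters -> 1.0 bit
print(entropy([1, 23]))   # a lopsided question -> only ~0.25 bits
```

The tree greedily picks, at each node, the feature whose split yields the largest information gain, which is exactly the "best question to ask next".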
feat = pd.DataFrame(index=dft.keys(), data=clasificador.feature_importances_, columns=['score'])
feat = feat.sort_values(by='score', ascending=False)
# plot feat, to see the most relevant variables
Explanation: 5.1 Obtaining the weight of each feature
End of explanation
from sklearn.tree import export_graphviz
dotfile = open('quien_es_quien_tree.dot', 'w')
export_graphviz(
clasificador,
out_file = dotfile,
filled=True,
feature_names = dft.columns,
class_names=list(labels),
rotate=True,
max_depth=None,
rounded=True,
)
dotfile.close()
!dot -Tpng quien_es_quien_tree.dot -o quien_es_quien_tree.png
Image('quien_es_quien_tree.png', width=1000)
Explanation: 6. Visualizing the tree (requires graphviz)
If you don't have it installed, you can run
conda install graphviz
in a terminal
End of explanation
# Pick a character by its observation number
observacion_numero = 17
mi_personaje = dft.iloc[observacion_numero]
mi_personaje
personaje = clasificador.predict(mi_personaje)[0]
print('The chosen character is: ' + personaje + ' and it actually is: ' + labels[observacion_numero+1])
Explanation: 7. Time to play!
End of explanation
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(criterion='entropy', random_state=42)
rfc.fit(dft, labels)
Explanation: 8. It gets better!!!
let's try another classifier from sklearn
End of explanation
new_per = np.zeros(len(dft.keys()))
nuevo_personaje = pd.DataFrame(index=dft.keys(), data=new_per, columns=['features']).T
nuevo_personaje.T
def modifica_feature_de_personaje(data, feature, nuevo_valor=1.0):
data[feature] = nuevo_valor
return data
Image('data/guess_who_board.jpg', width=700)
Explanation: Now we create a new character, with whatever features we want...
End of explanation
nuevo_personaje = modifica_feature_de_personaje(nuevo_personaje, 'bigote', 1.0)
nuevo_personaje.T
Explanation: we can modify the features of our character by calling the function modifica_feature_de_personaje
End of explanation
print('The decision tree says it is: ' + clasificador.predict(nuevo_personaje)[0])
print('The random forest thinks it is: ' + rfc.predict(nuevo_personaje)[0])
Explanation: Let's compare what each classifier says
End of explanation
ind = range(24)
plt.bar(ind,rfc.predict_proba(nuevo_personaje)[0])
plt.xticks(ind, labels.values, rotation='vertical')
plt.show()
Explanation: Let's look at the random forest probabilities
End of explanation |
7,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
PHOEBE 2.4 uses autofig 1.2 as an intermediate layer for highend functionality to matplotlib.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: This first line is only necessary for ipython noteboooks - it allows the plots to be shown on this page instead of in interactive mode. Depending on your version of Jupyter, Python, and matplotlib - you may or may not need this line in order to see plots in the notebook.
Step2: First we're going to create some fake observations so that we can show how to plot observational data. In real life, we would use something like np.loadtxt to get arrays from a data file instead.
Step3: Now we'll create a new Bundle and attach an orbit dataset (without observations) and a light curve dataset (with our "fake" observations - see Datasets for more details)
Step4: And run a forward model. See Computing Observables for more details.
Step5: Showing and Saving
NOTE
Step6: Any call to plot returns two objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
Step7: Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
Step8: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
Step9: For more information on each of the available arrays, see the relevant tutorial on that dataset method | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Plotting
This tutorial explains the high-level interface to plotting provided by the Bundle. You are of course always welcome to access arrays and plot manually.
PHOEBE 2.4 uses autofig 1.2 as an intermediate layer for highend functionality to matplotlib.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
logger = phoebe.logger()
Explanation: This first line is only necessary for ipython noteboooks - it allows the plots to be shown on this page instead of in interactive mode. Depending on your version of Jupyter, Python, and matplotlib - you may or may not need this line in order to see plots in the notebook.
End of explanation
b = phoebe.default_binary()
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101))
b.run_compute(irrad_method='none')
times = b.get_value('times', context='model')
fluxes = b.get_value('fluxes', context='model') + np.random.normal(size=times.shape) * 0.01
sigmas = np.ones_like(times) * 0.05
Explanation: First we're going to create some fake observations so that we can show how to plot observational data. In real life, we would use something like np.loadtxt to get arrays from a data file instead.
End of explanation
b = phoebe.default_binary()
b.set_value('q', 0.8)
b.set_value('ecc', 0.1)
b.set_value('incl@orbit', 80)
b.set_value('irrad_method', 'none')
b.add_dataset('orb', compute_times=np.linspace(0,4,1000), dataset='orb01', component=['primary', 'secondary'])
b.add_dataset('lc', times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01')
Explanation: Now we'll create a new Bundle and attach an orbit dataset (without observations) and a light curve dataset (with our "fake" observations - see Datasets for more details):
End of explanation
b.run_compute(irrad_method='none')
Explanation: And run a forward model. See Computing Observables for more details.
End of explanation
afig, mplfig = b.plot(show=True)
Explanation: Showing and Saving
NOTE: in IPython notebooks calling plot will display directly below the call to plot. When not in IPython you have several options for viewing the figure:
call b.show or b.savefig after calling plot.
use the returned autofig and matplotlib figures however you'd like
pass show=True to the plot method.
pass save='myfilename' to the plot method, which is the same as calling plt.savefig('myfilename').
Default Plots
To see the options for plotting that are dataset-dependent see the tutorials on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
By calling the plot method on the Bundle (or any ParameterSet) without any arguments, a plot or series of subplots will be built based on the contents of that ParameterSet.
End of explanation
afig, mplfig = b.filter(dataset='lc01').plot(show=True)
afig, mplfig = b.plot(dataset='lc01', show=True)
Explanation: Any call to plot returns two objects - the autofig and matplotlib figure instances. Generally we won't need to do anything with these, but having them returned could come in handy if you want to manually edit either before drawing/saving the image.
In this example with so many different models and datasets, it is quite simple to build a single plot by filtering the bundle and calling the plot method on the resulting ParameterSet.
End of explanation
afig, mplfig = b.filter(dataset='orb01').plot(x='times', y='vus', show=True)
Explanation: Selecting Arrays
So far, each plotting call automatically chose default arrays from that dataset to plot along each axis. To override these defaults, simply point to the qualifier of the array that you'd like plotted along a given axis.
End of explanation
b.filter(context='model', dataset='orb01').qualifiers
Explanation: To see the list of available qualifiers that could be passed for x or y, call the qualifiers (or twigs) property on the ParameterSet.
End of explanation
afig, mplfig = b.plot(dataset='lc01', x='phases', z=0, show=True)
Explanation: For more information on each of the available arrays, see the relevant tutorial on that dataset method:
ORB dataset
MESH dataset
LC dataset
RV dataset
LP dataset
Selecting Phase
And to plot in phase we just send x='phases' or x='phases:binary'.
Setting x='phases' will use the ephemeris from the top-level of the hierarchy
(as if you called b.get_ephemeris()), whereas passing a string after the colon,
will use the ephemeris of that component.
End of explanation |
7,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partial derivative equations
A linear equation is defined as
Step1: Function to calculate loss
$$ L(b,m) = \frac{1}{2N} \sum_{i=1}^N (b + m x_{i} - y_{i})^2 $$
Step2: Function to calculate gradient
$$ \frac{\partial}{\partial b} L(b,m) = \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) $$
$$ \frac{\partial}{\partial m} L(b,m) = \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) x_{i} $$
Step3: Prepare data
Step4: Fit ourselves
Step5: Fit using scikit
Step6: Compare fits
Step7: Plot our fit and scikit fit
Step8: Gradient descent animation
Step9: Example from Udacity | Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import animation, rc
from sklearn.linear_model import LinearRegression
from IPython.display import HTML
Explanation: Partial derivative equations
A linear equation is defined as:
$$ y(x_{0},x_{1},\ldots,x_{n}) = b + m_{0} x_{0} + m_{1} x_{1} + \ldots + m_{n} x_{n} $$
To simplify, let's consider one slope m and one intercept b:
$$ y(x) = b + m x $$
Loss function is defined as:
$$ L(b,m)
= \frac{1}{2N} \sum_{i=1}^N (y(x_{i}) - y_{i})^2
= \frac{1}{2N} \sum_{i=1}^N (b + m x_{i} - y_{i})^2 $$
Let
$$ g(x) = \frac{1}{2N} \sum_{i=1}^N x^2 $$
$$ f(b,m) = b + m x_{i} - y_{i} $$
Then we can rewrite loss function as:
$$ L(b,m) = \frac{1}{2N} \sum_{i=1}^N f(b,m)^2 = g(f(b,m)) $$
To calculate the gradient, we use chain rule:
$$ \frac{\partial}{\partial x} g(x)
= \frac{\partial}{\partial x} (\frac{1}{2N} \sum_{i=1}^N x^2)
= \frac{2}{2N} \sum_{i=1}^N x
= \frac{1}{N} \sum_{i=1}^N x $$
$$ \frac{\partial}{\partial b} f(b,m)
= \frac{\partial}{\partial b} (b + m x_{i} - y_{i})
= b^0 + 0 - 0
= 1 $$
$$ \frac{\partial}{\partial m} f(b,m)
= \frac{\partial}{\partial m} (b + m x_{i} - y_{i})
= 0 + x_{i} m^0 - 0
= x_{i} $$
$$ \frac{\partial}{\partial b} L(b,m)
= \frac{\partial}{\partial b} g(f(b,m))
= g'(f(b,m)) \cdot \frac{\partial}{\partial b} f(b,m)
= \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) \cdot 1
= \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) $$
$$ \frac{\partial}{\partial m} L(b,m)
= \frac{\partial}{\partial m} g(f(b,m))
= g'(f(b,m)) \cdot \frac{\partial}{\partial m} f(b,m)
= \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) \cdot x_{i}
= \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) x_{i} $$
End of explanation
def loss(b, m, sample_x, sample_y):
total_error = 0.0
n = len(sample_x)
for i in range(0, n):
x = sample_x[i]
y = sample_y[i]
total_error += (b + m * x - y) ** 2
return total_error / (2.0 * float(n))
Explanation: Function to calculate loss
$$ L(b,m) = \frac{1}{2N} \sum_{i=1}^N (b + m x_{i} - y_{i})^2 $$
End of explanation
def gradient(b, m, sample_x, sample_y):
gradient_b = 0.0
gradient_m = 0.0
n = len(sample_x)
for i in range(0, n):
x = sample_x[i]
y = sample_y[i]
tmp = b + m * x - y
gradient_b += tmp
gradient_m += tmp * x
gradient_b = gradient_b / (float(n))
gradient_m = gradient_m / (float(n))
return gradient_b, gradient_m
Explanation: Function to calculate gradient
$$ \frac{\partial}{\partial b} L(b,m) = \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) $$
$$ \frac{\partial}{\partial m} L(b,m) = \frac{1}{N} \sum_{i=1}^N (b + m x_{i} - y_{i}) x_{i} $$
End of explanation
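The loop above can also be written as a vectorized NumPy computation of the same two sums (a sketch; it assumes sample_x and sample_y are 1-D arrays, as elsewhere in this notebook):

```python
import numpy as np

def gradient_vectorized(b, m, sample_x, sample_y):
    # residual_i = b + m*x_i - y_i, computed for all i at once
    residual = b + m * sample_x - sample_y
    # the two gradient formulas are just means of residual and residual*x
    return residual.mean(), (residual * sample_x).mean()

# quick self-check against the formulas on a tiny sample
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
print(gradient_vectorized(0.0, 2.0, x, y))  # exact fit -> (0.0, 0.0)
```

Avoiding the Python-level loop makes each gradient step much cheaper on large samples without changing the result.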
sample_points = np.genfromtxt("linear_regression_data.csv", delimiter=",")
sample_x = sample_points[:, 0]
sample_y = sample_points[:, 1]
predict_x = np.linspace(20, 80, num=100)
Explanation: Prepare data
End of explanation
def our_fit(learning_rate, num_iterations, b_init, m_init):
b = b_init
m = m_init
b_intermediate = []
m_intermediate = []
for i in range(num_iterations):
gradient_b, gradient_m = gradient(b, m, sample_x, sample_y)
b -= learning_rate * gradient_b
m -= learning_rate * gradient_m
# store intermediate value for later animation
b_intermediate.append(b)
m_intermediate.append(m)
return b, m, b_intermediate, m_intermediate
learning_rate = 0.00001
num_iterations = 2000
our_b_init = 0
our_m_init = 0
our_b, our_m, our_b_intermediate, our_m_intermediate = our_fit(learning_rate, num_iterations, our_b_init, our_m_init)
print("our fit: b=%f m=%f loss=%f" % (our_b_init, our_m_init, loss(our_b_init, our_m_init, sample_x, sample_y)))
print("our fit: b=%f m=%f loss=%f" % (our_b, our_m, loss(our_b, our_m, sample_x, sample_y)))
def our_predict(x):
return our_b + our_m * x
Explanation: Fit ourselves
End of explanation
sci_model = LinearRegression()
sci_model.fit(sample_x.reshape(-1, 1), sample_y.reshape(-1, 1))
sci_b = sci_model.intercept_[0]
sci_m = sci_model.coef_[0]
print("sci fit: b=%f m=%f loss=%f" % (sci_b, sci_m, loss(sci_b, sci_m, sample_x, sample_y)))
def sci_predict(x):
return sci_b + sci_m * x
Explanation: Fit using scikit
End of explanation
print("b: our=%f sci=%f diff=%f" % (our_b, sci_b, abs(our_b - sci_b)))
print("m: our=%f sci=%f diff=%f" % (our_m, sci_m, abs(our_m - sci_m)))
def compare_predict(x):
our_y = our_predict(x)
sci_y = sci_predict(x)
print("y(%f): our=%10.6f sci=%10.6f diff=%f" % (x, our_y, sci_y, abs(our_y - sci_y)))
compare_predict(10.23456)
compare_predict(40.23454)
compare_predict(32.39456)
compare_predict(88.23453)
Explanation: Compare fits
End of explanation
plt.figure(figsize=(10,10))
plt.plot(sample_x, sample_y, "bo", label="sample points")
plt.plot(predict_x, sci_predict(predict_x), "g-", label="scikit fit")
plt.plot(predict_x, our_predict(predict_x), "r-", label="our fit")
plt.legend(loc='upper left', frameon=False)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Compare our fit vs scikit-learn fit")
Explanation: Plot our fit and scikit fit
End of explanation
# create graph objects for animation
fig, ax = plt.subplots(figsize=(10,10))
plt.axis([20, 80, 25, 130])
plt.plot(sample_x, sample_y, "bo", label="sample points")
plt.plot(predict_x, sci_predict(predict_x), "g-", label="scikit fit")
line, = ax.plot([], [], "r-", lw=2, label="our fit")
plt.legend(loc='upper left', frameon=False)
plt.xlabel("x")
plt.ylabel("y")
plt.title("gradient descent animation")
points = np.genfromtxt("linear_regression_data.csv", delimiter=",")
# first prepare animation data
learning_rate = 0.00001
num_iterations = 250
b_init = 0
m_init = 0
b, m, b_intermediate, m_intermediate = our_fit(learning_rate, num_iterations, b_init, m_init)
def linear(b, m):
return b + m * predict_x
# initialization function: plot the background of each frame
def init():
line.set_data(predict_x, linear(b_init, m_init))
return (line,)
# animation function. This is called sequentially
def animate(i):
line.set_data(predict_x, linear(b_intermediate[i], m_intermediate[i]))
return (line,)
# call the animator. blit=True means only re-draw the parts that have changed.
rc('animation', html='html5')
animation.FuncAnimation(fig, animate, init_func=init, frames=num_iterations, interval=20, blit=True)
Explanation: Gradient descent animation
End of explanation
bmi_life_data = pd.read_csv("bmi_and_life_expectancy.csv")
bmi_life_model = LinearRegression()
bmi_life_model.fit(bmi_life_data[['BMI']], bmi_life_data[['Life expectancy']])
# Predict life expectancy for a BMI value of 21.07931
laos_life_exp = bmi_life_model.predict([[21.07931]])[0][0]
print("life expectancy of Laos is %f" % laos_life_exp)
Explanation: Example from Udacity
End of explanation |
7,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transliteration
Transliteration is the conversion of a text from one script to another.
For instance, a Latin transliteration of the Greek phrase "Ελληνική Δημοκρατία", usually translated as 'Hellenic Republic', is "Ellēnikḗ Dēmokratía".
Step1: Languages Coverage
Step2: Downloading Necessary Models
Step4: Example
We transliterate each word in the text into the target script.
Step5: We can query all the transliterations
Step6: Command Line Interface | Python Code:
from polyglot.transliteration import Transliterator
Explanation: Transliteration
Transliteration is the conversion of a text from one script to another.
For instance, a Latin transliteration of the Greek phrase "Ελληνική Δημοκρατία", usually translated as 'Hellenic Republic', is "Ellēnikḗ Dēmokratía".
End of explanation
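Before reaching for polyglot, it can help to see the simplest degenerate case. This sketch (standard library only, my own illustration, not polyglot's approach) merely strips diacritics from an already-romanized string; mapping between scripts, as in the Greek example above, needs a model-based transliterator like polyglot's:

```python
import unicodedata

def strip_diacritics(text):
    # NFKD splits accented letters into base letter + combining marks,
    # and the ascii/ignore encode step throws the marks away
    decomposed = unicodedata.normalize('NFKD', text)
    return decomposed.encode('ascii', 'ignore').decode('ascii')

print(strip_diacritics('Ellēnikḗ Dēmokratía'))  # Ellenike Demokratia
```

Notice that the vowel-length marks are simply lost, which is one reason real transliteration is a modeling problem rather than a character-table lookup.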
from polyglot.downloader import downloader
print(downloader.supported_languages_table("transliteration2"))
Explanation: Languages Coverage
End of explanation
%%bash
polyglot download embeddings2.en transliteration2.ar
Explanation: Downloading Necessary Models
End of explanation
from polyglot.text import Text
blob = We will meet at eight o'clock on Thursday morning.
text = Text(blob)
Explanation: Example
We tag each word in the text with one part of speech.
End of explanation
for x in text.transliterate("ar"):
print(x)
Explanation: We can query all the tagged words
End of explanation
!polyglot --lang en tokenize --input testdata/cricket.txt | polyglot --lang en transliteration --target ar | tail -n 30
Explanation: Command Line Interface
End of explanation |
7,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Oriented Programming
What is an Object?
First some semantics
Step1: Note the reference to object, this means that our new class inherits from object. We won't be going into too much detail about inheritance, but for now you should always inherit from object when defining a class.
Once a class is defined you can create an instance of that class, which is an object. In Python we do this by calling the class name as if it were a function
Step2: A class can store some data (after all, an empty class isn't very interesting!)
Step3: We can access variables stored in a class by writing the name of the instance followed by a dot and then the name of the variable
Step4: Classes can also contain functions. Functions attached to classes are called methods
Step5: The first argument to every method automatically refers to the object we're calling the method on; by convention we call that argument self.
Step6: Notice we don't have to pass the self argument; Python's object system does this for you.
Some methods are called special methods. Their names start and end with a double underscore. A particularly useful special method is __init__, which initializes an object.
Step7: The __init__ method is called when we create an instance of a class. Now when we call the class name we can pass the arguments required by __init__
Step8: Methods on an object have access to the variables defined on the object
class A(object):
pass
Explanation: Object Oriented Programming
What is an Object?
First some semantics:
- An object is essentially a container which holds some data, and crucially some associated methods for working with that data.
- We define objects, and their behaviours, using something called a class.
- We create objects by instantiating classes, so, objects are instances of classes.
Note, these are very similar to structures, with associated functions attached.
Why do we need objects?
This is all very nice, but why bother with the overhead and confusion of objects and classes? People have been working with functional programs for decades and they seem to work!
A few core ideas:
Modularity
Separation of concerns
Abstraction over complex mechanisms
We've used a lot of objects already!
Most of the code we've been using already has made heavy use of object-oriented programming:
NumPy arrays are objects (with attributes like shape and methods like mean())
Iris cubes are objects
CIS datasets are objects
Matplotlib axes/figures/lines etc. are all objects
Object-Oriented Programming in Python
In many languages we're forced into using classes and objects for everything (e.g. Java and C#), but some languages don't support objects at all (e.g. R and Fortran 77).
In python we have (in my opinion) a nice half-way house, we have a full OO implementation when we need it (including multiple inheritance, abstract classes etc), but we can use functional code when it's more desirable to do so.
Defining a class in Python is easy:
End of explanation
a_object = A()
print(type(a_object))
Explanation: Note the reference to object, this means that our new class inherits from object. We won't be going into too much detail about inheritance, but for now you should always inherit from object when defining a class.
Once a class is defined you can create an instance of that class, which is an object. In Python we do this by calling the class name as if it were a function:
End of explanation
class B(object):
value = 1
Explanation: A class can store some data (after all, an empty class isn't very interesting!):
End of explanation
b_object = B()
print(b_object.value)
Explanation: We can access variables stored in a class by writing the name of the instance followed by a dot and then the name of the variable:
End of explanation
class B(object):
value = 1
def show_value(self):
print('self.value is {}'.format(self.value))
Explanation: Classes can also contain functions. Functions attached to classes are called methods:
End of explanation
b1 = B()
b1.show_value()
b1.value = 999
b1.show_value()
Explanation: The first argument to every method automatically refers to the object we're calling the method on; by convention we call that argument self.
End of explanation
class C(object):
def __init__(self, value):
self.var = value
Explanation: Notice we don't have to pass the self argument; Python's object system does this for you.
Some methods are called special methods. Their names start and end with a double underscore. A particularly useful special method is __init__, which initializes an object.
End of explanation
c1 = C("Python!")
c2 = C("Hello")
print(c1.var)
print(c2.var)
Explanation: The __init__ method is called when we create an instance of a class. Now when we call the class name we can pass the arguments required by __init__:
End of explanation
class Counter(object):
def __init__(self, start=0):
self.value = start
def increment(self):
self.value += 1
counter1 = Counter()
print(counter1.value)
counter1.increment()
print(counter1.value)
counter2 = Counter(start=10)
counter2.increment()
counter2.increment()
print(counter2.value)
Explanation: Methods on an object have access to the variables defined on the object:
End of explanation |
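__init__ is not the only special method worth knowing. As a small extension of the Counter class (my addition, not part of the original lesson), __repr__ controls how instances print and __eq__ defines what == means for them:

```python
class Counter(object):
    def __init__(self, start=0):
        self.value = start

    def increment(self):
        self.value += 1

    def __repr__(self):       # how the object displays itself
        return 'Counter(value={})'.format(self.value)

    def __eq__(self, other):  # what == means for Counters
        return isinstance(other, Counter) and self.value == other.value

c = Counter(start=3)
c.increment()
print(c)                # Counter(value=4)
print(c == Counter(4))  # True
```

Without __repr__, printing c would show the default, unhelpful `<__main__.Counter object at 0x...>` form.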
7,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas 3
Step1: <a id=wants></a>
Example
Step2: Reminders
What kind of object does each of the following produce?
Step3: Wants
We might imagine doing several different things with this data
Step4: Comments. The problem here is that the columns include both the numbers (which we want to plot) and some descriptive information (which we don't).
<a id='index'></a>
Setting and resetting the index
We start by setting and resetting the index. That may sound like a step backwards -- haven't we done this already? -- but it reminds us of some things that will be handy later.
Take the dataframe dd. What would we like in the index? Eventually we'd like the dates [2011, 2012, 2013], but right now the row labels are more naturally the variable or country. Here are some variants.
Setting the index
Step5: Exercise. Set Variable as the index.
Comment. Note that the new index brought its name along
Step6: Let's take a closer look at the index
Step7: That's a lot to process, so we break it into pieces.
ddi.index.names contains a list of level names. (Remind yourself that lists are ordered, so this tracks levels.)
ddi.index.levels contains the values in each level.
Here's what they look like here
Step8: Knowing the order of the index components and being able to inspect their values and names is fundamental to working with a multi-index.
Resetting the index
We've seen that set_index pushes columns into the index. Here we see that reset_index does the reverse
Step9: Comment. By default, reset_index pushes one or more index levels into columns. If we want to discard that level of the index altogether, we use the parameter drop=True.
Step10: Exercise. For the dataframe ddi
Step11: Comment. We see here that the multi-index for the rows has been turned into a multi-index for the columns. Works the same way.
The only problem here is that the column labels are more complicated than we might want. Here, for example, is what we get with the plot method. As usual, .plot() plots all the columns of the dataframe, but here that means we're mixing variables. And the legend contains all the levels of the column labels.
Step12: Comment. Ooooh, that's ugly! We're on the right track, but evidently not there yet.
Referring to variables with a multi-index
Can we refer to variables in the same way? Sort of, as long as we refer to the top level of the column index. It gives us a dataframe that's a subset of the original one.
Let's try each of these
Step13: Exercise. With the dataframe ddt
Step14: Exercise. Use the dataframe ddts to plot Debt and Surplus across time for Argentina. Hint
Step15: Exercise. Use a combination of xs and standard slicing with [...] to extract the variable Debt for Greece.
Exercise. Use the dataframe ddt -- and the xs method -- to plot Debt and Surplus across time for Argentina.
<a id='stack'></a>
Stacking and unstacking
The set_index and reset_index methods work on the row labels -- the index. They move columns to the index and the reverse. The stack and unstack methods move index levels to and from column levels
Step16: Let's remind ourselves what we want. We want to
move the column index (Year) into the row index
move the Variable and ISO levels the other way, into the column labels.
The first one uses stack, the second one unstack.
Stacking
We stack our data, one variable on top of another, with a multi-index to keep track of what's what. In simple terms, we change the data from a wide format to a long format. The stack method takes the lowest column level and makes it the lowest row level.
Step17: Unstacking
Stacking moves columns into the index, "stacking" the data up into longer columns. Unstacking does the reverse, taking levels of the row index and turning them into column labels. Roughly speaking we're rotating or pivoting the data.
Step18: Exercise. Run the code below and explain what each line of code does.
Step19: Exercise (challenging). Take the unstacked dataframe dds. Use some combination of stack, unstack, and plot to plot the variable Surplus against Year for all three countries. Challenging mostly because you need to work out the steps by yourself.
<a id='pivot'></a>
Pivoting
The pivot method
Step20: Pivoting the data
Let's think specifically about what we want. We want to graph Emp against fsize for (say) 2013. This calls for
Step21: Comment. Note that all the parameters here are columns. That's not a choice, it's the way the pivot method is written.
We do a plot for fun
Step22: <a id='review'></a>
Review
We return to the OECD's healthcare data, specifically a subset of their table on the number of doctors per one thousand population. This loads and cleans the data | Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
Explanation: Pandas 3: Shaping data
The second in a series of notebooks that describe Pandas' powerful data management tools. This one covers shaping methods: switching rows and columns, pivoting, and stacking. We'll see that this is all about the indexes: the row and column labels.
Outline:
Example: WEO debt and deficits. Something to work with.
Indexing. Setting and resetting the index. Multi-indexes.
Switching rows and columns. Transpose. Referring to variables with multi-indexes.
Stack and unstack. Managing column structure and labels.
Pivot. Unstack shortcut if we start with wide data.
Review. Apply what we've learned.
More data management topics coming.
Note: requires internet access to run.
<!--
internal links http://sebastianraschka.com/Articles/2014_ipython_internal_links.html
-->
This IPython notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course Data Bootcamp.
<a id=prelims></a>
Preliminaries
Import packages, etc.
End of explanation
url1 = 'http://www.imf.org/external/pubs/ft/weo/2015/02/weodata/'
url2 = 'WEOOct2015all.xls'
url = url1 + url2
weo = pd.read_csv(url, sep='\t',
usecols=[1,2,3,4,6,40,41,42],
thousands=',',
na_values=['n/a', '--'])
print('Variable dtypes:\n', weo.dtypes, sep='')
# create debt and deficits dataframe: two variables and three countries
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['ARG', 'DEU', 'GRC']
dd = weo[weo['WEO Subject Code'].isin(variables) & weo['ISO'].isin(countries)]
# change column labels to something more intuitive
dd = dd.rename(columns={'WEO Subject Code': 'Variable',
'Subject Descriptor': 'Description'})
# rename variables
dd['Variable'] = dd['Variable'].replace(to_replace=['GGXWDG_NGDP', 'GGXCNL_NGDP'],
value=['Debt', 'Surplus'])
dd
Explanation: <a id=wants></a>
Example: WEO debt and deficits
We spend most of our time on one of the examples from the previous notebook. The problem in this example is that variables run across rows, rather than down columns. Our want is to flip some of the rows and columns so that we can plot the data against time. The question is how.
We use a small subset of the IMF's World Economic Outlook database that contains two variables and three countries.
End of explanation
dd.index
dd.columns
dd['ISO']
dd[['ISO', 'Variable']]
dd[dd['ISO'] == 'ARG']
Explanation: Reminders
What kind of object does each of the following produce?
End of explanation
dd.T
Explanation: Wants
We might imagine doing several different things with this data:
Plot a specific variable (debt or surplus) for a given date.
Time series plots for a specific country.
Time series plots for a specific variable.
Depending on which we want, we might organize the data differently. We'll focus on the last two.
Here's a brute force approach to the problem: simply transpose the data. This is where that leads:
End of explanation
dd.set_index('Country')
# we can do the same thing with a list, which will be meaningful soon...
dd.set_index(['Country'])
Explanation: Comments. The problem here is that the columns include both the numbers (which we want to plot) and some descriptive information (which we don't).
<a id='index'></a>
Setting and resetting the index
We start by setting and resetting the index. That may sound like a step backwards -- haven't we done this already? -- but it reminds us of some things that will be handy later.
Take the dataframe dd. What would we like in the index? Eventually we'd like the dates [2011, 2012, 2013], but right now the row labels are more naturally the variable or country. Here are some variants.
Setting the index
End of explanation
ddi = dd.set_index(['Variable', 'Country', 'ISO', 'Description', 'Units'])
ddi
Explanation: Exercise. Set Variable as the index.
Comment. Note that the new index brought its name along: Country in the two examples, Variable in the exercise. That's incredibly useful because we can refer to index levels by name. If we happen to have an index without a name, we can set it with
python
df.index.name = 'Whatever name we like'
Multi-indexes
We can put more than one variable in an index, which gives us a multi-index. This is sometimes called a hierarchical index because the levels of the index (as they're called) are ordered.
Multi-indexes are more common than you might think. One reason is that data itself is often multi-dimensional. A typical spreadsheet has two dimensions: the variable and the observation. The WEO data is naturally three dimensional: the variable, the year, and the country. (Think about that for a minute, it's deeper than it sounds.)
The problem we're having is fitting this nicely into two dimensions. A multi-index allows us to manage that. A two-dimensional index would work here -- the country and the variable code -- but right now we have some redundancy.
Example. We push all the descriptive, non-numerical columns into the index, leaving the dataframe itself with only numbers, which seems like a step in the right direction.
End of explanation
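To see the idea in isolation, here is a toy multi-index built from scratch with made-up numbers (this frame is only an illustration, not the WEO data): three conceptual dimensions, variable, country, and year, fit into two physical ones because a two-level row index carries the first two.

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product(
    [['Debt', 'Surplus'], ['ARG', 'DEU', 'GRC']],
    names=['Variable', 'ISO'])
toy = pd.DataFrame(np.arange(18).reshape(6, 3),
                   index=idx, columns=[2011, 2012, 2013])
print(toy.index.nlevels)   # 2
print(toy.loc['Debt'])     # the sub-frame for one value of the top level
```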
ddi.index
Explanation: Let's take a closer look at the index
End of explanation
# Chase and Spencer like double quotes
print("The level names are:\n", ddi.index.names, "\n", sep="")
print("The levels (aka level values) are:\n", ddi.index.levels, sep="")
Explanation: That's a lot to process, so we break it into pieces.
ddi.index.names contains a list of level names. (Remind yourself that lists are ordered, so this tracks levels.)
ddi.index.levels contains the values in each level.
Here's what they look like here
End of explanation
ddi.head(2)
ddi.reset_index()
# or we can reset the index by level
ddi.reset_index(level=1).head(2)
# or by name
ddi.reset_index(level='Country').head(2)
# or do more than one at a time
ddi.reset_index(level=[1,3]).head(2)
Explanation: Knowing the order of the index components and being able to inspect their values and names is fundamental to working with a multi-index.
Resetting the index
We've seen that set_index pushes columns into the index. Here we see that reset_index does the reverse: it pushes components of the index back to the columns.
Example.
End of explanation
ddi.reset_index(level=[1,3], drop=True).head(2)
Explanation: Comment. By default, reset_index pushes one or more index levels into columns. If we want to discard that level of the index altogether, we use the parameter drop=True.
End of explanation
ddt = ddi.T
ddt
Explanation: Exercise. For the dataframe ddi:
Use the reset_index method to move the Units level of the index to a column of the dataframe.
Use the drop parameter of reset_index to delete Units from the dataframe.
Switching rows and columns
If we take the dataframe ddi, we see that the everything's been put into the index but the data itself. Perhaps we can get what we want if we just flip the rows and columns. Roughly speaking, we refer to this as pivoting.
First look at switching rows and columns
The simplest way to flip rows and columns is to use the T or transpose property. When we do that, we end up with a lot of stuff in the column labels, as the multi-index for the rows gets rotated into the columns. Other than that, we're good. We can even do a plot. The only problem is all the stuff we've pushed into the column labels -- it's kind of a mess.
End of explanation
ddt.plot()
Explanation: Comment. We see here that the multi-index for the rows has been turned into a multi-index for the columns. Works the same way.
The only problem here is that the column labels are more complicated than we might want. Here, for example, is what we get with the plot method. As usual, .plot() plots all the columns of the dataframe, but here that means we're mixing variables. And the legend contains all the levels of the column labels.
End of explanation
# indexing by variable
debt = ddt['Debt']
Explanation: Comment. Ooooh, that's ugly! We're on the right track, but evidently not there yet.
Referring to variables with a multi-index
Can we refer to variables in the same way? Sort of, as long as we refer to the top level of the column index. It gives us a dataframe that's a subset of the original one.
Let's try each of these:
ddt['Debt']
ddt['Debt']['Argentina']
ddt['Debt', 'Argentina']
ddt['ARG']
What do you see? What's going on? The theme is that we can reference the top level, which in ddi is the Variable. If we try to access a lower level, it bombs.
End of explanation
ddts = ddt.swaplevel(0,1, axis=1)
ddts
Explanation: Exercise. With the dataframe ddt:
What type of object is Debt?
Construct a line plot of Debt over time with one line for each country.
Example. Let's do this together. How would we fix up the legend? What approaches cross your mind? (No code, just the general approach.)
Swapping levels
Since variables refer to the first level of the column index, it's not clear how we would group data by country. Suppose, for example, we wanted to plot Debt and Surplus for a specific country. What would we do?
One way to do that is to make the country the top level with the swaplevel method. Note the axis parameter. With axis=1 we swap column levels, with axis=0 (the default) we swap row levels.
End of explanation
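A toy example of the same move, with made-up numbers, shows what swaplevel buys us: once Country is the top column level, we can select a country directly.

```python
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([['Debt', 'Surplus'], ['ARG', 'DEU']],
                                  names=['Variable', 'Country'])
toy = pd.DataFrame(np.arange(8).reshape(2, 4), columns=cols)
swapped = toy.swaplevel(0, 1, axis=1)   # Country is now the top column level
print(list(swapped.columns.names))      # ['Country', 'Variable']
print(swapped['ARG'])                   # select by country first
```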
#ddt.xs?
ddt.xs("Argentina", axis=1, level="Country")
ddt.xs("Argentina", axis=1, level="Country")["Debt"]
Explanation: Exercise. Use the dataframe ddts to plot Debt and Surplus across time for Argentina. Hint: In the plot method, set subplots=True so that each variable is in a separate subplot.
The xs method
Another approach to extracting data that cuts across levels of the row or column index: the xs method. This is a recent addition to Pandas and an extremely good method once you get the hang of it.
The basic syntax is df.xs(item, axis=X, level=N), where N is the name or number of an index level and X describes if we are extracting from the index or column names. Setting X=0 (so axis=0) will slice up the data along the index, X=1 extracts data for column labels.
Here's how we could use xs to get the Argentina data without swapping the level of the column labels
End of explanation
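All the xs calls above use axis=1. For completeness, here's a toy example (made-up numbers) of the axis=0 case, slicing across a row multi-index instead:

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([['Debt', 'Surplus'], ['ARG', 'DEU']],
                                 names=['Variable', 'ISO'])
toy = pd.DataFrame(np.arange(12).reshape(4, 3),
                   index=idx, columns=[2011, 2012, 2013])
# axis=0: slice the row index, keeping every row where ISO == 'ARG'
arg = toy.xs('ARG', axis=0, level='ISO')
print(arg.shape)        # (2, 3)
print(list(arg.index))  # ['Debt', 'Surplus']
```

Note that xs drops the level it matched on, leaving the remaining level (Variable) as the index.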
# drop some of the index levels (think s for small)
dds = ddi.reset_index(level=[1,3,4], drop=True)
# give a name to the column labels
dds.columns.name = 'Year'
dds
Explanation: Exercise. Use a combination of xs and standard slicing with [...] to extract the variable Debt for Greece.
Exercise. Use the dataframe ddt -- and the xs method -- to plot Debt and Surplus across time for Argentina.
<a id='stack'></a>
Stacking and unstacking
The set_index and reset_index methods work on the row labels -- the index. They move columns to the index and the reverse. The stack and unstack methods move index levels to and from column levels:
stack stacks the data up, moving the columns to the index and creating a long dataframe.
unstack does the reverse, moving columns or index levels into the column labels and creating a wide dataframe.
We use both to shape (or reshape) our data. We use set_index to push things into the index. And then use reset_index to push some of them back to the columns. That gives us pretty fine-grained control over the shape of our data.
We start by simplifying our initial dataframe.
End of explanation
# convert to long format. Notice printing is different... what `type` is ds?
ds = dds.stack()
ds
# same thing with explicit reference to column name
dds.stack(level='Year').head(8)
# or with level number
dds.stack(level=0).head(8)
Explanation: Let's remind ourselves what we want. We want to
move the column index (Year) into the row index
move the Variable and ISO levels the other way, into the column labels.
The first one uses stack, the second one unstack.
Stacking
We stack our data, one variable on top of another, with a multi-index to keep track of what's what. In simple terms, we change the data from a wide format to a long format. The stack method takes the lowest column level and makes it the lowest row level.
End of explanation
# now go long to wide
ds.unstack() # default is lowest value level='ISO'
# different level
ds.unstack(level='Variable')
# or two at once
ds.unstack(level=['Variable', 'ISO'])
Explanation: Unstacking
Stacking moves columns into the index, "stacking" the data up into longer columns. Unstacking does the reverse, taking levels of the row index and turning them into column labels. Roughly speaking we're rotating or pivoting the data.
End of explanation
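A toy round trip (made-up numbers) makes the symmetry concrete: stack turns a wide frame into a long series, and unstack undoes it exactly.

```python
import numpy as np
import pandas as pd

wide = pd.DataFrame(np.arange(6).reshape(2, 3),
                    index=['Debt', 'Surplus'],
                    columns=[2011, 2012, 2013])
long = wide.stack()           # Series with a two-level (row, column) index
round_trip = long.unstack()   # lowest index level goes back to the columns
print(round_trip.equals(wide))  # True
```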
# stacked dataframe
ds.head(8)
du1 = ds.unstack()
du2 = du1.unstack()
Explanation: Exercise. Run the code below and explain what each line of code does.
End of explanation
url = 'http://www2.census.gov/ces/bds/firm/bds_f_sz_release.csv'
raw = pd.read_csv(url)
raw.head()
sizes = ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49']
bds = raw[(raw['year2']>=2012) & raw['fsize'].isin(sizes)][['year2', 'fsize', 'Firms', 'Emp']]
bds
Explanation: Exercise (challenging). Take the unstacked dataframe dds. Use some combination of stack, unstack, and plot to plot the variable Surplus against Year for all three countries. Challenging mostly because you need to work out the steps by yourself.
<a id='pivot'></a>
Pivoting
The pivot method: a short cut to some kinds of unstacking. In rough terms, it takes a long dataframe and constructs a wide one. The inputs are columns, not index levels.
Example: BDS data
The Census's Business Dynamics Statistics collects annual information about the hiring decisions of firms by size and age. This table lists the number of firms and total employment by employment size categories: 1 to 4 employees, 5 to 9, and so on.
Apply want operator. Our want is to plot total employment (the variable Emp) against size (variable fsize). Both are columns in the original data.
Here we construct a subset of the data, where we look at two years rather than the whole 1976-2013 period.
End of explanation
# pivot and divide by a million (dividing so bars aren't too long)
bdsp = bds.pivot(index='fsize', columns='year2', values='Emp')/10**6
bdsp
Explanation: Pivoting the data
Let's think specifically about what we want. We want to graph Emp against fsize for (say) 2013. This calls for:
The index should be the size categories fsize.
The column labels should be the entries of year2, namely 2012 and 2013.
The data should come from the variable Emp.
These inputs translate directly into the following pivot method:
End of explanation
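The same recipe on a toy long table with made-up numbers, shaped like bds, shows the mechanics in miniature:

```python
import pandas as pd

# a long table: one row per (year, size) pair
toy = pd.DataFrame({'year2': [2012, 2012, 2013, 2013],
                    'fsize': ['a) 1 to 4', 'b) 5 to 9'] * 2,
                    'Emp':   [10, 20, 11, 22]})
wide = toy.pivot(index='fsize', columns='year2', values='Emp')
print(wide.loc['a) 1 to 4', 2013])  # 11
```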
# plot 2013 as bar chart
fig, ax = plt.subplots()
bdsp[2013].plot.barh(ax=ax)
ax.set_ylabel('')
ax.set_xlabel('Number of Employees (millions)')
Explanation: Comment. Note that all the parameters here are columns. That's not a choice, it's the way the pivot method is written.
We do a plot for fun:
End of explanation
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2015-Frequently-Requested-Data.xls'
docs = pd.read_excel(url1+url2,
skiprows=3,
usecols=[0, 51, 52, 53, 54, 55, 57],
sheet_name='Physicians',
na_values=['..'],
skipfooter=21)
# rename country variable
names = list(docs)
docs = docs.rename(columns={names[0]: 'Country'})
# strip footnote numbers from country names
docs['Country'] = docs['Country'].str.rsplit(n=1).str.get(0)
docs = docs.head()
docs
Explanation: <a id='review'></a>
Review
We return to the OECD's healthcare data, specifically a subset of their table on the number of doctors per one thousand population. This loads and cleans the data:
End of explanation |
7,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying a 3D convolutional neural network to the data
Welcome everyone to my coverage of the Kaggle Data Science Bowl 2017. My goal here is that anyone, even people new to kaggle, can follow along. If you are completely new to data science, I will do my best to link to tutorials and provide information on everything you need to take part.
This notebook is my actual personal initial run through this data and my notes along the way. I am by no means an expert data analyst, statistician, and certainly not a doctor. This initial pass is not going to win the competition, but hopefully it can serve as a starting point or, at the very least, you can learn something new along with me.
This is a "raw" look into the actual code I used on my first pass, there's a ton of room for improvement. If you see something that you could improve, share it with me!
Quick introduction to Kaggle
Step1: At this point, we've got the list of patients by their IDs, and their associated labels stored in a dataframe. Now, we can begin to iterate through the patients and gather their respective data. We're almost certainly going to need to do some preprocessing of this data, but we'll see.
Step2: Above, we iterate through each patient, we grab their label, we get the full path to that specific patient (inside THAT path are ~200ish scans which we also iterate over, BUT also want to sort, since they won't necessarily be in proper order).
Do note here that the actual scan, when loaded by dicom, is clearly not JUST some sort of array of values, instead it's got attributes. There are a few attributes here of arrays, but not all of them. We're sorting by the actual image position in the scan. Later, we could actually put these together to get a full 3D rendering of the scan. That's not in my plans here, since that's already been something covered very well, see this kernel
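The sort key being described is the slice's position along the patient's z-axis. A sketch of that sorting logic, tested here on stand-in objects rather than the real datasets you'd get from the dicom/pydicom reader:

```python
from types import SimpleNamespace

def sort_slices(slices):
    """Order CT slices along the patient's z-axis (ImagePositionPatient[2])."""
    return sorted(slices, key=lambda s: float(s.ImagePositionPatient[2]))

# stand-ins for dicom datasets, which carry the same attribute
slices = [SimpleNamespace(ImagePositionPatient=[0, 0, z])
          for z in (30.5, 10.0, 20.25)]
ordered = sort_slices(slices)
print([s.ImagePositionPatient[2] for s in ordered])  # [10.0, 20.25, 30.5]
```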
Step3: Alright, so above we just went ahead and grabbed the pixel_array attribute, which is what I assume to be the scan slice itself (we will confirm this soon), but immediately I am surprised by this non-uniformity of slices. This isn't quite ideal and will cause a problem later. All of our images are the same size, but the slices aren't. In terms of a 3D rendering, these actually are not the same size.
We've got to actually figure out a way to solve that uniformity problem, but also...these images are just WAY too big for a convolutional neural network to handle without some serious computing power.
Thus, we already know out of the gate that we're going to need to downsample this data quite a bit, AND somehow make the depth uniform.
Welcome to datascience!
Okay, next question is...just how much data do we have here?
Step4: Oh.
(1595 in real data, 20 if you're in the Kaggle sample dataset)
Well, that's also going to be a challenge for the convnet to figure out, but we're going to try! Also, there are outside datasources for more lung scans. For example, you can grab data from the LUNA2016 challenge
Step5: Now, I am not a doctor, but I'm going to claim a mini-victory and say that's our first CT scan slice.
We have about 200 slices though, I'd feel more comfortable if I saw a few more. Let's look at the first 12, and resize them with opencv. If you do not have opencv, do a pip install opencv-python (the package installs the cv2 module)
Want to learn more about what you can do with Open CV? Check out the Image analysis and manipulation with OpenCV and Python tutorial.
You will also need numpy here. You probably already have numpy if you installed pandas, but, just in case, numpy is pip install numpy
Section 2
Step7: Alright, so we're resizing our images from 512x512 to 150x150. 150 is still going to wind up likely being waaaaaaay too big. That's fine, we can play with that constant more later, we just want to know how to do it.
Okay, so now what? I think we need to address the whole non-uniformity of depth next. To be honest, I don't know of any super smooth way of doing this, but that's fine. I can at least think of A way, and that's all we need.
My thought is that, what we have is really a big list of slices. What we need is to be able to just take any list of images, whether it's got 200 scans, 150 scans, or 300 scans, and set it to be some fixed number.
Let's say we want to have 20 scans instead. How can we do this?
Well, first, we need something that will take our current list of scans, and chunk it into a list of lists of scans.
I couldn't think of anything off the top of my head for this, so I Googled "how to chunk a list into a list of lists." This is how real programming happens.
As per Ned Batchelder via Link
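The recipe in question is the short generator below (reproduced here since the link doesn't survive this format):

```python
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

print(list(chunks([1, 2, 3, 4, 5, 6, 7], 3)))  # [[1, 2, 3], [4, 5, 6], [7]]
```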
Step8: The struggle is real. Okay, what you're about to see you shouldn't attempt if anyone else is watching, like if you're going to show your code to the public...
Step9: Okay, the Python gods are really not happy with me for that hacky solution. If any of you would like to improve this chunking/averaging code, feel free. Really, any of this code...if you have improvements, share them! This is going to stay pretty messy. But hey, we did it! We figured out a way to make sure our 3 dimensional data can be at any resolution we want or need. Awesome!
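For anyone wanting a slightly less hacky starting point, here's one way to write the chunk-and-average idea (my own sketch, not the notebook's exact code):

```python
import numpy as np

def mean_chunks(slices, target_depth):
    """Collapse a stack of 2D slices toward target_depth slices by
    averaging groups of consecutive slices."""
    n = int(np.ceil(len(slices) / float(target_depth)))
    return [np.mean(slices[i:i + n], axis=0)
            for i in range(0, len(slices), n)]

stack = [np.full((4, 4), float(i)) for i in range(10)]  # 10 fake slices
reduced = mean_chunks(stack, target_depth=5)
print(len(reduced), reduced[0][0, 0])  # 5 0.5
```

Note that this lands near, not always exactly at, target_depth when the counts don't divide evenly, which is part of why the notebook's version needed the extra hackery.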
That's actually a decently large hurdle. Are we totally done? ...maybe not. One major issue is these colors and ranges of data. It's unclear to me whether or not a model would appreciate that. Even if we do a grayscale colormap in the imshow, you'll see that some scans are just darker overall than others. This might be problematic and we might need to actually normalize this dataset.
I expect that, with a large enough dataset, this wouldn't be an actual issue, but, with this size of data, it might be of huge importance.
In an effort not to turn this notebook into an actual book, however, we're going to move forward! We can now see our new data by doing:
Step11: Section 3
Step12: Section 4
Step13: Now we're ready for the network itself
Step14: Why the 54080 magic number? To get this, I simply run the script once and see what the error yells at me for the expected size multiple. This is certainly not the right way to go about it, but that's my 100% honest method, and my first time working in a 3D convnet. AFAIK, it's the padding that causes this to not be EXACTLY 50,000 (50 x 50 x 20 is the size of our actual input data, which is 50,000 total).
Someone feel free to enlighten me how one could actually calculate this number beforehand.
Now we're set to train the network. I am not going to ask the Kaggle online kernel to even bother building this computation graph, so I will comment out the line to actually run this. Just uncomment it locally and it will run. When running locally, make sure your training data is NOT the sample images, it should be the stage1 images. Your training file should be ~700mb with ~1400 total labeled samples.
Step15: Example output that I got
Step16: So, actually, our dataset has 1035 non-cancer examples and 362 cancerous examples. Thus, an algorithm that always predicted no-cancer with our model would be ~ 74% accurate (1035/1397).
We'd definitely want to confirm our testing set actually has this ratio before assuming anything. It might be the case our testing set has more cancerous examples, or maybe less, we really don't know. We can though | Python Code:
import dicom # for reading dicom files
import os # for doing directory operations
import pandas as pd # for some simple data analysis (right now, just to load in the labels data and quickly reference it)
# Change this to wherever you are storing your data:
# IF YOU ARE FOLLOWING ON KAGGLE, YOU CAN ONLY PLAY WITH THE SAMPLE DATA, WHICH IS MUCH SMALLER
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0)
labels_df.head()
Explanation: Applying a 3D convolutional neural network to the data
Welcome everyone to my coverage of the Kaggle Data Science Bowl 2017. My goal here is that anyone, even people new to kaggle, can follow along. If you are completely new to data science, I will do my best to link to tutorials and provide information on everything you need to take part.
This notebook is my actual personal initial run through this data and my notes along the way. I am by no means an expert data analyst, statistician, and certainly not a doctor. This initial pass is not going to win the competition, but hopefully it can serve as a starting point or, at the very least, you can learn something new along with me.
This is a "raw" look into the actual code I used on my first pass, there's a ton of room for improvement. If you see something that you could improve, share it with me!
Quick introduction to Kaggle
<iframe width="560" height="315" src="https://www.youtube.com/embed/ulq9DjCJPDU?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
If you are new to kaggle, create an account, and start downloading the data. It's going to take a while. I found the torrent to download the fastest, so I'd suggest you go that route. When you create an account, head to competitions in the nav bar, choose the Data Science Bowl, then head to the "data" tab. You will need to accept the terms of the competition to proceed with downloading the data.
Just in case you are new, how does all this work?
In general, Kaggle competitions will come with training and testing data for you to build a model on, where both the training and testing data comes with labels so you can fit a model. Then there will be actual "blind" or "out of sample" testing data that you will actually use your model on, which will spit out an output CSV file with your predictions based on the input data. This is what you will upload to kaggle, and your score here is what you compete with. There's always a sample submission file in the dataset, so you can see how to exactly format your output predictions.
In this case, the submission file should have two columns, one for the patient's id and another for the prediction of the likelihood that this patient has cancer, like:
id,cancer
01e349d34c02410e1da273add27be25c,0.5
05a20caf6ab6df4643644c923f06a5eb,0.5
0d12f1c627df49eb223771c28548350e,0.5
...
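Just to make that format concrete, here's a minimal sketch of writing a submission file with the standard library. The patient ids and 0.5 probabilities are made up for illustration; in practice the ids come from the test set / sample submission Kaggle provides.

```python
import csv
import io

# Hypothetical predictions keyed by patient id (ids here are illustrative).
predictions = {
    "01e349d34c02410e1da273add27be25c": 0.5,
    "05a20caf6ab6df4643644c923f06a5eb": 0.5,
}

buf = io.StringIO()  # swap for open('submission.csv', 'w', newline='') to write a real file
writer = csv.writer(buf)
writer.writerow(["id", "cancer"])
for patient_id, prob in predictions.items():
    writer.writerow([patient_id, prob])

submission_text = buf.getvalue()
print(submission_text.splitlines()[0])  # id,cancer
```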
You can submit up to 3 entries a day, so you want to be very happy with your model, and you are at least slightly disincentivised from trying to simply fit the answer key over time. It's still possible to cheat. If you do cheat, you won't win anything, since you will have to disclose your model for any prizes.
At the end, you can submit 2 final submissions (allowing you to compete with 2 models if you like).
This current competition is a 2 stage competition, where you have to participate in both stages to win.
Stage one has you competing based on a validation dataset. At the release of stage 2, the validation set answers are released and then you make predictions on a new test set that comes out at the release of this second stage.
About this specific competition
At its core, the aim here is to take the sample data, consisting of low-dose CT scan information, and predict what the likelihood of a patient having lung cancer is. Your submission is scored based on the log loss of your predictions.
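For reference, binary log loss is easy to sketch by hand. The eps clipping value below is my assumption to keep log() finite, not necessarily what Kaggle's scorer uses.

```python
import math

def binary_log_loss(y_true, y_pred, eps=1e-15):
    # Clip predictions away from exactly 0 and 1 so log() stays finite.
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)
        total += t * math.log(p) + (1 - t) * math.log(1 - p)
    return -total / len(y_true)

# Predicting 0.5 for everything scores log(2) ~= 0.693 no matter the labels.
print(binary_log_loss([1, 0], [0.5, 0.5]))  # ~0.6931
```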
The dataset is pretty large at ~140GB just in initial training data, so this can be somewhat restrictive right out of the gate. I am going to do my best to make this tutorial one that anyone can follow within the built-in Kaggle kernels.
Requirements and suggestions for following along
I will be using Python 3, and you should at least know the basics of Python 3.
We will also be making use of:
Pandas for some data analysis
Matplotlib for data visualization
You do not need to go through all of those tutorials to follow here, but, if you are confused, it might be useful to poke around those.
For the actual dependency installs and such, I will link to them as we go.
Alright, let's get started!
Section 1: Handling Data
Assuming you've downloaded the data, what exactly are we working with here? The data consists of many 2D "slices," which, when combined, produce a 3-dimensional rendering of whatever was scanned. In this case, that's the chest cavity of the patient. We've got CT scans of about 1500 patients, and then we've got another file that contains the labels for this data.
There are numerous ways that we could go about creating a classifier. Being a realistic data science problem, we actually don't really know what the best path is going to be. That's why this is a competition. Thus, we have to begin by simply trying things and seeing what happens!
I have a few theories about what might work, but my first interest was to try a 3D Convolutional Neural Network. I've never had data to try one on before, so I was excited to try my hand at it!
Before we can feed the data through any model, however, we need to at least understand the data we're working with. We know the scans are in this "dicom" format, but what is that? If you're like me, you have no idea what that is, or how it will look in Python! You can learn more about DICOM from Wikipedia if you like, but our main focus is what this will actually be in Python terms.
Luckily for us, there already exists a Python package for reading dicom files: Pydicom.
Do a pip install pydicom and pip install pandas and let's see what we've got!
<iframe width="560" height="315" src="https://www.youtube.com/embed/KlffppN47lc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
End of explanation
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
# a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
print(len(slices),label)
print(slices[0])
Explanation: At this point, we've got the list of patients by their IDs, and their associated labels stored in a dataframe. Now, we can begin to iterate through the patients and gather their respective data. We're almost certainly going to need to do some preprocessing of this data, but we'll see.
End of explanation
for patient in patients[:3]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
# a couple great 1-liners from: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
print(slices[0].pixel_array.shape, len(slices))
Explanation: Above, we iterate through each patient, we grab their label, we get the full path to that specific patient (inside THAT path contains ~200ish scans which we also iterate over, BUT also want to sort, since they won't necessarily be in proper order).
Do note here that the actual scan, when loaded by dicom, is clearly not JUST some sort of array of values, instead it's got attributes. There are a few attributes here of arrays, but not all of them. We're sorting by the actual image position in the scan. Later, we could actually put these together to get a full 3D rendering of the scan. That's not in my plans here, since that's already been something covered very well, see this kernel: https://www.kaggle.com/gzuidhof/data-science-bowl-2017/full-preprocessing-tutorial
One immediate thing to note here is those rows and columns...holy moly, 512 x 512! This means, our 3D rendering is a 195 x 512 x 512 right now. That's huge!
Alright, so we already know that we're going to absolutely need to resize this data. Being 512 x 512, I am already expecting all this data to be the same size, but let's see what we have from other patients too:
End of explanation
len(patients)
Explanation: Alright, so above we just went ahead and grabbed the pixel_array attribute, which is what I assume to be the scan slice itself (we will confirm this soon), but immediately I am surprised by this non-uniformity of slices. This isn't quite ideal and will cause a problem later. All of our images are the same size, but the slices aren't. In terms of a 3D rendering, these actually are not the same size.
We've got to actually figure out a way to solve that uniformity problem, but also...these images are just WAY too big for a convolutional neural network to handle without some serious computing power.
Thus, we already know out of the gate that we're going to need to downsample this data quite a bit, AND somehow make the depth uniform.
Welcome to data science!
Okay, next question is...just how much data do we have here?
End of explanation
import matplotlib.pyplot as plt
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
# the first slice
plt.imshow(slices[0].pixel_array)
plt.show()
Explanation: Oh.
(1595 in real data, 20 if you're in the Kaggle sample dataset)
Well, that's also going to be a challenge for the convnet to figure out, but we're going to try! Also, there are outside datasources for more lung scans. For example, you can grab data from the LUNA2016 challenge: https://luna16.grand-challenge.org/data/ for another 888 scans.
Do note that, if you do wish to compete, you can only use free datasets that are available to anyone who bothers to look.
I'll have us stick to just the base dataset, again mainly so anyone can poke around this code in the kernel environment.
Now, let's see what an actual slice looks like. If you do not have matplotlib, do pip install matplotlib
<iframe width="560" height="315" src="https://www.youtube.com/embed/MqcZYw8Tgpc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
Want to learn more about Matplotlib? Check out the Data Visualization with Python and Matplotlib tutorial.
End of explanation
import cv2
import numpy as np
IMG_PX_SIZE = 150
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
fig = plt.figure()
for num,each_slice in enumerate(slices[:12]):
y = fig.add_subplot(3,4,num+1)
new_img = cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE))
y.imshow(new_img)
plt.show()
Explanation: Now, I am not a doctor, but I'm going to claim a mini-victory and say that's our first CT scan slice.
We have about 200 slices though, I'd feel more comfortable if I saw a few more. Let's look at the first 12, and resize them with OpenCV. If you do not have OpenCV, do a pip install opencv-python (that's the package that provides the cv2 module)
Want to learn more about what you can do with Open CV? Check out the Image analysis and manipulation with OpenCV and Python tutorial.
You will also need numpy here. You probably already have numpy if you installed pandas, but, just in case, numpy is pip install numpy
Section 2: Processing and viewing our Data
<iframe width="560" height="315" src="https://www.youtube.com/embed/lqhMTkouBx0?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
End of explanation
import math
def chunks(l, n):
# Credit: Ned Batchelder
# Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
    """Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
def mean(l):
return sum(l) / len(l)
IMG_PX_SIZE = 150
HM_SLICES = 20
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels_df = pd.read_csv('../input/stage1_labels.csv', index_col=0)
for patient in patients[:10]:
try:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
print(len(slices), len(new_slices))
except:
# some patients don't have labels, so we'll just pass on this for now
pass
Explanation: Alright, so we're resizing our images from 512x512 to 150x150. 150 is still going to wind up likely being waaaaaaay too big. That's fine, we can play with that constant more later, we just want to know how to do it.
Okay, so now what? I think we need to address the whole non-uniformity of depth next. To be honest, I don't know of any super smooth way of doing this, but that's fine. I can at least think of A way, and that's all we need.
My thought is that, what we have is really a big list of slices. What we need is to be able to just take any list of images, whether it's got 200 scans, 150 scans, or 300 scans, and set it to be some fixed number.
Let's say we want to have 20 scans instead. How can we do this?
Well, first, we need something that will take our current list of scans, and chunk it into a list of lists of scans.
I couldn't think of anything off the top of my head for this, so I Googled "how to chunk a list into a list of lists." This is how real programming happens.
As per Ned Batchelder via Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks, we've got ourselves a nice chunker generator. Awesome!
Thanks Ned!
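A quick sanity check of that chunker (redefined here so the snippet stands alone). Note the last chunk can come out shorter than the rest, which is exactly the off-by-one headache we end up patching later.

```python
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]

# 10 items in chunks of 3: the final chunk only has 1 element.
print(list(chunks(list(range(10)), 3)))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```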
Okay, once we've got these chunks of these scans, what are we going to do? Well, we can just average them together. My theory is that a scan is a few millimeters of actual tissue at most. Thus, we can hopefully just average this slice together, and maybe we're now working with a centimeter or so. If there's a growth there, it should still show up on scan.
This is just a theory, it has to be tested.
As we continue through this, however, you're hopefully going to see just how many theories we come up with, and how many variables we can tweak and change to possibly get better results.
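The averaging trick itself is just `list(map(mean, zip(*slice_chunk)))`. Here's what that actually does, shown on two tiny 2x2 "slices" (numpy assumed, since the real slices are numpy arrays): zip pairs up corresponding rows, and mean() averages those rows elementwise.

```python
import numpy as np

def mean(l):
    return sum(l) / len(l)

# Two fake 2x2 slices: one all zeros, one all twos -> average should be all ones.
chunk = [np.array([[0., 0.], [0., 0.]]),
         np.array([[2., 2.], [2., 2.]])]

# zip(*chunk) yields (row0_of_a, row0_of_b), (row1_of_a, row1_of_b);
# mean() then averages each pair of rows elementwise.
averaged = list(map(mean, zip(*chunk)))
print(averaged)
```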
End of explanation
for patient in patients[:10]:
try:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == HM_SLICES-1:
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES+2:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
if len(new_slices) == HM_SLICES+1:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
print(len(slices), len(new_slices))
except Exception as e:
# again, some patients are not labeled, but JIC we still want the error if something
# else is wrong with our code
print(str(e))
Explanation: The struggle is real. Okay, what you're about to see you shouldn't attempt if anyone else is watching, like if you're going to show your code to the public...
End of explanation
for patient in patients[:1]:
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(IMG_PX_SIZE,IMG_PX_SIZE)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / HM_SLICES)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == HM_SLICES-1:
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == HM_SLICES+2:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
if len(new_slices) == HM_SLICES+1:
new_val = list(map(mean, zip(*[new_slices[HM_SLICES-1],new_slices[HM_SLICES],])))
del new_slices[HM_SLICES]
new_slices[HM_SLICES-1] = new_val
fig = plt.figure()
for num,each_slice in enumerate(new_slices):
y = fig.add_subplot(4,5,num+1)
y.imshow(each_slice, cmap='gray')
plt.show()
Explanation: Okay, the Python gods are really not happy with me for that hacky solution. If any of you would like to improve this chunking/averaging code, feel free. Really, any of this code...if you have improvements, share them! This is going to stay pretty messy. But hey, we did it! We figured out a way to make sure our 3 dimensional data can be at any resolution we want or need. Awesome!
That's actually a decently large hurdle. Are we totally done? ...maybe not. One major issue is these colors and ranges of data. It's unclear to me whether or not a model would appreciate that. Even if we do a grayscale colormap in the imshow, you'll see that some scans are just darker overall than others. This might be problematic and we might need to actually normalize this dataset.
I expect that, with a large enough dataset, this wouldn't be an actual issue, but, with this size of data, it might be of huge importance.
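If we did decide to normalize, a simple per-scan min-max rescale is one option. This is just a sketch of one possible scheme; whether it actually helps the model is untested here.

```python
import numpy as np

def normalize_scan(scan):
    # Rescale a scan's pixel values into [0, 1]. One possible normalization;
    # per-scan min-max is an assumption, not something validated in this notebook.
    scan = scan.astype(np.float64)
    lo, hi = scan.min(), scan.max()
    if hi == lo:  # guard against a constant (blank) scan
        return np.zeros_like(scan)
    return (scan - lo) / (hi - lo)

fake_scan = np.array([[-1000., 0.], [400., 1000.]])  # made-up HU-like values
print(normalize_scan(fake_scan))
```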
In an effort not to turn this notebook into an actual book, however, we're going to move forward! We can now see our new data by doing:
End of explanation
import numpy as np
import pandas as pd
import dicom
import os
import matplotlib.pyplot as plt
import cv2
import math
IMG_SIZE_PX = 50
SLICE_COUNT = 20
def chunks(l, n):
# Credit: Ned Batchelder
# Link: http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks
    """Yield successive n-sized chunks from l."""
for i in range(0, len(l), n):
yield l[i:i + n]
def mean(a):
return sum(a) / len(a)
def process_data(patient,labels_df,img_px_size=50, hm_slices=20, visualize=False):
label = labels_df.get_value(patient, 'cancer')
path = data_dir + patient
slices = [dicom.read_file(path + '/' + s) for s in os.listdir(path)]
slices.sort(key = lambda x: int(x.ImagePositionPatient[2]))
new_slices = []
slices = [cv2.resize(np.array(each_slice.pixel_array),(img_px_size,img_px_size)) for each_slice in slices]
chunk_sizes = math.ceil(len(slices) / hm_slices)
for slice_chunk in chunks(slices, chunk_sizes):
slice_chunk = list(map(mean, zip(*slice_chunk)))
new_slices.append(slice_chunk)
if len(new_slices) == hm_slices-1:
new_slices.append(new_slices[-1])
if len(new_slices) == hm_slices-2:
new_slices.append(new_slices[-1])
new_slices.append(new_slices[-1])
if len(new_slices) == hm_slices+2:
new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],])))
del new_slices[hm_slices]
new_slices[hm_slices-1] = new_val
if len(new_slices) == hm_slices+1:
new_val = list(map(mean, zip(*[new_slices[hm_slices-1],new_slices[hm_slices],])))
del new_slices[hm_slices]
new_slices[hm_slices-1] = new_val
if visualize:
fig = plt.figure()
for num,each_slice in enumerate(new_slices):
y = fig.add_subplot(4,5,num+1)
y.imshow(each_slice, cmap='gray')
plt.show()
if label == 1: label=np.array([0,1])
elif label == 0: label=np.array([1,0])
return np.array(new_slices),label
# stage 1 for real.
data_dir = '../input/sample_images/'
patients = os.listdir(data_dir)
labels = pd.read_csv('../input/stage1_labels.csv', index_col=0)
much_data = []
for num,patient in enumerate(patients):
if num % 100 == 0:
print(num)
try:
img_data,label = process_data(patient,labels,img_px_size=IMG_SIZE_PX, hm_slices=SLICE_COUNT)
#print(img_data.shape,label)
much_data.append([img_data,label])
except KeyError as e:
print('This is unlabeled data!')
np.save('muchdata-{}-{}-{}.npy'.format(IMG_SIZE_PX,IMG_SIZE_PX,SLICE_COUNT), much_data)
Explanation: Section 3: Preprocessing our Data
<iframe width="560" height="315" src="https://www.youtube.com/embed/_DAeMDMHgtY?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
Okay, so we know what we've got, and what we need to do with it.
We have a few options at this point, we could take the code that we have already and do the processing "online." By this, I mean, while training the network, we can actually just loop over our patients, resize the data, then feed it through our neural network. We actually don't have to have all of the data prepared before we go through the network.
If you can preprocess all of the data into one file, and that one file doesn't exceed your available memory, then training should likely be faster, so you can more easily tweak your neural network and not be processing your data the same way over and over.
In many more realistic examples in the world, however, your dataset will be so large, that you wouldn't be able to read it all into memory at once anyway, but you could still maintain one big database or something.
Bottom line: There are tons of options here. Our dataset is only 1500 (even less if you are following in the Kaggle kernel) patients, and will be, for example, 20 slices of 150x150 image data if we went off the numbers we have now, but this will need to be even smaller for a typical computer most likely.
Regardless, this much data won't be an issue to keep in memory or do whatever the heck we want.
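A quick back-of-envelope estimate backs that up. The 8-byte float64 dtype is an assumption, based on the chunk averaging above producing Python floats.

```python
# Rough memory estimate for the fully preprocessed dataset.
patients = 1500
slices_per_patient = 20
px = 150  # per-side image size from earlier

big = patients * slices_per_patient * px * px * 8            # at 150x150
small = patients * slices_per_patient * 50 * 50 * 8          # at the 50x50 used later

print(round(big / 1024 ** 3, 1), "GiB at 150x150")   # 5.0 GiB at 150x150
print(round(small / 1024 ** 2, 1), "MiB at 50x50")   # 572.2 MiB at 50x50
```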
If at all possible, I prefer to separate out steps in any big process like this, so I am going to go ahead and pre-process the data, so our neural network code is much simpler. Also, there's no good reason to maintain a network in GPU memory while we're wasting time processing the data which can be easily done on a CPU.
Now, I will just make a slight modification to all of the code up to this point, and add some new final lines to preprocess this data and save the array of arrays to a file:
End of explanation
import tensorflow as tf
import numpy as np
IMG_SIZE_PX = 50
SLICE_COUNT = 20
n_classes = 2
batch_size = 10
x = tf.placeholder('float')
y = tf.placeholder('float')
keep_rate = 0.8
def conv3d(x, W):
return tf.nn.conv3d(x, W, strides=[1,1,1,1,1], padding='SAME')
def maxpool3d(x):
# size of window movement of window as you slide about
return tf.nn.max_pool3d(x, ksize=[1,2,2,2,1], strides=[1,2,2,2,1], padding='SAME')
Explanation: Section 4: 3D Convolutional Neural Network
Moment-o-truth
<iframe width="560" height="315" src="https://www.youtube.com/embed/CPZ5ihaNfJc?list=PLQVvvaa0QuDd5meH8cStO9cMi98tPT12_" frameborder="0" allowfullscreen></iframe>
Okay, we've got preprocessed, normalized, data. Now we're ready to feed it through our 3D convnet and...see what happens!
Now, I am not about to stuff a neural networks tutorial into this one. If you're already familiar with neural networks and TensorFlow, great! If not, as you might guess, I have a tutorial...or tutorials... for you!
To install the CPU version of TensorFlow, just do pip install tensorflow
To install the GPU version of TensorFlow, you need to get alllll the dependencies and such.
Installation tutorials:
Installing the GPU version of TensorFlow in Ubuntu
Installing the GPU version of TensorFlow on a Windows machine
Using TensorFlow and concept tutorials:
Introduction to deep learning with neural networks
Introduction to TensorFlow
Intro to Convolutional Neural Networks
Convolutional Neural Network in TensorFlow tutorial
Now, the data we have is actually 3D data, not 2D data that's covered in most convnet tutorials, including mine above. So what changes? EVERYTHING! OMG IT'S THE END OF THE WORLD AS WE KNOW IT!!
It's not really all too bad. Your convolutional window/padding/strides need to change. Do note that, now, to have a bigger window, your processing penalty increases significantly as we increase in size, obviously much more than with 2D windows.
Okay, let's begin.
End of explanation
def convolutional_neural_network(x):
# # 5 x 5 x 5 patches, 1 channel, 32 features to compute.
weights = {'W_conv1':tf.Variable(tf.random_normal([3,3,3,1,32])),
# 5 x 5 x 5 patches, 32 channels, 64 features to compute.
'W_conv2':tf.Variable(tf.random_normal([3,3,3,32,64])),
# 64 features
'W_fc':tf.Variable(tf.random_normal([54080,1024])),
'out':tf.Variable(tf.random_normal([1024, n_classes]))}
biases = {'b_conv1':tf.Variable(tf.random_normal([32])),
'b_conv2':tf.Variable(tf.random_normal([64])),
'b_fc':tf.Variable(tf.random_normal([1024])),
'out':tf.Variable(tf.random_normal([n_classes]))}
# image X image Y image Z
x = tf.reshape(x, shape=[-1, IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT, 1])
conv1 = tf.nn.relu(conv3d(x, weights['W_conv1']) + biases['b_conv1'])
conv1 = maxpool3d(conv1)
conv2 = tf.nn.relu(conv3d(conv1, weights['W_conv2']) + biases['b_conv2'])
conv2 = maxpool3d(conv2)
fc = tf.reshape(conv2,[-1, 54080])
fc = tf.nn.relu(tf.matmul(fc, weights['W_fc'])+biases['b_fc'])
fc = tf.nn.dropout(fc, keep_rate)
output = tf.matmul(fc, weights['out'])+biases['out']
return output
Explanation: Now we're ready for the network itself:
End of explanation
much_data = np.load('muchdata-50-50-20.npy')
# If you are working with the basic sample data, use maybe 2 instead of 100 here... you don't have enough data to really do this
train_data = much_data[:-100]
validation_data = much_data[-100:]
def train_neural_network(x):
prediction = convolutional_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
hm_epochs = 10
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
        # saver.restore(sess, MODEL_PATH)  # leftover resume-training line; saver and MODEL_PATH are never defined here, so this would raise a NameError as written
successful_runs = 0
total_runs = 0
for epoch in range(hm_epochs):
epoch_loss = 0
for data in train_data:
total_runs += 1
try:
X = data[0]
Y = data[1]
_, c = sess.run([optimizer, cost], feed_dict={x: X, y: Y})
epoch_loss += c
successful_runs += 1
except Exception as e:
# I am passing for the sake of notebook space, but we are getting 1 shaping issue from one
# input tensor. Not sure why, will have to look into it. Guessing it's
# one of the depths that doesn't come to 20.
pass
#print(str(e))
print('Epoch', epoch+1, 'completed out of',hm_epochs,'loss:',epoch_loss)
correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]}))
print('Done. Finishing accuracy:')
print('Accuracy:',accuracy.eval({x:[i[0] for i in validation_data], y:[i[1] for i in validation_data]}))
print('fitment percent:',successful_runs/total_runs)
# Run this locally:
# train_neural_network(x)
Explanation: Why the 54080 magic number? To get this, I simply run the script once and see what the error yells at me for the expected size multiple. This is certainly not the right way to go about it, but that's my 100% honest method, and my first time working in a 3D convnet. AFAIK, it's the padding that causes this to not be EXACTLY 50,000 (50 x 50 x 20 is the size of our actual input data, which is 50,000 total).
Someone feel free to enlighten me how one could actually calculate this number beforehand.
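Actually, we can calculate it beforehand, assuming TensorFlow's 'SAME' padding semantics: conv3d with 'SAME' and stride 1 keeps the spatial dims, and each stride-2 maxpool3d gives ceil(dim / 2). Two pool layers take 50 -> 25 -> 13 and 20 -> 10 -> 5, and the second conv layer outputs 64 feature channels.

```python
import math

def same_pool(dim, stride=2):
    # 'SAME' padding with stride 2 -> output size is ceil(dim / stride).
    return math.ceil(dim / stride)

x, y, z = 50, 50, 20        # IMG_SIZE_PX, IMG_SIZE_PX, SLICE_COUNT
for _ in range(2):           # two maxpool3d layers; conv3d with 'SAME' keeps dims
    x, y, z = same_pool(x), same_pool(y), same_pool(z)

features = 64                # channels out of the second conv layer
flat = x * y * z * features
print(x, y, z, flat)         # 13 13 5 54080
```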
Now we're set to train the network. I am not going to ask the Kaggle online kernel to even bother building this computation graph, so I will comment out the line to actually run this. Just uncomment it locally and it will run. When running locally, make sure your training data is NOT the sample images, it should be the stage1 images. Your training file should be ~700mb with ~1400 total labeled samples.
End of explanation
labels_df.cancer.value_counts()
Explanation: Example output that I got:
Epoch 1 completed out of 10 loss: 195148607547.0
Accuracy: 0.63
Epoch 2 completed out of 10 loss: 14236109414.9
Accuracy: 0.6
Epoch 3 completed out of 10 loss: 5744945978.94
Accuracy: 0.7
Epoch 4 completed out of 10 loss: 3268944715.44
Accuracy: 0.6
Epoch 5 completed out of 10 loss: 1916325681.66
Accuracy: 0.6
Epoch 6 completed out of 10 loss: 1014763813.3
Accuracy: 0.46
Epoch 7 completed out of 10 loss: 680146186.953
Accuracy: 0.54
Epoch 8 completed out of 10 loss: 289082075.259
Accuracy: 0.62
Epoch 9 completed out of 10 loss: 122785997.913
Accuracy: 0.57
Epoch 10 completed out of 10 loss: 96427552.5371
Accuracy: 0.51
Done. Finishing accuracy:
Accuracy: 0.69
fitment percent: 0.9992289899768697
Section 5: Concluding Remarks
So how did we do? Well, we overfit almost certainly. How about our accuracy? Due to the lower amount of data on Kaggle, I have no idea what number you're seeing, just know it's probably not all that great. Even if it was, what was the number to beat? Was it 50%, since it's either cancer or not? Not quite. The real number we need to beat is if our network was to always predict a single class. Let's see what the best score our classifier could get is if it just always picked the most common class:
End of explanation
labels_df.ix[-100:].cancer.value_counts()
Explanation: So, actually, our dataset has 1035 non-cancer examples and 362 cancerous examples. Thus, an algorithm that always predicted no-cancer with our model would be ~ 74% accurate (1035/1397).
We'd definitely want to confirm our testing set actually has this ratio before assuming anything. It might be the case that our testing set has more cancerous examples, or maybe fewer; we really don't know. We can check, though:
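To make that baseline concrete, a minimal sketch (the counts 1035/362 are the ones quoted above from value_counts(); majority_baseline is my own helper name, not part of the notebook):

```python
def majority_baseline(class_counts):
    """Accuracy of a classifier that always predicts the most common class."""
    total = sum(class_counts.values())
    return max(class_counts.values()) / total

# no-cancer / cancer counts, from labels_df.cancer.value_counts()
counts = {0: 1035, 1: 362}
print(round(majority_baseline(counts), 3))  # -> 0.741
```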
End of explanation |
7,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loss under a Local Perceptron model
Step1: Loss under a Mixture of Gaussians model
Step2: We use autograd for functions that deliver gradients of those losses
Step3: Just a pretty display
Red and Black are target 0 and 1 patterns respectively.
They will get "filled in" once the perceptron is getting them correct.
Step4: Learning, starting from random weights and bias. | Python Code:
def sigmoid(phi):
return 1.0/(1.0 + np.exp(-phi))
def calc_prob_class1(params):
# Sigmoid perceptron ('logistic regression')
tildex = X - params['mean']
W = params['wgts']
phi = np.dot(tildex, W)
return sigmoid(phi)
def calc_membership(params):
# NB. this is just a helper function for training_loss really.
tildex = X - params['mean']
W, r2, R2 = params['wgts'], params['r2'], params['R2']
Dr2 = np.power(np.dot(tildex, W), 2.0)
L2X = (np.power(tildex, 2.0)).sum(1)
DR2 = L2X - Dr2
dist2 = (Dr2/r2) + (DR2/R2) # rescaled 'distance' to the shifted 'origin'
membership = np.exp(-0.5*dist2)
#print(membership)
return np.array(membership)
def classification_loss(params):
membership = calc_membership(params)
Y = calc_prob_class1(params)
return np.sum(membership*(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
Explanation: Loss under a Local Perceptron model
End of explanation
def MoG_loss(params):
membership = calc_membership(params)
return np.sum(membership)
Explanation: Loss under a Mixture of Gaussians model
End of explanation
classification_gradient = grad(classification_loss)
MoG_gradient = grad(MoG_loss)
Explanation: We use autograd for functions that deliver gradients of those losses
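If you want to sanity-check gradients like these, a finite difference is the usual trick. autograd may not be installed everywhere, so this self-contained sketch checks the same idea by hand on a toy one-parameter loss (the analytic derivative of -log sigmoid(3w) is -3(1 - sigmoid(3w)); grad(classification_loss) above should pass the analogous test):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w):
    # toy logistic-style loss in a single weight
    return -np.log(sigmoid(3.0 * w))

def dloss(w):
    # analytic gradient: d/dw [-log sigmoid(3w)] = -3 * (1 - sigmoid(3w))
    return -3.0 * (1.0 - sigmoid(3.0 * w))

w0, eps = 0.7, 1e-6
fd = (loss(w0 + eps) - loss(w0 - eps)) / (2 * eps)  # central finite difference
print(abs(dloss(w0) - fd) < 1e-6)  # -> True
```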
End of explanation
# Be able to show the current solution, against the data in 2D.
def show_result(params, X, Targ):
print("Parameters:")
for key in params.keys():
print(key,'\t', params[key])
print("Loss:", training_loss(params))
membership = calc_membership(params)
Y = calc_prob_class1(params)
pl.clf()
marksize = 8
cl ={0:'red', 1:'black'}
for i, x in enumerate(X):
pl.plot(x[0],x[1],'x',color=cl[int(Targ[i])],alpha=.4,markersize=marksize)
pl.plot(x[0],x[1],'o',color=cl[int(Targ[i])],alpha=1.-float(abs(Targ[i]-Y[i])),markersize=marksize)
pl.axis('equal')
s = X.ravel().max() - X.ravel().min()
m, w = params['mean'], params['wgts']
# Show the mean in blue
#pl.arrow(0, 0, m[0], m[1], head_width=0.25, head_length=0.5, fc='b', ec='b', linewidth=1, alpha=.95)
# Show the perceptron decision boundary, in green
pl.arrow(m[0]-w[0], m[1]-w[1], w[0], w[1], head_width=s, head_length=s/5, fc='g', ec='g', linewidth=3, alpha=.5)
pl.show()
Explanation: Just a pretty display
Red and Black are target 0 and 1 patterns respectively.
They will get "filled in" once the perceptron is getting them correct.
End of explanation
def do_one_learning_step(params,X,Targ,rate):
grads = classification_gradient(params)
params['wgts'] = params['wgts'] + rate * grads['wgts'] # one step of learning
params['mean'] = params['mean'] + rate * grads['mean'] # one step of learning
return (params)
init_w = rng.normal(0,1,size=(Nins))
init_m = 4.*rng.normal(0,1,size=(Nins))
rate = 0.5 / Npats
params = {'wgts':init_w, 'mean':init_m, 'r2':1000.0, 'R2':1000.0}
for t in range(250):
params = do_one_learning_step(params,X,Targ,rate)
show_result(params, X, Targ)
Y = sigmoid(np.dot(X-params['mean'], params['wgts']))
print('vanilla loss: ', np.sum(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
Explanation: Learning, starting from random weights and bias.
End of explanation |
7,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Values
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right
Step2: Example
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply Values to extract the values and discard the keys. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/elementwise/values-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/values"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: Values
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.util.html#apache_beam.transforms.util.Values"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Takes a collection of key-value pairs, and returns the value of each element.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Garden plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Values' >> beam.Values()
| beam.Map(print)
)
Explanation: Example
In the following example, we create a pipeline with a PCollection of key-value pairs.
Then, we apply Values to extract the values and discard the keys.
End of explanation |
7,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Check Environment
This notebook checks that you have correctly created the environment and that all packages needed are installed.
Environment
The next command should return a line like (Mac/Linux)
Step1: Python 3.5
The next line should say that you're using Python 3.5.x from Continuum Analytics. At the time of publication it looks like this (Mac/Linux)
Step2: Jupyter
Check that Jupyter is running from within the environment. The next line should look like (Mac/Linux)
Step3: Other packages
Here we will check that all the packages are installed and have the correct versions. If everything is ok you should see | Python Code:
import os
import sys
sys.executable
Explanation: Check Environment
This notebook checks that you have correctly created the environment and that all packages needed are installed.
Environment
The next command should return a line like (Mac/Linux):
/<YOUR-HOME-FOLDER>/anaconda/envs/ztdl/bin/python
or like (Windows 10):
C:\\<YOUR-HOME-FOLDER>\\Anaconda3\\envs\\ztdl\\python.exe
In particular you should make sure that you are using the python executable from within the course environment.
If that's not the case do this:
close this notebook
go to the terminal and stop jupyter notebook
make sure that you have activated the environment, you should see a prompt like:
(ztdl) $
(optional) if you don't see that prompt activate the environment:
mac/linux:
source activate ztdl
windows:
activate ztdl
restart jupyter notebook
End of explanation
import sys
sys.version
Explanation: Python 3.5
The next line should say that you're using Python 3.5.x from Continuum Analytics. At the time of publication it looks like this (Mac/Linux):
3.5.3 |Continuum Analytics, Inc.| (default, Mar 6 2017, 12:15:08) \n[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)
or like this (Windows 10):
3.5.3 |Continuum Analytics, Inc.| (default, May 11 2017, 13:52:01) [MSC v.1900 64 bit (AMD64)]
but date and exact version of GCC may change in the future.
If you see a different version of python, go back to the previous step and make sure you created and activated the environment correctly.
End of explanation
import jupyter
jupyter.__file__
Explanation: Jupyter
Check that Jupyter is running from within the environment. The next line should look like (Mac/Linux):
/<YOUR-HOME-FOLDER>/anaconda/envs/ztdl/lib/python3.5/site-packages/jupyter.py'
or like this (Windows 10):
C:\\Users\\paperspace\\Anaconda3\\envs\\ztdl\\lib\\site-packages\\jupyter.py
End of explanation
import pip
import numpy
import jupyter
import matplotlib
import sklearn
import scipy
import pandas
import PIL
import seaborn
import h5py
import tensorflow
import keras
'''
assert(pip.__version__ == '9.0.1')
assert(numpy.__version__ == '1.12.0')
assert(matplotlib.__version__ == '2.0.0')
assert(sklearn.__version__ == '0.18.1')
assert(scipy.__version__ == '0.19.0')
assert(pandas.__version__ == '0.19.2')
assert(PIL.__version__ == '4.0.0')
assert(seaborn.__version__ == '0.7.1')
assert(h5py.__version__ == '2.7.0')
assert(tensorflow.__version__ == '1.1.0')
assert(keras.__version__ == '2.0.4')
'''
print("Houston we are go!")
Explanation: Other packages
Here we will check that all the packages are installed and have the correct versions. If everything is ok you should see:
Using TensorFlow backend.
Houston we are go!
If there's any issue here please make sure you have checked the previous steps and if it's all good please send us a question in the Q&A forum.
End of explanation |
7,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examining the MPI-Leipzig Mind-Brain-Body Dataset
The MRI data are available at https
Step1: We can investigate what keys are available in any .tsv header by examining the corresponding .json file
Step2: And we can investigate what values are associated with those behavioral keys by examining the .tsv files
Step3: Investigating the range of data available
Provided in this repository is the script find_subjects_behavior_data.py which takes as arguments a list of .tsv files and a list of corresponding behavior keys to examine for each file. It pulls out the data associated with the behavior key given (ex
Step4: Now we have an easy way to see the range of the behavioral data available. A non-static version of the above code is below.
MRI data available
If you'd like to interactively work with find_subjects_data, a non-static version of the code is below | Python Code:
%%bash
ls MPI-Leipzig/behavioral_data_MPILMBB/phenotype | head
Explanation: Examining the MPI-Leipzig Mind-Brain-Body Dataset
The MRI data are available at https://openfmri.org/dataset/ds000221/. The behavioral data are available via NITRC: https://www.nitrc.org/projects/mpilmbb/. Note I was required to edit one file in the NITRC data (phenotype/BDI.json); it was missing a few " marks required to be valid json format.
Behavioral Data available
Each .json file describes the headers of the correspondingly named .tsv file.
End of explanation
%%bash
cat MPI-Leipzig/behavioral_data_MPILMBB/phenotype/BDI.json | head
Explanation: We can investigate what keys are available in any .tsv header by examining the corresponding .json file:
End of explanation
%%bash
head MPI-Leipzig/behavioral_data_MPILMBB/phenotype/BDI.tsv
Explanation: And we can investigate what values are associated with those behavioral keys by examining the .tsv files
End of explanation
#Allow us to import python files in scripts
import sys
sys.path.append('./scripts')
import matplotlib.pyplot as plt
import numpy as np
import find_subjects_behavior_data as fsbd
#Arguments that would normally be passed through the command line call
behavior_files = [
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/BDI.tsv",
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/HADS.tsv",
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/NEO.tsv"
]
behavior_keys = [
"BDI_summary_sum",
"HADS-D_summary_sum",
"NEO_N"
]
#Get data using find_subject_data
subjects, complete_subjects, raw_data, complete_raw_data = fsbd.get_data(behavior_files, behavior_keys)
fsbd.draw_figure(behavior_keys, raw_data, complete_raw_data)
plt.show()
Explanation: Investigating the range of data available
Provided in this repository is the script find_subjects_behavior_data.py which takes as arguments a list of .tsv files and a list of corresponding behavior keys to examine for each file. It pulls out the data associated with the behavior key given (ex: NEO_N) for each subject in the corresponding .tsv file. It provides the function get_data, which returns:
- subjects: a dictionary with the subject names as keys. The values are themselves dictionaries keyed by behavior key name. For example:
subjects['sub-000021'] = {
'BDI_summary_sum':1.0,
'HADS-D_summary_sum':2.0,
'NEO_N':63.0
}
Note the sub-dictionary values will always be floats. If the behavior test was not recorded for that subject, the behavior key will not be present in that subject's dictionary.
- complete_subjects: a dictionary structured as subjects. Only includes subjects that have values for all behavior keys given.
- raw_data: a dictionary keyed by behavior name. The value is a list of floats corresponding to all the entries in the .tsv file for that behavior key.
- complete_raw_data: a dictionary keyed by behavior name. The value is a list of floats corresponding to the entries in the .tsv file for that behavior key that also have values for all other behavior keys. Note each behavior key's value will always be a subset of the behavior key's value in raw_data
When run at the command-line, find_subjects_behavior_data.py will produce a set of box plots. Each box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers. A dotted green line indicates the mean. The first column of box plots plot all available data for a given behavior key. The second column of box plots plot the data for a given behavior key such that the subjects who provide that data also have data for every behavior key (taken from complete_raw_data). A row exists for each behavior key.
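As a sketch of how the returned structures relate (the sample values below are invented for illustration; only 'sub-000021' and the key names come from the text above), complete_subjects is just subjects filtered down to entries that carry every behavior key:

```python
behavior_keys = ['BDI_summary_sum', 'HADS-D_summary_sum', 'NEO_N']

subjects = {
    'sub-000021': {'BDI_summary_sum': 1.0, 'HADS-D_summary_sum': 2.0, 'NEO_N': 63.0},
    'sub-000042': {'BDI_summary_sum': 4.0},  # hypothetical subject missing two tests
}

# keep only subjects that have a value for every behavior key
complete_subjects = {
    name: scores for name, scores in subjects.items()
    if all(key in scores for key in behavior_keys)
}
print(sorted(complete_subjects))  # -> ['sub-000021']
```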
End of explanation
%matplotlib notebook
#Allow us to import python files in scripts
import sys
sys.path.append('./scripts')
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import interactive
import find_subjects_behavior_data as fsbd
#Arguments that would normally be passed through the command line call
behavior_files = [
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/BDI.tsv",
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/HADS.tsv",
"MPI-Leipzig/behavioral_data_MPILMBB/phenotype/NEO.tsv"
]
behavior_keys = [
"BDI_summary_sum",
"HADS-D_summary_sum",
"NEO_N"
]
#Get data using find_subject_data
subjects, complete_subjects, raw_data, complete_raw_data = fsbd.get_data(behavior_files, behavior_keys)
def draw_figure():
fsbd.draw_figure(behavior_keys, raw_data, complete_raw_data)
interactive_plot = interactive(draw_figure)
interactive_plot
Explanation: Now we have an easy way to see the range of the behavioral data available. A non-static version of the above code is below.
MRI data available
If you'd like to interactively work with find_subjects_data, a non-static version of the code is below:
End of explanation |
7,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linearity
We consider a second order system of the form
Step1: Initial position and initial velocity cases
\begin{align}
f(t) = 0, \quad x(0) = 1 m, \quad \dot{x}(0) = 0 \\
f(t) = 0, \quad x(0) = 0, \quad \dot{x}(0) = 1 \frac{m}{sec}
\end{align}
We analytically found these solutions so let's write a function for it.
Step2: Impulse response
We also know the response to an impulse!
Step3: Numerical vs. Analytical responses
Now we compare our analytical solution to a numerical one | Python Code:
%matplotlib inline
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from ipywidgets import interact
import ipywidgets as widgets
Explanation: Linearity
We consider a second order system of the form:
\begin{align}
G(s) = \frac{1}{ms^2 + cs + k}
\end{align}
with
\begin{align}
m = 1 kg, \quad c = 4 \frac{Ns}{m}, \quad k = 5 \frac{N}{m}
\end{align}
Let's see if this idea of linear systems actually applies!
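As a quick numerical check on these parameters (not part of the original notebook): the characteristic polynomial ms^2 + cs + k = s^2 + 4s + 5 has roots -2 ± j, which is exactly where the e^{-2t} cos(t) and e^{-2t} sin(t) terms in the analytical solutions below come from.

```python
import numpy as np

m, c, k = 1.0, 4.0, 5.0
poles = np.roots([m, c, k])   # roots of m*s^2 + c*s + k
print(poles)                  # a complex-conjugate pair at -2 +/- 1j
```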
End of explanation
def x1(t,x0):
x1 = x0*(np.exp(-2*t)*np.cos(t) + 2*np.exp(-2*t)*np.sin(t))
return x1
def x2(t, xd0):
x2 = xd0*(np.exp(-2*t)*np.sin(t))
return x2
Explanation: Initial position and initial velocity cases
\begin{align}
f(t) = 0, \quad x(0) = 1 m, \quad \dot{x}(0) = 0 \\
f(t) = 0, \quad x(0) = 0, \quad \dot{x}(0) = 1 \frac{m}{sec}
\end{align}
We analytically found these solutions so let's write a function for it.
End of explanation
def x3(t,f0):
x3 = f0*(np.exp(-2*t)*np.sin(t))
return x3
Explanation: Impulse response
We also know the response to an impulse!
End of explanation
def pltresp(x0, xd0):
num = 1
den = [1, 4, 5]
time = np.linspace(0,5,100)
sys = signal.TransferFunction(num,den)
t, resp = signal.impulse2(sys,X0=(xd0, x0), T=time)
# compute our analytical approximation
x1resp = x1(time,x0)
x2resp = x2(time,xd0)
x3resp = x3(time,1)
fig, ax = plt.subplots(1,1,figsize=(16,8))
ax.plot(t,resp, label='Numerical')
ax.plot(time,x1resp+x2resp+x3resp,label='Analytical')
ax.set_title('Response')
ax.set_xlabel('Time (sec)')
ax.set_ylabel('Response')
ax.grid(True)
plt.legend()
return 0
_ = interact(pltresp, x0=(0,5,0.1), xd0=(0.0,5,0.1))
Explanation: Numerical vs. Analytical responses
Now we compare our analytical solution to a numerical one
End of explanation |
7,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Node Label Prediction \ Link Prediction
Step1: We will start by node label prediction. Download this network. It contains protein communications in Baker’s yeast. Each node (protein) has a special binary attribute ICSC (intracellular signaling cascade).
Step2: It might not be clear from the picture above but the level of homogeneity is quite high. For each node we are able to compute the average value of label
Step3: Iterative Classification Method
ICM is a kind of nearest-neighbour classifier: we predict the label of an unlabeled node from the majority (largest ratio) label among its neighbours.
Task 1
Randomly set unlabeled nodes.
Implement classification rule of ICM (HINT look at the code cell above)
Implement function for classification error and use it wisely
Step4: Label Propagation
Now, instead of looking only at immediate neighbours, we switch to random walks over all the nodes
Just to recall the Label Propagation method
Step5: Link Prediction - Scoring functions
In this section we will implement some scoring functions for link prediction and compare the values for adjacent and non-adjacent nodes.
Load the French blog network and compute the following scores
Step6: Shortest Path Length
Step7: Number of common neighbours
Step8: Jaccard Score
Step9: Adamic/Adar Score
$Score(a,b) = \sum\limits_{v \in \text{NN}(a) \cap \text{NN}(b)} \frac{1}{\log |\text{NN}(v)|}$
Step10: Preferential Attachment score
$Score(a,b) = |\text{NN}(a)| \times |\text{NN}(b)|$
Step11: Katz Score
$Score(a,b) = \sum\limits_{l=1}^{\infty} \beta^{l}\,|\text{paths}^{(l)}_{a,b}|$
Step12: Let's compare the behaviour of the scores for pairs of nodes with and without an edge in between
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import networkx as nx
%matplotlib inline
Explanation: Node Label Prediction \ Link Prediction
End of explanation
g = nx.read_gml('./data/ppi.CC.gml.txt')
cc = list(nx.connected_components(g))
g = nx.subgraph(g,cc[0])
g = nx.relabel.convert_node_labels_to_integers(g)
labels = np.array(list(nx.get_node_attributes(g, 'ICSC').values()), dtype=float) # list() needed on Python 3
nx.draw_spring(g, node_color = labels)
Explanation: We will start by node label prediction. Download this network. It contains protein communications in Baker’s yeast. Each node (protein) has a special binary attribute ICSC (intracellular signaling cascade).
End of explanation
nnICSC = np.asarray(list(map(lambda v: np.mean(labels[g.neighbors(v)]), g.nodes_iter()))) # 'lambda v:' and list() for Python 3 compatibility
nnICSC
plt.figure(figsize=(10,5))
plt.hist(nnICSC[np.where(labels == 1)], bins=6,)
Explanation: It might not be clear from the picture above, but the level of homogeneity is quite high. For each node we can compute the average label value of its neighbours
End of explanation
lablNN = labels.copy() # .copy(), not [:]: slicing a numpy array returns a view, so the NaNs would leak into labels
idx = np.random.randint(0,len(lablNN), size=40)
lablNN[idx] = np.nan
# Your code here
## Get the adjacency matrix
A = nx.adjacency_matrix( g )
## Find the unclassified nodes
unlabelled = np.isnan( lablNN )
## Slice the adjacency matrix
# B = A[unlabelled].tocsc()[:,~unlabelled].tocsr()
B = A.tocsc()[:,~unlabelled].tocsr()
## Compute the mean label of the labelled neighbours of each unlabelled node.
new_labels = B.dot( lablNN[~unlabelled] ) / B.sum( axis = 1 ).getA1( )
## Update the labels
lablNN[unlabelled] = new_labels[unlabelled]
Explanation: Iterative Classification Method
ICM is a kind of nearest-neighbour classifier: we predict the label of an unlabeled node from the majority (largest ratio) label among its neighbours.
Task 1
Randomly set unlabeled nodes.
Implement classification rule of ICM (HINT look at the code cell above)
Implement function for classification error and use it wisely
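For the classification-error part of the task, one possible reference implementation (the name and signature are my own choice; the notebook leaves this as an exercise):

```python
import numpy as np

def classification_error(true_labels, predicted_labels):
    """Fraction of mismatched labels, ignoring entries still marked NaN."""
    true_labels = np.asarray(true_labels, dtype=float)
    predicted_labels = np.asarray(predicted_labels, dtype=float)
    valid = ~np.isnan(predicted_labels)
    # round fractional neighbour-mean predictions to the nearest class
    return np.mean(true_labels[valid] != np.round(predicted_labels[valid]))

print(classification_error([0, 1, 1, 0], [0, 1, 0, np.nan]))  # 1/3 on the labelled entries
```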
End of explanation
# It is better to initialize like that
fixedLabels = labels[:]+1
curLabels = labels[:]+1
# And indicate labeled nodes instead of unlabeled
idxLabeled = np.zeros((g.number_of_nodes(),), dtype=bool)
idxLabeled[np.random.randint(0,len(labels), size=90)] = True
curLabels[~idxLabeled] = 0
A = nx.adj_matrix( g )
D = sp.sparse.diags( 1.0 / A.sum( axis = 1 ).getA1( ), offsets = 0 )
P = D.dot( A )
def LabelPropagation(G, idxLabeled, curLabels, fixedLabels, iters = 1000):
A = nx.adj_matrix( G ) # use the G argument, not the global g
D = sp.sparse.diags( 1.0 / A.sum( axis = 1 ).getA1( ), offsets = 0 )
P = D.dot( A )
# Your code here
return np.round(resultLabels)
Explanation: Label Propagation
Now, instead of looking only at immediate neighbours, we switch to random walks over all the nodes
Just to recall the Label Propagation method:
1. Compute $P = D^{-1}A$
2. Set $Y^{(0)} = (Y_l,0)$ ($Y_l$ - labeled data)
3. repeat
* $Y^{(t+1)} = PY^{(t)}$
* $Y_l^{(t+1)} = Y_l$
4. until $Y^{(t)}$ converges
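Those four steps can be sketched directly in numpy on a toy graph, independent of the exercise code above (a three-node path graph with the end nodes clamped to labels 0 and 1):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)       # path graph: 0 - 1 - 2
P = A / A.sum(axis=1, keepdims=True)         # step 1: P = D^{-1} A

labels = np.array([0.0, 0.0, 1.0])           # step 2: Y^(0); node 1 is unlabelled
is_labelled = np.array([True, False, True])
clamp = labels.copy()

for _ in range(100):                         # steps 3-4: iterate until stable
    labels = P.dot(labels)                   # Y^(t+1) = P Y^(t)
    labels[is_labelled] = clamp[is_labelled] # Y_l^(t+1) = Y_l

print(labels)  # node 1 settles at 0.5, the mean of its clamped neighbours
```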
End of explanation
g = nx.read_gml('./data/fblog.gml.txt')
vNum = g.number_of_nodes()
def matrixPlot(A):
plt.figure(1, figsize=(6, 6))
plt.imshow(A,
interpolation="none"
)
Explanation: Link Prediction - Scoring functions
In this section we will implement some scoring functions for link prediction and compare the values for adjacent and non-adjacent nodes.
Load the French blog network and compute the following scores:
End of explanation
# Your code here
spath = nx.floyd_warshall_numpy( g )
matrixPlot( spath )
Explanation: Shortest Path Length
End of explanation
# Your code here
A = nx.adjacency_matrix( g )
common_neighbour = A.dot( A ).todense()
matrixPlot( common_neighbour )
Explanation: Number of common neighbours
End of explanation
# Your code here
jaccard_score = np.asarray( [ ( len( np.intersect1d( g[v].keys(), g[u].keys() ) ) + 0.0 ) / len( np.union1d( g[v].keys(), g[u].keys() ) )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( jaccard_score )
Explanation: Jaccard Score
End of explanation
# Your code here
adar_score = np.asarray( [ np.sum( [ 1.0 / np.log( len( g[w].keys() ) ) for w in np.intersect1d( g[v].keys(), g[u].keys() ) ] )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( adar_score )
Explanation: Adamic/Adar Score
$Score(a,b) = \sum\limits_{v \in \text{NN}(a) \cap \text{NN}(b)} \frac{1}{\log |\text{NN}(v)|}$
End of explanation
# Your code here
pref_score = np.asarray( [ len( g[v].keys() ) * len( g[u].keys() )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( pref_score )
Explanation: Preferential Attachment score
$Score(a,b) = |\text{NN}(a)| \times |\text{NN}(b)|$
End of explanation
# Your code here
A = nx.adjacency_matrix( g ).tocsc()
beta = 0.5
I = sp.sparse.eye(*A.shape)
katzScore = ( sp.sparse.linalg.inv( I - beta * A ) - I ).todense()
matrixPlot( katzScore )
Explanation: Katz Score
$Score(a,b) = \sum\limits_{l=1}^{\infty} \beta^{l}\,|\text{paths}^{(l)}_{a,b}|$
End of explanation
A = np.asarray(nx.adj_matrix(g).todense())
xyTriu = np.vstack(np.triu_indices_from(A, k=1)).T
wEdge = [katzScore[xy[0],xy[1]] for xy in xyTriu if A[xy[0],xy[1]]]
woEdge = [katzScore[xy[0],xy[1]] for xy in xyTriu if not A[xy[0],xy[1]]] # 'not', not '~': bitwise NOT of an int is always truthy
data = [wEdge, woEdge]
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10,5))
axes.violinplot(data, showmeans=True)
axes.set_xticklabels(['', 'With Edges', '', 'W/o Edges'])
Explanation: Let's compare the behaviour of the scores for pairs of nodes with and without an edge in between
End of explanation |
7,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Streamfunction and velocity potential from zonal and meridional wind component
windspharm is a Python library developed by
Andrew Dawson which provides a pythonic interface to the pyspharm module, which is essentially a set of bindings to the SPHEREPACK Fortran library
Installation
1) Download and unpack pyspharm
2) Download spherepack from http
Step1: usual imports
Step2: defines a function to plot a 2D field map
Step3: load the wind data using xray | Python Code:
from windspharm.standard import VectorWind
from windspharm.tools import prep_data, recover_data, order_latdim
Explanation: Streamfunction and velocity potential from zonal and meridional wind component
windspharm is a Python library developed by
Andrew Dawson which provides a pythonic interface to the pyspharm module, which is essentially a set of bindings to the SPHEREPACK Fortran library
Installation
1) Download and unpack pyspharm
2) Download spherepack from http://www2.cisl.ucar.edu/resources/legacy/spherepack and unpack
3) copy all the Fortran files in [path_to_spherepack]/spherepack3.2/src to [path_to_pyspharm]/pyspharm-1.0.8/src
4) install pyspharm:
```
pyspharm-1.0.8 ᐅ python setup.py build
pyspharm-1.0.8 ᐅ python setup.py install
```
5) install windspharm
ᐅ pip install windspharm
windspharm has 3 different interfaces:
standard: expects numpy arrays as inputs
cdms: cdms2 objects (cdms2 is a class for opening netcdf files [amongst other formats]) part of the cdat-lite package or UV-CDAT distribution)
iris: iris cubes
We are going to use xray here, and thus use the standard interface, passing the underlying numpy arrays
End of explanation
import os, sys
import pandas as pd
import numpy as np
from numpy import ma
from matplotlib import pyplot as plt
from mpl_toolkits.basemap import Basemap as bm
dpath = os.path.join(os.environ.get('HOME'), 'data/NCEP1')
Explanation: usual imports
End of explanation
def plot_field(X, lat, lon, vmin, vmax, step, cmap=plt.get_cmap('jet'), ax=False, title=False, grid=False):
if not ax:
f, ax = plt.subplots(figsize=(10, (X.shape[0] / float(X.shape[1])) * 10))
m.ax = ax
im = m.contourf(lons, lats, X, np.arange(vmin, vmax+step, step), latlon=True, cmap=cmap, extend='both', ax=ax)
m.drawcoastlines()
if grid:
m.drawmeridians(np.arange(0, 360, 60), labels=[0,0,0,1])
m.drawparallels([-40, 0, 40], labels=[1,0,0,0])
m.colorbar(im)
if title:
ax.set_title(title)
Explanation: defines a function to plot a 2D field map
End of explanation
import xray; print(xray.__version__)
dset_u = xray.open_dataset(os.path.join(dpath,'uwnd.2014.nc'))
dset_v = xray.open_dataset(os.path.join(dpath,'vwnd.2014.nc'))
dset_u = dset_u.sel(level=200)
dset_v = dset_v.sel(level=200)
dset_u = dset_u.mean('time')
dset_v = dset_v.mean('time')
lats = dset_u['lat'].values
lons = dset_u['lon'].values
uwnd = dset_u['uwnd'].values
vwnd = dset_v['vwnd'].values
uwnd, uwnd_info = prep_data(uwnd, 'yx')
vwnd, vwnd_info = prep_data(vwnd, 'yx')
# It is also required that the latitude dimension is north-to-south. Again the
# bundled tools make this easy.
lats, uwnd, vwnd = order_latdim(lats, uwnd, vwnd)
lons, lats = np.meshgrid(lons, lats)
w = VectorWind(uwnd, vwnd)
sf, vp = w.sfvp()
vp = vp * 10e-6
sf = sf * 10e-6
m = bm(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\
llcrnrlon=0,urcrnrlon=360,\
lat_ts=0,resolution='c')
plot_field(sf.squeeze(), lats, lons, -1500, 1500, 100, cmap=plt.get_cmap('RdBu_r'), \
title="Streamfunction at 200 hPa ($10^6$m$^2$s$^{-1}$)", grid=True)
plot_field(vp.squeeze(), lats, lons, -100, 100, 10, cmap=plt.get_cmap('RdBu_r'), \
title="Velocity Potential at 200 hPa ($10^6$m$^2$s$^{-1}$)", grid=True)
Explanation: load the wind data using xray
End of explanation |
7,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PSF Photometry
Version 0.1
We're going to try to piece together the different elements of a PSF photometry pipeline from scratch. Getting that done in one notebook means we'll have to cut some corners, but the process should be illustrative.
We will start with an image that has already been processed by the LSST pipelines, so all the calibration steps like bias subtraction, flat fielding, background subtraction, etc (together often called "instrumental signature removal") have been performed, and the image is ready for measurement.
Please download the calibrated image.
By C Slater (University of Washington)
Step1: 0) Finding an example star
I think a good way to work on a problem like this is to start with the core of the algorithm, working on just a single test case. After we have that working and tested, we can build out the infrastructure around it to run on the entire image.
Let's display a small subset of the image, say 400x400 pixels. By default, imshow() will scale the colorbar to the minimum and maximum pixel values, so let's also set some more reasonable limits so we can see some stars.
We also need to use the extent= keyword argument to imshow() so that the labels on the X and Y axes correspond to the pixel coordinates that we've selected.
You can also open the images in ds9 if you like, for easier browsing.
Step2: Now let's select a smaller region around something that looks like a good, isolated star. Remember to update the extent so we know which pixels we're looking at.
Step3: Ok, we need to cut down the image one more time, this time to give us a "cutout" image of a single star-like object. The cutout should only be about 20x20 pixels.
Step4: 1) Centroiding
Now that we have a test case to work on, let's find its position on the CCD.
To do that, we're going to need two arrays
Step5: Note how the values in a column are the same in xx, and all the values in a row are the same in yy.
Let's make an xx and yy with the values corresponding to the pixel coordinates in your cutout image.
Step6: Now we're ready to compute the centroid. Let's compute it first in x
Step7: Do the values you got make sense? Are they within the range of x and y coordinate values of the cutout? Does it roughly match where the star is? If not, are they possibly swapped, x-for-y and y-for-x? (It's very easy to get confused with the ordering of x and y indices in Numpy, I make that mistake all the time).
If they make sense, try overplotting the coordinates on one of your larger cutout images.
Step8: If your lines cross on your chosen star, great! You've completed the first step of doing photometry, centroiding the object.
Let's take the code you prototyped in the notebook cells, and wrap it into a nice function we can use later. When we call this function, we need to tell it about the coordinates of the image we're providing, so we'll add the x_start and y_start parameters to convey that. We don't need to know the other two corners, because we can figure that out from the size of image_cutout.
Step9: 2) PSF Photometry
We needed the centroid first, because we're going to use that position to place our "PSF" model. Since we have not yet fit a real PSF model to the sources in the image, we'll use a Gaussian as an approximation.
I'll give you the function for a normalized 2D Gaussian
Step10: First just make an image of an example PSF, on the same grid as the cutout.
Note that the Gaussian is parameterized in terms of a radius, which means you will need to compute that radius from the position of every pixel in your image. meshgrid is again the tool for this.
You can either use your centroid() function here, or for debugging it's fine to manually set x_center and y_center to specific values.
Step11: Just to be sure, we should check that the PSF image is normalized (approximately) by summing the pixel values.
Step12: Ok, now we can compute the actual PSF flux. Remember the formula from the lecture is
Step13: Double check that the PSF flux you get matches (approximately) the flux you get from aperture photometry. If your cutout image is small enough that there are no other sources in it, you can just sum the cutout itself. No need to apply a more restrictive aperture for a debugging check like this.
Step14: If your psf_flux reasonably matches your aperture_flux, well done! You have a working PSF photometry measurement, now it just needs to get wrapped up in a convenient function for later use.
Step15: 3) Object Detection
Now that we have the core of the algorithm, we need to improve on our earlier step where we hand-picked a single source to measure.
We know from the talk on object detection that we need to convolve the image with the PSF to detect sources. Of course, we don't yet know what the PSF is, so we'll guess and use a Gaussian again.
With the convolved image, we now need to find "peaks". That is, we want to find pixels whose value is greater than all of their immediate neighbors. That's a relatively easy way to make sure we (mostly) only try to run photometry once on each star.
We are also applying a threshold; if a pixel value is below this threshold, we don't bother checking if it's a peak. That's useful
to exclude faint background fluctuations that aren't statistically significant (below 5-sigma), or we might set the threshold higher if we want only bright stars for PSF determination.
The edges of the sensor often contain various artifacts, so you might want to exclude 5 to 10 pixels around each edge from the search.
Programming note
Step16: To use the peak-finder, we need to create a "detection image" by convolving the real image with the PSF. Of course, we don't know the PSF yet, so you can substitute a guess
Step17: Let's plot the positions of the peaks on the image, to make sure they look reasonable
Step18: A good debugging check is to look at a few cutouts centered on your newly-found detections. You can flip through a few of these by changing the value of n.
Step19: 4) Photometry on all objects
You're almost finished, the only remaining task is to put together all the different pieces from above into one function that finds sources and measures their sizes and fluxes, and outputs a data table at the end.
For the moment, I will tell you that the Gaussian PSF size is 2 pixels. If you have more time, there's an "extra credit" problem at the end of the notebook that will show you how to measure the PSF size directly, which also lets you measure object sizes in general. But try to get the PSF photometry working first before going onto that.
Step20: With that function all filled in, let's run it on the image!
Step21: Did you get a table full of photometry? If so, great! If it's not working well, it's likely to be a problem with getting the right inputs to the different functions you're calling. You've tested all the steps separately, so they should be working. Getting the right indices on your image cutout is always a tricky part.
If you have extra time, try adding an aperture photometry function to the processing. You can plot the size (from the second moment) against flux to find what objects might be galaxies, and generate the cutout image to see if they're really galaxies.
Extra Credit
Step22: Let's run the second moment estimator on one of the cutouts you made above.
Step23: Do the results look reasonable, compared to the image of the cutout you made above? Note that this is the Gaussian width, not the full-width at half-max that is typically quoted for PSF sizes.
If those look good, now we just need to run the second moment estimator over all the sources in your catalog. Our goal is to find if there's one particular size that fits lots of objects; that's likely to be our PSF size and the objects are likely to be stars.
Step24: Because we have second moments in both X and Y directions, we should combine them into a single value as the square root of the sum of squares. | Python Code:
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
import astropy.convolution
import pandas as pd
f = fits.open("calexp-0527247_10.fits")
image = f[1].data
Explanation: PSF Photometry
Version 0.1
We're going to try to piece together the different elements of a PSF photometry pipeline from scratch. Getting that done in one notebook means we'll have to cut some corners, but the process should be illustrative.
We will start with an image that has already been processed by the LSST pipelines, so all the calibration steps like bias subtraction, flat fielding, background subtraction, etc (together often called "instrumental signature removal") have been performed, and the image is ready for measurement.
Please download the calibrated image.
By C Slater (University of Washington)
End of explanation
#Question
plt.imshow(image[ # complete
extent= # complete
vmin=
vmax=
)
Explanation: 0) Finding an example star
I think a good way to work on a problem like this is to start with the core of the algorithm, working on just a single test case. After we have that working and tested, we can build out the infrastructure around it to run on the entire image.
Let's display a small subset of the image, say 400x400 pixels. By default, imshow() will scale the colorbar to the minimum and maximum pixel values, so let's also set some more reasonable limits so we can see some stars.
We also need to use the extent= keyword argument to imshow() so that the labels on the X and Y axes correspond to the pixel coordinates that we've selected.
You can also open the images in ds9 if you like, for easier browsing.
End of explanation
# Question
plt.imshow(image[ # complete
# complete
)
Explanation: Now let's select a smaller region around something that looks like a good, isolated star. Remember to update the extent so we know which pixels we're looking at.
End of explanation
# Question
cutout = image[ # complete
plt.imshow( #complete
Explanation: Ok, we need to cut down the image one more time, this time to give us a "cutout" image of a single star-like object. The cutout should only be about 20x20 pixels.
End of explanation
xx, yy = np.meshgrid(range(2, 10), range(20, 30))
print("xx: ", xx)
print("yy: ", yy)
Explanation: 1) Centroiding
Now that we have a test case to work on, let's find its position on the CCD.
To do that, we're going to need two arrays: one which has the same shape as cutout, but where each value is the X coordinate of the pixel, and another where each value is the Y coordinate of the pixel. Numpy has a function called meshgrid() that will give us this; we just need to supply an iterator for the X values, and an iterator for the Y values. It looks like this:
End of explanation
# Question
xx, yy = np.meshgrid( # complete
Explanation: Note how the values in a column are the same in xx, and all the values in a row are the same in yy.
Let's make an xx and yy with the values corresponding to the pixel coordinates in your cutout image.
End of explanation
# Question
x_center = # complete
y_center = # complete
print(x_center, y_center)
Explanation: Now we're ready to compute the centroid. Let's compute it first in x: we want the weighted mean of xx, with our cutout image as the weights. Remember to normalize by the sum of cutout values. The same formula will apply for y.
End of explanation
# Question
plt.imshow(image[ # complete
# complete
)
plt.axvline(x_center, color='r')
plt.axhline(y_center, color='r')
Explanation: Do the values you got make sense? Are they within the range of x and y coordinate values of the cutout? Does it roughly match where the star is? If not, are they possibly swapped, x-for-y and y-for-x? (It's very easy to get confused with the ordering of x and y indices in Numpy, I make that mistake all the time).
If they make sense, try overplotting the coordinates on one of your larger cutout images.
End of explanation
# Question
def centroid(image_cutout, x_start, y_start):
x_size, y_size = image_cutout.shape
xx, yy = # complete
x_center = # complete
y_center = # complete
return (x_center, y_center)
Explanation: If your lines cross on your chosen star, great! You've completed the first step of doing photometry, centroiding the object.
Let's take the code you prototyped in the notebook cells, and wrap it into a nice function we can use later. When we call this function, we need to tell it about the coordinates of the image we're providing, so we'll add the x_start and y_start parameters to convey that. We don't need to know the other two corners, because we can figure that out from the size of image_cutout.
End of explanation
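As an aside, here is one possible completion of the weighted-mean recipe, demonstrated on a small synthetic image rather than the real cutout. The function name and test values are made up for illustration, and this is a sketch rather than the only valid way to fill in the exercise:

```python
import numpy as np

def centroid_sketch(image_cutout, x_start, y_start):
    # Flux-weighted mean position: sum(weight * coordinate) / sum(weight).
    # Note numpy shapes are (rows, cols), i.e. (y, x).
    y_size, x_size = image_cutout.shape
    xx, yy = np.meshgrid(np.arange(x_start, x_start + x_size),
                         np.arange(y_start, y_start + y_size))
    total = image_cutout.sum()
    x_center = (xx * image_cutout).sum() / total
    y_center = (yy * image_cutout).sum() / total
    return x_center, y_center

# A 5x5 image whose only light is at row 3, column 1, pretending the
# cutout starts at pixel (100, 200) of the full frame
img = np.zeros((5, 5))
img[3, 1] = 2.0
x_c, y_c = centroid_sketch(img, x_start=100, y_start=200)
print(x_c, y_c)  # 101.0 203.0
```

With a single bright pixel the weighted mean collapses onto that pixel's coordinates, which makes for an easy sanity check.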
def gaussian2D(radius, mu):
return 1/(mu**2*2*np.pi)*np.exp(-0.5*((radius)/mu)**2)
Explanation: 2) PSF Photometry
We needed the centroid first, because we're going to use that position to place our "PSF" model. Since we have not yet fit a real PSF model to the sources in the image, we'll use a Gaussian as an approximation.
I'll give you the function for a normalized 2D Gaussian:
End of explanation
xx, yy = np.meshgrid( # complete
x_center, y_center = # complete
radius = np.sqrt(( # complete
+ ( # complete
)
psf_size_pixels = 2.5
psf_image = gaussian2D( # complete
plt.imshow( # Complete
Explanation: First just make an image of an example PSF, on the same grid as the cutout.
Note that the Gaussian is parameterized in terms of a radius, which means you will need to compute that radius from the position of every pixel in your image. meshgrid is again the tool for this.
You can either use your centroid() function here, or for debugging it's fine to manually set x_center and y_center to specific values.
End of explanation
# Question
# Complete
Explanation: Just to be sure, we should check that the PSF image is normalized (approximately) by summing the pixel values.
End of explanation
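For reference, a standalone sketch that checks the normalization claim numerically, reusing the same gaussian2D form on a grid wide enough to capture the tails:

```python
import numpy as np

def gaussian2D(radius, mu):
    return 1 / (mu**2 * 2 * np.pi) * np.exp(-0.5 * (radius / mu) ** 2)

# Evaluate on a grid much wider than mu so the tails are included;
# with unit pixel area the sum approximates the integral
xx, yy = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21))
r = np.sqrt(xx**2 + yy**2)
psf = gaussian2D(r, 2.5)
print(psf.sum())  # should be very close to 1
```

If your own PSF image sums to something far from 1, the most common culprits are a grid that clips the tails or a radius computed from the wrong center.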
# Question
psf_flux = # complete
print(psf_flux)
Explanation: Ok, now we can compute the actual PSF flux. Remember the formula from the lecture is:
$$ f_{\rm ML}(x, y) = \frac{\sum_i \hat{f}_i p_i(x,y)}{\sum_i p_i^2(x, y)}$$
where $\hat{f_i}$ are your image values, and $p_i$ are your PSF model values.
End of explanation
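A quick way to convince yourself the formula behaves sensibly: on a noiseless star that is exactly a scaled copy of the PSF model, the estimator returns the scale factor. A self-contained sketch on synthetic data, not the real cutout:

```python
import numpy as np

def gaussian2D(radius, mu):
    return 1 / (mu**2 * 2 * np.pi) * np.exp(-0.5 * (radius / mu) ** 2)

xx, yy = np.meshgrid(np.arange(-10, 11), np.arange(-10, 11))
r = np.sqrt(xx**2 + yy**2)
psf = gaussian2D(r, 2.0)

true_flux = 5000.0
data = true_flux * psf  # noiseless star with a known total flux

# f_ML = sum(f_i * p_i) / sum(p_i**2); with data = F * p this is exactly F
psf_flux = (data * psf).sum() / (psf**2).sum()
print(psf_flux)  # recovers 5000.0 up to floating point
```

With noise added, the estimate scatters around the true flux instead of matching it exactly, which is the behaviour you should expect on the real image.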
# Question
aperture_flux = # complete
print(aperture_flux)
Explanation: Double check that the PSF flux you get matches (approximately) the flux you get from aperture photometry. If your cutout image is small enough that there are no other sources in it, you can just sum the cutout itself. No need to apply a more restrictive aperture for a debugging check like this.
End of explanation
# Question
# We need to pass both the centroid x and y, and the image cutout start x,y because the star
# isn't necessarily at the very center of the cutout.
def psf_flux_gaussian(image_cutout, centroid_x, centroid_y, radius, x_start, y_start):
x_size, y_size = # complete
xx, yy = # complete
r = # complete
psf_image = # complete
psf_flux = # complete
return psf_flux
Explanation: If your psf_flux reasonably matches your aperture_flux, well done! You have a working PSF photometry measurement, now it just needs to get wrapped up in a convenient function for later use.
End of explanation
# Question
def find_peaks(image, threshold):
# We are going to append the peaks we find to these two lists
peak_x_values = []
peak_y_values = []
for i in # complete
for j in # complete
pixel = image[i,j]
# We want to skip over pixels that are below our threshold
if(pixel # complete
# We want to save pixel coordinates if the pixel is a "peak"
if(pixel > # Complete
and pixel > # complete
and
# complete
):
# complete
# Now that we're done appending to them, it will be easier if we turn the
# lists into numpy arrays.
return np.array(peak_x_values), np.array(peak_y_values)
Explanation: 3) Object Detection
Now that we have the core of the algorithm, we need to improve on our earlier step where we hand-picked a single source to measure.
We know from the talk on object detection that we need to convolve the image with the PSF to detect sources. Of course, we don't yet know what the PSF is, so we'll guess and use a Gaussian again.
With the convolved image, we now need to find "peaks". That is, we want to find pixels whose value is greater than all of their immediate neighbors. That's a relatively easy way to make sure we (mostly) only try to run photometry once on each star.
We are also applying a threshold; if a pixel value is below this threshold, we don't bother checking if it's a peak. That's useful
to exclude faint background fluctuations that aren't statistically significant (below 5-sigma), or we might set the threshold higher if we want only bright stars for PSF determination.
The edges of the sensor often contain various artifacts, so you might want to exclude 5 to 10 pixels around each edge from the search.
Programming note: we're going to do a python loop over all the pixels in the image. This is a really slow way to do this, and you should try to avoid loops like this as much as possible in python. We're doing it this way only because 1) it's illustrative and 2) it takes less than a minute; acceptable for a notebook, but not how we process LSST.
End of explanation
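The neighbour-comparison logic described above can be sketched on a toy array. This is one possible shape for the loop, not the only valid completion of the exercise:

```python
import numpy as np

def find_peaks_sketch(image, threshold):
    # A peak is above the threshold and brighter than its four
    # immediate neighbours (up, down, left, right)
    peak_x, peak_y = [], []
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            pixel = image[i, j]
            if pixel < threshold:
                continue
            if (pixel > image[i - 1, j] and pixel > image[i + 1, j]
                    and pixel > image[i, j - 1] and pixel > image[i, j + 1]):
                peak_x.append(j)  # column index plays the role of x
                peak_y.append(i)  # row index plays the role of y
    return np.array(peak_x), np.array(peak_y)

toy = np.zeros((7, 7))
toy[2, 3] = 10.0  # a bright, isolated "star"
toy[5, 5] = 3.0   # below the detection threshold, should be ignored
px, py = find_peaks_sketch(toy, threshold=5.0)
print(px, py)  # [3] [2]
```

Keeping the x (column) and y (row) lists separate, and returning each as its own array, is exactly where it is easiest to introduce a copy-paste bug, so it is worth checking on a toy case like this.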
%%time
# Question
convolved_image = astropy.convolution.convolve( # complete
peak_x_values, peak_y_values = # complete
Explanation: To use the peak-finder, we need to create a "detection image" by convolving the real image with the PSF. Of course, we don't know the PSF yet, so you can substitute a guess: try a Gaussian kernel, with a 2.5 pixel width.
The %%time "magic" will show us how long the convolution and peak-finding took.
End of explanation
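If astropy is not at hand, a detection image can be approximated with a separable Gaussian convolution in plain NumPy. This is a sketch; astropy.convolution.convolve treats edges and NaN pixels more carefully:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Normalized 1-D Gaussian kernel
    if radius is None:
        radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth(image, sigma):
    # A 2-D Gaussian is separable: one 1-D convolution per axis
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, image, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

# Smoothing a single hot pixel reproduces the kernel itself
delta = np.zeros((21, 21))
delta[10, 10] = 1.0
detection_image = smooth(delta, 2.5)
```

Because the kernel is normalized, the smoothed image conserves total flux (away from the edges), and the maximum stays at the hot pixel, which is a useful debugging check for any convolution routine.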
# Question
plt.plot( # Complete
Explanation: Let's plot the positions of the peaks on the image, to make sure they look reasonable
End of explanation
# question
n = 50
peak_x = peak_x_values[n]
peak_y = peak_y_values[n]
cutout = image[ # complete
plt.imshow(cutout)
Explanation: A good debugging check is to look at a few cutouts centered on your newly-found detections. You can flip through a few of these by changing the value of n.
End of explanation
# Question
def run_photometry(image, threshold, psf_width):
# Detect your sources
# Setup any variables you need to store results.
for # complete
# Measure the centroid
# Measure the flux
# Measure the moments
# Let's return a pandas DataFrame to make it easy to use the results
return pd.DataFrame( # complete
Explanation: 4) Photometry on all objects
You're almost finished, the only remaining task is to put together all the different pieces from above into one function that finds sources and measures their sizes and fluxes, and outputs a data table at the end.
For the moment, I will tell you that the Gaussian PSF size is 2 pixels. If you have more time, there's an "extra credit" problem at the end of the notebook that will show you how to measure the PSF size directly, which also lets you measure object sizes in general. But try to get the PSF photometry working first before going onto that.
End of explanation
%%time
# Question
photometry_table = run_photometry( # complete
print(photometry_table)
print(photometry_table[:20])
Explanation: With that function all filled in, let's run it on the image!
End of explanation
# Question
def second_moment(image_cutout, centroid_x, centroid_y, start_x, start_y):
x_size, y_size = # complete
xx, yy = # complete
x_width = # complete
y_width = # complete
return (x_width, y_width)
Explanation: Did you get a table full of photometry? If so, great! If it's not working well, it's likely to be a problem with getting the right inputs to the different functions you're calling. You've tested all the steps separately, so they should be working. Getting the right indices on your image cutout is always a tricky part.
If you have extra time, try adding an aperture photometry function to the processing. You can plot the size (from the second moment) against flux to find what objects might be galaxies, and generate the cutout image to see if they're really galaxies.
Extra Credit: Measuring the PSF
Once we have sources identified in an image, we want to identify which would be good for PSF determination, and then we want to measure their PSFs. In our case we're going to do both of these at once, we're going to measure sizes for all sources, and then use the mean size of those which we think are stars as our PSF model. In a more sophisticated pipeline, the object sizes might be used as a cut before passing to some more complicated PSF determination process.
To obtain object sizes, we're going to measure the "second moment".
This will look a lot like the centroid algorithm. The formula we want to implement is:
$$I_{xx}^2 = \frac{\sum_i \left(\hat{f_i} (x_i - x_{\rm center})\right)^2}{\sum_i \hat{f_i}^2} $$
Let's try building it directly in the function this time; if it gives you trouble, feel free to try it out in some notebook cells directly (so you can see the intermediate variables better) before putting it back in the function.
End of explanation
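Here is a sketch that implements the quoted formula on a synthetic Gaussian star. One fact that is handy for checking an implementation: with the squared weights used above, a Gaussian of width sigma comes out as sigma divided by the square root of two:

```python
import numpy as np

def second_moment_sketch(image_cutout, centroid_x, centroid_y, start_x, start_y):
    # Implements I_xx = sqrt( sum((f_i * (x_i - x_c))**2) / sum(f_i**2) )
    y_size, x_size = image_cutout.shape
    xx, yy = np.meshgrid(np.arange(start_x, start_x + x_size),
                         np.arange(start_y, start_y + y_size))
    norm = (image_cutout**2).sum()
    x_width = np.sqrt(((image_cutout * (xx - centroid_x))**2).sum() / norm)
    y_width = np.sqrt(((image_cutout * (yy - centroid_y))**2).sum() / norm)
    return x_width, y_width

# Synthetic Gaussian star of width sigma = 2 pixels, centred on the grid
sigma = 2.0
gx, gy = np.meshgrid(np.arange(-12, 13), np.arange(-12, 13))
star = np.exp(-0.5 * (gx**2 + gy**2) / sigma**2)
wx, wy = second_moment_sketch(star, 0.0, 0.0, -12, -12)
# Squaring the weights narrows them, so the estimator returns sigma/sqrt(2)
print(wx, wy)  # both close to 1.414
```

Seeing the same value in x and y for a round synthetic star is the first thing to verify before trusting the sizes on real sources.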
# Question
second_moment(cutout, # complete
Explanation: Let's run the second moment estimator on one of the cutouts you made above.
End of explanation
%%time
# Question
# We will put the x and y moments in these lists
moments_x = []
moments_y = []
for peak_x, peak_y in # complete
image_cutout = image[ # complete
start_x = int( # complete
start_y = int( # complete
centroid_x, centroid_y = # complete
moment_x, moment_y = second_moment( # complete
moments_x.append( # complete
moments_y.append( # complete
Explanation: Do the results look reasonable, compared to the image of the cutout you made above? Note that this is the Gaussian width, not the full-width at half-max that is typically quoted for PSF sizes.
If those look good, now we just need to run the second moment estimator over all the sources in your catalog. Our goal is to find if there's one particular size that fits lots of objects; that's likely to be our PSF size and the objects are likely to be stars.
End of explanation
# Question
moments_sq = # complete
plt.hist( # complete
plt.xlabel("Second Moment (pixels)")
Explanation: Because we have second moments in both X and Y directions, we should combine them into a single value as the square root of the sum of squares.
End of explanation |
7,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ISL Lab 2.3 Introduction to Statistical Computing with Python
Step1: 2.3.1 Basic Commands
Step2: 2.3.2 Graphics
Step3: Have to dig back into MatPlotLib to set axis labels, so all is not perfect.
Step4: Adding a title is a little more annoying, per Stack Overflow explanation of adding a title to a Seaborn plot. There are more complex explanations that work with multiple subplots.
Step5: Saving an image to a file is also pretty straightforward using savefig from PyPlot.
Step6: np.linspace makes equally spaced steps between the start and end
Step7: A contour plot needs a 2D array of z values (x,y) -> f(x,y).
The hard part is getting the inputs to the function, or convincing f not to vectorize over x,y in parallel.
Step8: imshow shows an image, like the R command image.
Surely there is a way to get the coordinates input as well as the z, but in practice a regular grid seems most likely.
Step9: 3D Rendering
Step10: 2.3.3 Indexing Data
Step11: Beware if following code in the book. R indices start at 1, while Python indices start at 0.
Step12: If you combine the two in one set of brackets, they are traversed in parallel, getting you a[0,1] and a[2,3].
Step13: When you want a sub-array, index twice.
Step14: The ix_ function makes grids out of indices that you give it. Clearer for this!
Step15: Note
Step16: Dropping columns is not as convenient in Python.
Step17: 2.3.4 Loading Data
Note
Step18: Get rid of any rows with missing data. This is not always a good idea.
Step19: 2.3.5 Graphical and Numerical Summaries
Step20: I am not aware of a way to interactively identify points on a matplotlib plot that is similar to the R command identify.
Step21: Miscellaneous Notes
Categorical data can be constructed using astype('category') in Pandas. Read more about categorical data if you need the information.
Step22: Homework Starter
Easy access to ISL datasets if you have internet access. | Python Code:
import numpy as np
import pandas as pd
import scipy
import scipy.stats
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.simplefilter('ignore',FutureWarning)
Explanation: ISL Lab 2.3 Introduction to Statistical Computing with Python
End of explanation
np.arange(6)
a = np.arange(6)
b = a.reshape((2,3))
b
np.sqrt(b)
rnorm = scipy.stats.norm(loc=0,scale=1) # mean = loc = 0, standard_deviation = scale = 1
x = rnorm.rvs(size=50)
err = scipy.stats.norm(loc=50, scale=0.1)
y = x + err.rvs(size=50)  # y is x plus small noise, as in the ISL R lab, so x and y are strongly correlated
np.corrcoef(x,y)
np.random.seed(1303)
rnorm.rvs(size=8)
# Notice - same random numbers all of the time
np.random.seed(3)
y = rnorm.rvs(size=100)
np.mean(y)
np.var(y)
np.sqrt(np.var(y))
np.std(y)
Explanation: 2.3.1 Basic Commands
End of explanation
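Two translation details are worth keeping in mind when moving between R and NumPy: np.corrcoef returns the full correlation matrix rather than a scalar, and np.var/np.std default to the population convention (ddof=0), while R's var and sd use the sample convention (ddof=1). A quick sketch, assuming nothing beyond NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = x + rng.normal(loc=50, scale=0.1, size=50)

# Pearson r by hand matches the off-diagonal of np.corrcoef
r = ((x - x.mean()) * (y - y.mean())).sum() / np.sqrt(
    ((x - x.mean())**2).sum() * ((y - y.mean())**2).sum())
print(r, np.corrcoef(x, y)[0, 1])

# R-style sample variance needs ddof=1
print(np.var(y), np.var(y, ddof=1))
```

For n = 50 the ddof difference is small, but it is a classic source of tiny discrepancies when checking Python results against R output.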
x = rnorm.rvs(size=100)
y = rnorm.rvs(size=100)
ax = sns.scatterplot(x,y);
Explanation: 2.3.2 Graphics
End of explanation
ax = sns.scatterplot(x,y);
ax.set(xlabel="the x-axis",ylabel="the y-axis")
plt.show()
Explanation: Have to dig back into MatPlotLib to set axis labels, so all is not perfect.
End of explanation
ax = sns.scatterplot(x,y);
ax.set_xlabel('independent var')
ax.set_ylabel('dependent var')
ax.set_title('Massive Title')
plt.show();
Explanation: Adding a title is a little more annoying, per Stack Overflow explanation of adding a title to a Seaborn plot. There are more complex explanations that work with multiple subplots.
End of explanation
ax = sns.scatterplot(x,y);
ax.set_title('Save this plot')
plt.savefig('unlabeled-axes.png');
# ugliness to avoid showing figure:
fig = plt.gcf()
plt.close(fig)
Explanation: Saving an image to a file is also pretty straightforward using savefig from PyPlot.
End of explanation
x = np.linspace(-np.pi,np.pi,50)
Explanation: np.linspace makes equally spaced steps between the start and end
End of explanation
x = np.linspace(-np.pi,np.pi,50)
y = x # for clarity only
xx,yy = np.meshgrid(x,y)
def fbasic(x,y): return np.cos(y) / (1+x**2)
f = np.vectorize(lambda x,y: np.cos(y) / (1+x**2))
z = f(xx,yy)
plt.contour(z);
plt.contour(z,45);
def g1(x,y): return (fbasic(x,y)+fbasic(y,x))/2
g = np.vectorize(g1)
z2 = g(xx,yy)
plt.contour(z2,15);
Explanation: A contour plot needs a 2D array of z values (x,y) -> f(x,y).
The hard part is getting the inputs to the function, or convincing f not to vectorize over x,y in parallel.
End of explanation
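An alternative to meshgrid plus np.vectorize is broadcasting, which evaluates the same grid without a Python-level loop. A sketch for comparison:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 50)
y = x

# A column vector against a row vector broadcasts to a (50, 50) grid
z_broadcast = np.cos(y[:, None]) / (1 + x[None, :]**2)

# Same values as the meshgrid route
xx, yy = np.meshgrid(x, y)
z_mesh = np.cos(yy) / (1 + xx**2)
print(np.allclose(z_broadcast, z_mesh))  # True
```

Broadcasting avoids building the xx/yy arrays at all, and np.vectorize (which runs a Python loop internally) becomes unnecessary when the function is written in terms of array operations.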
randompix = np.random.random((16, 16))
plt.imshow(randompix);
Explanation: imshow shows an image, like the R command image.
Surely there is a way to get the coordinates input as well as the z, but in practice a regular grid seems most likely.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(xx,yy,z, cmap=cm.coolwarm);
plt.show();
Explanation: 3D Rendering
End of explanation
a = np.arange(1,17).reshape((4,4)).T # matches R example
print(a)
a[1,2]
Explanation: 2.3.3 Indexing Data
End of explanation
a[[0,2],[1,3]]
a[[0,2],:]
a[:,[1,3]]
Explanation: Beware if following code in the book. R indices start at 1, while Python indices start at 0.
End of explanation
a[[0,2],[1,3]]
Explanation: If you combine the two in one set of brackets, they are traversed in parallel, getting you a[0,1] and a[2,3].
End of explanation
a[[0,2],:][:,[1,3]]
Explanation: When you want a sub-array, index twice.
End of explanation
a[np.ix_([0,2],[1,3])]
Explanation: The ix_ function makes grids out of indices that you give it. Clearer for this!
End of explanation
a[np.ix_(np.arange(0,3),np.arange(1,4))]
a[[0,1],]
a[:,[0,1]]
a[1,]
Explanation: Note: R ranges include the last item, Python ranges do not.
End of explanation
b = np.delete(a,[0,2],0)
b
c = np.delete(b,[0,2,3],1)
c
a.shape
Explanation: Dropping columns is not as convenient in Python.
End of explanation
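One alternative sketch: boolean masks read a little closer to R's negative indexing than np.delete does:

```python
import numpy as np

a = np.arange(1, 17).reshape((4, 4)).T

# Drop rows 0 and 2, then columns 0, 2, and 3, via keep-masks
row_keep = np.ones(a.shape[0], dtype=bool)
row_keep[[0, 2]] = False          # roughly R's a[-c(1, 3), ]
col_keep = np.ones(a.shape[1], dtype=bool)
col_keep[[0, 2, 3]] = False
c = a[row_keep][:, col_keep]
print(c)
```

The result matches the np.delete route above, and the masks make it explicit which rows and columns survive.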
#auto = pd.read_table("Auto.data")
auto = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Auto.csv")
Explanation: 2.3.4 Loading Data
Note: To get the data from a preloaded R dataset, I do write.table(the_data, file="whatever", sep="\t") in R.
Cool fact: read_table can load straight from a URL.
End of explanation
auto = auto.dropna()
auto.shape
auto.columns
Explanation: Get rid of any rows with missing data. This is not always a good idea.
End of explanation
sns.scatterplot(auto['cylinders'], auto['mpg']);
sns.boxplot(x="cylinders", y="mpg", data=auto);
sns.stripplot(x="cylinders", y="mpg", data=auto);
sns.distplot(auto['mpg']);
sns.distplot(auto['mpg'],bins=15, kde=False, vertical=True);
sns.pairplot(data=auto);
sns.pairplot(data=auto[['mpg','displacement','horsepower',
'weight','acceleration']]);
Explanation: 2.3.5 Graphical and Numerical Summaries
End of explanation
auto.describe()
auto['name'].value_counts().head()
auto['mpg'].describe()
Explanation: I am not aware of a way to interactively identify points on a matplotlib plot that is similar to the R command identify.
End of explanation
auto['cylinders'] = auto['cylinders'].astype('category')
auto['cylinders'].describe()
Explanation: Miscellaneous Notes
Categorical data can be constructed using astype('category') in Pandas. Read more about categorical data if you need the information.
End of explanation
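A minimal sketch of what astype('category') gives you, on a toy Series rather than the auto data:

```python
import pandas as pd

s = pd.Series([4, 6, 4, 8, 6]).astype('category')
print(s.cat.categories)         # the distinct levels, sorted: 4, 6, 8
print(s.cat.codes.tolist())     # each element replaced by its level's code
```

Under the hood each element is stored as a small integer code into the categories index, which is what makes categoricals memory-efficient and what describe() summarizes.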
college = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/College.csv")
Explanation: Homework Starter
Easy access to ISL datasets if you have internet access.
End of explanation |
7,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 20
Copyright 2017 Allen Downey
License
Step1: Dropping pennies
I'll start by getting the units we need from Pint.
Step2: And defining the initial state.
Step3: Acceleration due to gravity is about 9.8 m / s$^2$.
Step4: I'll start with a duration of 10 seconds and step size 0.1 second.
Step5: Now we make a System object.
Step7: And define the slope function.
Step8: It's always a good idea to test the slope function with the initial conditions.
Step9: Now we're ready to call run_ode_solver
Step10: Here are the results
Step11: And here's position as a function of time
Step12: Onto the sidewalk
To figure out when the penny hit the sidewalk, we can use crossings, which finds the times where a Series passes through a given value.
Step13: For this example there should be just one crossing, the time when the penny hits the sidewalk.
Step14: We can compare that to the exact result. Without air resistance, we have
$v = -g t$
and
$y = 381 - g t^2 / 2$
Setting $y=0$ and solving for $t$ yields
$t = \sqrt{\frac{2 y_{init}}{g}}$
Step16: The estimate is accurate to about 9 decimal places.
Events
Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. run_ode_solver provides exactly the tool we need, event functions.
Here's an event function that returns the height of the penny above the sidewalk
Step17: And here's how we pass it to run_ode_solver. The solver should run until the event function returns 0, and then terminate.
Step18: The message from the solver indicates the solver stopped because the event we wanted to detect happened.
Here are the results
Step19: With the events option, the solver returns the actual time steps it computed, which are not necessarily equally spaced.
The last time step is when the event occurred
Step20: The result is accurate to about 4 decimal places.
We can also check the velocity of the penny when it hits the sidewalk
Step21: And convert to kilometers per hour.
Step22: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
Under the hood
Here is the source code for crossings so you can see what's happening under the hood
Step23: The documentation of InterpolatedUnivariateSpline is here.
Exercises
Exercise | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 20
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
m = UNITS.meter
s = UNITS.second
Explanation: Dropping pennies
I'll start by getting the units we need from Pint.
End of explanation
init = State(y=381 * m,
v=0 * m/s)
Explanation: And defining the initial state.
End of explanation
g = 9.8 * m/s**2
Explanation: Acceleration due to gravity is about 9.8 m / s$^2$.
End of explanation
t_end = 10 * s
dt = 0.1 * s
Explanation: I'll start with a duration of 10 seconds and step size 0.1 second.
End of explanation
system = System(init=init, g=g, t_end=t_end, dt=dt)
Explanation: Now we make a System object.
End of explanation
def slope_func(state, t, system):
"""Compute derivatives of the state.
state: position, velocity
t: time
system: System object containing `g`
returns: derivatives of y and v
"""
y, v = state
g = system.g
dydt = v
dvdt = -g
return dydt, dvdt
Explanation: And define the slope function.
End of explanation
dydt, dvdt = slope_func(system.init, 0, system)
print(dydt)
print(dvdt)
Explanation: It's always a good idea to test the slope function with the initial conditions.
End of explanation
results, details = run_ode_solver(system, slope_func)
details
Explanation: Now we're ready to call run_ode_solver
End of explanation
results.head()
results.tail()
Explanation: Here are the results:
End of explanation
def plot_position(results):
    plot(results.y, label='y')
    decorate(xlabel='Time (s)',
             ylabel='Position (m)')

plot_position(results)
savefig('figs/chap20-fig01.pdf')
Explanation: And here's position as a function of time:
End of explanation
t_crossings = crossings(results.y, 0)
Explanation: Onto the sidewalk
To figure out when the penny hit the sidewalk, we can use crossings, which finds the times where a Series passes through a given value.
End of explanation
t_sidewalk = t_crossings[0] * s
Explanation: For this example there should be just one crossing, the time when the penny hits the sidewalk.
End of explanation
sqrt(2 * init.y / g)
Explanation: We can compare that to the exact result. Without air resistance, we have
$v = -g t$
and
$y = 381 - g t^2 / 2$
Setting $y=0$ and solving for $t$ yields
$t = \sqrt{\frac{2 y_{init}}{g}}$
End of explanation
def event_func(state, t, system):
    """Return the height of the penny above the sidewalk."""
    y, v = state
    return y
Explanation: The estimate is accurate to about 9 decimal places.
Events
Instead of running the simulation until the penny goes through the sidewalk, it would be better to detect the point where the penny hits the sidewalk and stop. run_ode_solver provides exactly the tool we need, event functions.
Here's an event function that returns the height of the penny above the sidewalk:
End of explanation
results, details = run_ode_solver(system, slope_func, events=event_func)
details
Explanation: And here's how we pass it to run_ode_solver. The solver should run until the event function returns 0, and then terminate.
End of explanation
results.tail()
Explanation: The message from the solver indicates the solver stopped because the event we wanted to detect happened.
Here are the results:
End of explanation
t_sidewalk = get_last_label(results) * s
Explanation: With the events option, the solver returns the actual time steps it computed, which are not necessarily equally spaced.
The last time step is when the event occurred:
End of explanation
v_sidewalk = get_last_value(results.v)
Explanation: The result is accurate to about 4 decimal places.
We can also check the velocity of the penny when it hits the sidewalk:
End of explanation
km = UNITS.kilometer
h = UNITS.hour
v_sidewalk.to(km / h)
Explanation: And convert to kilometers per hour.
End of explanation
source_code(crossings)
Explanation: If there were no air resistance, the penny would hit the sidewalk (or someone's head) at more than 300 km/h.
So it's a good thing there is air resistance.
Under the hood
Here is the source code for crossings so you can see what's happening under the hood:
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: The documentation of InterpolatedUnivariateSpline is here.
Exercises
Exercise: Here's a question from the web site Ask an Astronomer:
"If the Earth suddenly stopped orbiting the Sun, I know eventually it would be pulled in by the Sun's gravity and hit it. How long would it take the Earth to hit the Sun? I imagine it would go slowly at first and then pick up speed."
Use run_ode_solver to answer this question.
Here are some suggestions about how to proceed:
Look up the Law of Universal Gravitation and any constants you need. I suggest you work entirely in SI units: meters, kilograms, and Newtons.
When the distance between the Earth and the Sun gets small, this system behaves badly, so you should use an event function to stop when the surface of Earth reaches the surface of the Sun.
Express your answer in days, and plot the results as millions of kilometers versus days.
If you read the reply by Dave Rothstein, you will see other ways to solve the problem, and a good discussion of the modeling decisions behind them.
You might also be interested to know that it's actually not that easy to get to the Sun.
End of explanation |
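As a rough pure-Python sanity check of this exercise (one possible approach, not the `run_ode_solver` solution the book intends), we can integrate the radial fall with a simple Euler-Cromer scheme, using standard SI constants, and stop when the two surfaces touch:

```python
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_sun = 1.989e30     # mass of the Sun, kg
R_sun = 6.96e8       # radius of the Sun, m
R_earth = 6.371e6    # radius of the Earth, m
r0 = 1.496e11        # initial Earth-Sun distance (1 AU), m

def fall_time_days(dt=100.0):
    """Euler-Cromer integration of the radial fall; returns elapsed days."""
    r, v, t = r0, 0.0, 0.0
    while r > R_sun + R_earth:        # stop when the surfaces touch
        a = -G * M_sun / r**2         # Newtonian gravity, pointed inward
        v += a * dt                   # update velocity first (Euler-Cromer)
        r += v * dt
        t += dt
    return t / 86400.0

fall_time_days()   # roughly 65 days (the closed-form answer is about 64.6)
```

The step size and the surface-to-surface stopping condition are assumptions of this sketch; an event function in `run_ode_solver` plays the same role as the `while` condition here.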
7,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download and process the Bay Area's street network
Step1: Download and extract the counties shapefile if it doesn't already exist, then load it
To use OSMnx, we need a polygon of the Bay Area's nine counties. So, we'll download a shapefile from the census, extract our counties, and take the union to form a polygon. Also, project the polygon so we can calculate its area for density stats.
Step2: Download the street network
Now we've got our polygon. Use OSMnx to download the street network (drivable roads).
Step3: Filter the network to retain only tertiary streets and up
Including "unclassified" and "road" types as they often serve as null values.
Step4: Filter the network to retain only secondary streets and up
Also discard "unclassified" and "road" types.
Step5: Finally
Step6: See some stats on the coarsest network | Python Code:
import os, zipfile, requests, pandas as pd, geopandas as gpd, osmnx as ox
ox.config(use_cache=True, log_console=True)
# point to the shapefile for counties
counties_shapefile_url = 'http://www2.census.gov/geo/tiger/GENZ2016/shp/cb_2016_us_county_500k.zip'
# identify bay area counties by fips code
bayarea = {'Alameda':'001',
'Contra Costa':'013',
'Marin':'041',
'Napa':'055',
'San Francisco':'075',
'San Mateo':'081',
'Santa Clara':'085',
'Solano':'095',
'Sonoma':'097'}
Explanation: Download and process the Bay Area's street network
End of explanation
counties_shapefile_zip = counties_shapefile_url[counties_shapefile_url.rfind('/') + 1 :]
counties_shapefile_dir = counties_shapefile_zip[: counties_shapefile_zip.rfind('.zip')]
if not os.path.exists(counties_shapefile_dir):
    response = requests.get(counties_shapefile_url)
    with open(counties_shapefile_zip, 'wb') as f:
        f.write(response.content)
    with zipfile.ZipFile(counties_shapefile_zip, 'r') as zip_file:
        zip_file.extractall(counties_shapefile_dir)
    os.remove(counties_shapefile_zip)
counties = gpd.read_file(counties_shapefile_dir)
len(counties)
# retain only those tracts that are in the bay area counties
mask = (counties['STATEFP'] == '06') & (counties['COUNTYFP'].isin(bayarea.values()))
gdf_bay = counties[mask]
len(gdf_bay)
bayarea_polygon = gdf_bay.unary_union
bayarea_polygon
# get the convex hull, otherwise we'll cut out bridges over the bay
bayarea_polygon = bayarea_polygon.convex_hull
bayarea_polygon_proj, crs = ox.project_geometry(bayarea_polygon)
Explanation: Download and extract the counties shapefile if it doesn't already exist, then load it
To use OSMnx, we need a polygon of the Bay Area's nine counties. So, we'll download a shapefile from the census, extract our counties, and take the union to form a polygon. Also, project the polygon so we can calculate its area for density stats.
End of explanation
# do not simplify yet, we'll strip out unwanted local streets first
G = ox.graph_from_polygon(bayarea_polygon, network_type='drive', simplify=False)
print(len(G.nodes()))
print(len(G.edges()))
G1 = G.copy()
# retain only the largest connected component subgraph, then simplify the graph
G1 = ox.remove_isolated_nodes(G1)
G1_connected = ox.get_largest_component(G1, strongly=False)
G1_simp = ox.simplify_graph(G1_connected, strict=True)
print(len(G1_simp.nodes()))
print(len(G1_simp.edges()))
Explanation: Download the street network
Now we've got our polygon. Use OSMnx to download the street network (drivable roads).
End of explanation
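The "retain only the largest connected component" step used above can be illustrated conceptually with a toy pure-Python BFS over an adjacency dict — a sketch only, not OSMnx's actual implementation:

```python
from collections import deque

def largest_component(adj):
    """Return the largest connected set of nodes in an undirected adjacency dict."""
    seen, best = set(), set()
    for start in adj:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in comp:
                    comp.add(nbr)
                    queue.append(nbr)
        seen |= comp
        best = max(best, comp, key=len)
    return best

# two components: {a, b, c} and {d, e}
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b'], 'd': ['e'], 'e': ['d']}
largest_component(adj)   # {'a', 'b', 'c'}
```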
G2 = G.copy()
# identify all the edge types we want to retain
types = ['motorway', 'motorway_link', 'trunk', 'trunk_link',
'primary', 'primary_link', 'secondary', 'secondary_link',
'tertiary', 'tertiary_link', 'unclassified', 'road']
minor_streets = [(u, v, k) for u, v, k, d in G2.edges(keys=True, data=True) if d['highway'] not in types]
# remove minor streets, retain only the largest connected component subgraph, then simplify the graph
G2.remove_edges_from(minor_streets)
G2 = ox.remove_isolated_nodes(G2)
G2_connected = ox.get_largest_component(G2, strongly=False)
G2_simp = ox.simplify_graph(G2_connected, strict=True)
print(len(G2_simp.nodes()))
print(len(G2_simp.edges()))
Explanation: Filter the network to retain only tertiary streets and up
Including "unclassified" and "road" types as they often serve as null values.
End of explanation
G3 = G.copy()
# identify all the edge types we want to retain
types = ['motorway', 'motorway_link', 'trunk', 'trunk_link',
'primary', 'primary_link', 'secondary', 'secondary_link']
minor_streets = [(u, v, k) for u, v, k, d in G3.edges(keys=True, data=True) if d['highway'] not in types]
# remove minor streets, retain only the largest connected component subgraph, then simplify the graph
G3.remove_edges_from(minor_streets)
G3 = ox.remove_isolated_nodes(G3)
G3_connected = ox.get_largest_component(G3, strongly=False)
G3_simp = ox.simplify_graph(G3_connected, strict=True)
print(len(G3_simp.nodes()))
print(len(G3_simp.edges()))
Explanation: Filter the network to retain only secondary streets and up
Also discard "unclassified" and "road" types.
End of explanation
# save it as graphml to use in igraph and other network analysis tools
ox.save_graphml(G1_simp, filename='bayarea_full.graphml')
ox.save_graphml(G2_simp, filename='bayarea_tertiary.graphml')
ox.save_graphml(G3_simp, filename='bayarea_secondary.graphml')
Explanation: Finally: calculate summary stats, save to disk
End of explanation
pd.Series(ox.basic_stats(G3_simp, area=bayarea_polygon_proj.area))
fig, ax = ox.plot_graph(G3_simp, node_size=0, edge_linewidth=0.2)
# save as shapefile for GIS
ox.save_graph_shapefile(G3_simp, filename='bayarea_secondary')
Explanation: See some stats on the coarsest network
End of explanation |
7,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Structures
Classes
Step3: Magic methods
There are a bunch of magic methods available in Python. You can override some of them
Step4: Iterators & Generators
Step5: Tricks in functions (class methods) | Python Code:
class MyOwnClass(object):
    """This is the documentation for my class, so anyone can read it."""

    CLASS_CONST = 42

    def __init__(self, argument1, default_argument2=1):
        """This is the constructor of MyOwnClass. It saves all arguments
        as 'protected' variables of the class.
        """
        self._argument1 = argument1
        self._argument2 = default_argument2
        super(MyOwnClass, self).__init__()

    def get_info(self):
        return ("Class const = %d\nArgument1 = %s\nArgument2 = %d"
                % (self.CLASS_CONST, self.argument1, self.argument2))

    def print_all(self):
        print(self.get_info())

    @property
    def argument1(self):
        return self._argument1

    @property
    def argument2(self):
        return 2**self._argument2
instance_of_my_own_class_1 = MyOwnClass('string arg #1')
instance_of_my_own_class_1.print_all()
instance_of_my_own_class_2 = MyOwnClass('string arg #1', 2)
instance_of_my_own_class_2.print_all()
instance_of_my_own_class_3 = MyOwnClass('string arg #1', default_argument2=4)
instance_of_my_own_class_3.print_all()
instance_of_my_own_class_4 = MyOwnClass(default_argument2=5, argument1=1)
instance_of_my_own_class_4.print_all()
instance_of_my_own_class_5 = MyOwnClass('string arg #1', 'string')
instance_of_my_own_class_5.print_all()  # raises TypeError: argument2 computes 2 ** 'string'
Explanation: Structures
Classes
End of explanation
dir(instance_of_my_own_class_1)
instance_of_my_own_class_1.__dict__
Explanation: Magic methods
There are a bunch of magic methods available in Python. You can override some of them:
__init__
__unicode__
__str__
__getattr__
__setattr__
__gt__
__ge__
__lt__
__le__
...
End of explanation
[i*10 for i in range(10)]
(i*10 for i in range(10))
list((i*10 for i in range(10)))
{i / 2 for i in range(10)}
set([1,2,3,1,2,3])
{i: (i - 2, i**2) for i in range(10)}
dict((i, i**i) for i in range(10))
map(lambda x: x*10, range(10))
from functools import reduce
reduce(lambda prev_result, item: prev_result * item, range(1, 5))
{i: i**2 for i in range(10) if i % 3 == 0}
[(a, b) for a in range(4) for b in range(0, -a, -1)]
[1] * 10
[[1] * 5] * 5
matrix = [[1] * 5] * 5
matrix
matrix[3][2] = 2
matrix
matrix = [[1] * 5 for i in range(5)]
matrix
matrix[3][2] = 2
matrix
Explanation: Iterators & Generators
End of explanation
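The generator expressions above build lazy iterators inline; the same laziness is available in functions via `yield`. A small sketch:

```python
def squares(n):
    """Yield i**2 for i in range(n), one value at a time."""
    for i in range(n):
        yield i ** 2

gen = squares(5)
next(gen)    # 0 -- values are produced on demand
list(gen)    # the remaining values: [1, 4, 9, 16]
```

Like the generator expression `(i*10 for i in range(10))`, a generator function does no work until it is iterated, and is exhausted after one pass.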
DEFAULT_LOGIN = 'root'
def my_function(arg1, arg2, arg3='str', arg4=None):
    if arg4 is None:
        arg4 = {}
    if 'login' not in arg4:
        arg4['login'] = DEFAULT_LOGIN
    if 'id' not in arg4:
        arg4['id'] = 0
    else:
        arg4['id'] += 1
    print(arg1, arg2, arg3, arg4)
my_function(1, 2)
args = [10, 20]
kwargs = {
'arg3': 'unicode',
'arg4': {'login': 'frol', 'password': '$(KE)@D)$!1'},
}
my_function(*args, **kwargs)
my_function(*args, **kwargs)
kwargs
kwargs = {
'arg1': 11,
'arg2': 22,
'arg3': 'unicode',
'arg4': {'login': 'frol', 'password': '$(KE)@D)$!1'},
}
my_function(**kwargs)
kwargs
def my_function2(*args, **kwargs):
    login = DEFAULT_LOGIN
    if 'arg4' in kwargs and 'login' in kwargs['arg4']:
        login = kwargs['arg4']['login']
    kwargs['arg3'] = login
    my_function(*args, **kwargs)
my_function2(**kwargs)
Explanation: Tricks in functions (class methods)
End of explanation |
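The `arg4=None` idiom in `my_function` above exists because a mutable default value is created once, at function definition, and shared across calls. A short sketch of the pitfall it avoids:

```python
def bad_append(item, acc=[]):      # one shared list for every call!
    acc.append(item)
    return acc

def good_append(item, acc=None):   # fresh list per call
    if acc is None:
        acc = []
    acc.append(item)
    return acc

bad_append(1)    # [1]
bad_append(2)    # [1, 2] -- the default list remembered the first call
good_append(1)   # [1]
good_append(2)   # [2]
```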
7,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extinction (ebv, Av, & Rv)
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Now let's add a light curve so that we can access the relevant parameters.
Step3: Relevant Parameters
Extinction is parameterized by 3 parameters | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
Explanation: Extinction (ebv, Av, & Rv)
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc')
Explanation: Now let's add a light curve so that we can access the relevant parameters.
End of explanation
print(b['ebv'])
print(b['ebv@dataset'])
print(b['ebv@constraint'])
print(b['Av'])
print(b['Rv'])
Explanation: Relevant Parameters
Extinction is parameterized by 3 parameters: ebv (E(B-V)), Av, and Rv. Of these three, two can be provided and the other must be constrained. By default, ebv is the constrained parameter. To change this, see the tutorial on constraints and the b.flip_constraint API docs.
End of explanation |
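The three quantities are tied together by the standard extinction relation $R_V = A_V / E(B-V)$. A minimal sketch of the default constraint (illustrative only — not PHOEBE's internal API):

```python
def ebv_from(av, rv):
    """E(B-V) derived from total extinction A_V and the ratio R_V."""
    return av / rv

ebv_from(0.31, 3.1)   # ~0.1 mag for the typical Milky Way value R_V = 3.1
```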
7,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excercises Electric Machinery Fundamentals
Chapter 9
Problem 9-3
Step1: Description
Suppose that the motor in Problem 9-1 is started and the auxiliary winding fails open while the rotor is accelerating through 400 r/min.
How much induced torque will the motor be able to produce on its main winding alone?
Assuming that the rotational losses are still 51 W, will this motor continue accelerating or will it slow down again? Prove your answer.
Step2: SOLUTION
At a speed of 400 r/min, the slip is
Step3: The impedances $Z_F$ and $Z_B$ are
Step4: $$Z_B = \frac{(R_2/(2-s) + jX_2)(jX_M)}{R_2/(2-s) + jX_2 + jX_M}$$
Step5: The input current is
Step6: The air-gap power is
Step7: The power converted from electrical to mechanical form is
Step8: The output power is
Step9: The induced torque is
$$\tau_\text{ind} = \frac{P_\text{AG}}{\omega_\text{sync}}$$ | Python Code:
%pylab notebook
%precision %.4g
Explanation: Exercises Electric Machinery Fundamentals
Chapter 9
Problem 9-3
End of explanation
V = 120 # [V]
p = 4
R1 = 2.0 # [Ohm]
R2 = 2.8 # [Ohm]
X1 = 2.56 # [Ohm]
X2 = 2.56 # [Ohm]
Xm = 60.5 # [Ohm]
n = 400 # [r/min]
Prot = 51 # [W]
n_sync = 1800 # [r/min]
Explanation: Description
Suppose that the motor in Problem 9-1 is started and the auxiliary winding fails open while the rotor is accelerating through 400 r/min.
How much induced torque will the motor be able to produce on its main winding alone?
Assuming that the rotational losses are still 51 W, will this motor continue accelerating or will it slow down again? Prove your answer.
End of explanation
s = (n_sync - n) / n_sync
s
Explanation: SOLUTION
At a speed of 400 r/min, the slip is:
$$s = \frac{n_\text{sync} - n}{n_\text{sync}}$$
End of explanation
Zf = ((R2/s + X2*1j)*(Xm*1j)) / (R2/s + X2*1j + Xm*1j)
Zf
Explanation: The impedances $Z_F$ and $Z_B$ are:
$$Z_F = \frac{(R_2/s + jX_2)(jX_M)}{R_2/s + jX_2 + jX_M}$$
End of explanation
Zb = ((R2/(2-s) + X2*1j)*(Xm*1j)) / (R2/(2-s) + X2*1j + Xm*1j)
Zb
Explanation: $$Z_B = \frac{(R_2/(2-s) + jX_2)(jX_M)}{R_2/(2-s) + jX_2 + jX_M}$$
End of explanation
I1 = V / (R1 +X1*1j + 0.5*Zf + 0.5*Zb)
I1_angle = arctan(I1.imag/I1.real)
print('I1 = {:.2f} A ∠{:.1f}°'.format(abs(I1), I1_angle/pi*180))
Explanation: The input current is:
$$\vec{I}_1 = \frac{\vec{V}}{R_1 + jX_1 + 0.5Z_F + 0.5Z_B}$$
End of explanation
Pag_f = abs(I1)**2 * 0.5*Zf.real
Pag_f
Pag_b = abs(I1)**2 * 0.5*Zb.real
Pag_b
Pag = Pag_f - Pag_b
print('Pag = {:.1f} W'.format(Pag))
Explanation: The air-gap power is:
End of explanation
Pconv_f = (1-s)*Pag_f
Pconv_f
Pconv_b = (1-s)*Pag_b
Pconv_b
Pconv = Pconv_f - Pconv_b
print('Pconv = {:.1f} W'.format(Pconv))
Explanation: The power converted from electrical to mechanical form is:
End of explanation
Pout = Pconv - Prot
print('Pout = {:.1f} W'.format(Pout))
Explanation: The output power is:
End of explanation
w_sync = n_sync * (2.0*pi/1.0) * (1.0/60.0)
tau_ind = Pag / w_sync
print('''
τ_ind = {:.3f} Nm
================'''.format(tau_ind))
Explanation: The induced torque is
$$\tau_\text{ind} = \frac{P_\text{AG}}{\omega_\text{sync}}$$
End of explanation |
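As an independent cross-check of the numbers above — and of part (b) of the problem — the whole calculation can be redone with plain complex arithmetic, outside the `%pylab` namespace. The motor can keep accelerating only if the converted power exceeds the rotational losses:

```python
import math

V, R1, R2, X1, X2, Xm = 120, 2.0, 2.8, 2.56, 2.56, 60.5
n, n_sync, P_rot = 400, 1800, 51
s = (n_sync - n) / n_sync

def parallel(za, zb):
    return za * zb / (za + zb)

Zf = parallel(R2 / s + 1j * X2, 1j * Xm)        # forward-field impedance
Zb = parallel(R2 / (2 - s) + 1j * X2, 1j * Xm)  # backward-field impedance
I1 = V / (R1 + 1j * X1 + 0.5 * Zf + 0.5 * Zb)

P_ag = abs(I1) ** 2 * 0.5 * (Zf.real - Zb.real)  # net air-gap power, W
P_conv = (1 - s) * P_ag                          # converted power, W
tau_ind = P_ag / (n_sync * 2 * math.pi / 60)     # induced torque, ~0.94 N*m

P_conv < P_rot   # True: converted power < rotational losses, so the motor slows down
```

Since the converted power at 400 r/min falls short of the 51 W of rotational losses, the net output power is negative and the motor decelerates rather than continuing to accelerate.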
7,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: HTTP Methods with curl
GET
Step2: POST
Step3: Delete
Step4: Install JSON-server
http
Step5: HTTP Methods
GET - query
Step6: POST - add new
Step7: NOTE
Step8: PUT - update (whole node)
Step9: PATCH (partial update)
Step10: DELETE
Step11: work with json file
https
Step12: SLICE
Step13: SORT
Step14: Search | Python Code:
!curl -o todos-1.json https://jsonplaceholder.typicode.com/todos/1
!cat todos-1.json
Explanation: https://www.baeldung.com/curl-rest
https://jsonplaceholder.typicode.com/
is a great REST API test site
Save curl output
Save a curl response to a file with the -o option:
$ curl -o jsonplaceholder.html https://jsonplaceholder.typicode.com/
End of explanation
! curl https://jsonplaceholder.typicode.com/photos/1
Explanation: HTTP Methods with curl
GET
End of explanation
! curl "https://jsonplaceholder.typicode.com/users/10"
! curl -X POST -d "id=10&username=Moria.Stanley" "https://jsonplaceholder.typicode.com/users"
! curl "https://jsonplaceholder.typicode.com/users/10"
Explanation: POST
End of explanation
! curl -X DELETE https://jsonplaceholder.typicode.com/users/10
!curl https://jsonplaceholder.typicode.com/users/10
Explanation: Delete
End of explanation
!curl http://localhost:3000/db
Explanation: Install JSON-server
http://nmotw.in/json-server/
https://github.com/typicode/json-server
$ npm install -g json-server
$ json-server --watch db.json
get the entire db
End of explanation
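json-server needs a db.json file to serve. The file used here isn't shown, so the sketch below generates a hypothetical starter db.json whose resource names mirror the routes used in this tutorial (the entries themselves are illustrative):

```python
import json, os, tempfile

db = {
    "posts": [{"id": 1, "title": "json-server", "author": "typicode"}],
    "comments": [{"id": 1, "postId": 1, "body": "some comment"}],
    "profile": {"name": "typicode"},
}

path = os.path.join(tempfile.gettempdir(), "db.json")
with open(path, "w") as f:
    json.dump(db, f, indent=2)
```

Running `json-server --watch db.json` against a file like this exposes /posts, /comments, and /profile automatically.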
!curl http://localhost:3000/posts
Explanation: HTTP Methods
GET - query
End of explanation
! curl -X POST -d "id=21&title=learn JSON&author=gong" http://localhost:3000/posts/
! curl -X POST -d "id=5&title=learn Docker Compose&author=Mei" http://localhost:3000/posts/
! curl -X POST -d "id=2&title=Play with Jenkins&author=oracle" http://localhost:3000/posts/
Explanation: POST - add new
End of explanation
! curl -i -X POST http://localhost:3000/posts/ -d '{"id": 6, "title": "learn Jenkins", "author": "gong"}' --header "Content-Type: application/json"
Explanation: NOTE: json-server will not parse a JSON request body unless the Content-Type: application/json header is set; without it, the body is treated as form data.
End of explanation
!curl -X PUT -d "id=21&title=learn REST&author=wen" http://localhost:3000/posts/21
!curl -i -X PUT -d "title=learn Ansible&author=albert" http://localhost:3000/posts/21
!curl http://localhost:3000/posts/21
Explanation: PUT - update (whole node)
End of explanation
!curl -i -X PATCH -d "author=annabella" http://localhost:3000/posts/21
Explanation: PATCH (partial update)
End of explanation
!curl http://localhost:3000/posts
!curl -X DELETE http://localhost:3000/posts/t1FLpvx
Explanation: DELETE
End of explanation
!cat request1.json
! curl -vX POST http://localhost:3000/posts -d @request1.json --header "Content-Type: application/json"
!curl -X GET http://localhost:3000/posts/4
Explanation: work with json file
https://stackoverflow.com/questions/18611903/how-to-pass-payload-via-json-file-for-curl
End of explanation
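The same JSON POST can be issued from Python's standard library. This sketch only constructs the request — `urlopen(req)` would actually send it, and the endpoint assumes the local json-server is running:

```python
import json
from urllib.request import Request

payload = json.dumps({"id": 7, "title": "learn curl", "author": "gong"}).encode()
req = Request(
    "http://localhost:3000/posts",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urlopen(req) would send it; here we only build the request object.
```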
!curl -X GET http://localhost:3000/posts
!curl "http://localhost:3000/posts?_start=1&_end=2"
Explanation: SLICE
End of explanation
!curl "http://localhost:3000/posts?_sort=id&_order=ASC"
!curl "http://localhost:3000/posts?_sort=id&_order=DESC"
!curl http://localhost:3000/posts/2
Explanation: SORT
End of explanation
!curl http://localhost:3000/posts?q=Docker
!curl http://localhost:3000/posts?q=learn
!curl http://localhost:3000/posts?q=21
!curl http://localhost:3000/comments
!curl -X POST http://localhost:3000/comments -d "id=6&postId=4&body=Apache is powerful"
!curl http://localhost:3000/profile
Explanation: Search
End of explanation |
7,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Libraries and Packages
Step1: Connecting to National Data Service
Step2: Extracting Data of Midwestern states of the United states from 1992 - 2016.
The following query will extract data from the mongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countryCode, structure type, type of wearing surface, and subtructure.
Step3: Filteration of NBI Data
The following routine removes the missing data such as 'N', 'NA' from deck, substructure,and superstructure , and also removing data with structure Type - 19 and type of wearing surface - 6.
Step4: Particularly in the area of determining a deterioration model of bridges, There is an observed sudden increase in condition ratings of bridges over the period of time, This sudden increase in the condition rating is attributed to the reconstruction of the bridges. NBI dataset contains an attribute to record this reconstruction of the bridge. An observation of an increase in condition rating of bridges over time without any recorded information of reconstruction of that bridge in NBI dataset suggests that dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted in the deterioration model of the bridges.
Step5: A utility function to plot the graphs.
Step6: The following script will select all the bridges in the midwestern United States, filter missing and not required data. The script also provides information of how much of the data is being filtered.
Step7: In following figures, shows the cumulative distribution function of the probability of reconstruction over the bridges' lifespan, of bridges in the midwestern United States, as the bridges grow older the probability of reconstruction increases.
Step8: The below figure presents CDF Probability of reconstruction, of bridge in the midwestern United States.
Step9: In the following figures, provides the probability of reconstruction at every age. Note this is not a cumulative probability function. the constant number of reconstruction of the bridges can be explained by various factors.
one particularly interesting reason could be funding provided to reconstruct bridges, this explain why some of the states have perfect linear curve.
Step10: A key observation in this investigation of several state reveals a constant number of bridges are reconstructed every year, this could be an effect of fixed budget allocated for reconstruction by the state. This also highlights the fact that not all bridges that might require reconstruction are reconstructed.
To Understand this phenomena in clearing, the following figure presents probability of reconstruction vs age of all individual states in the midwestern United States. | Python Code:
import pymongo
from pymongo import MongoClient
import time
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib.pyplot import *
import matplotlib.pyplot as plt
import folium
import datetime as dt
import random as rnd
import warnings
import datetime as dt
import csv
%matplotlib inline
Explanation: Libraries and Packages
End of explanation
warnings.filterwarnings(action="ignore")
Client = MongoClient("mongodb://bridges:readonly@nbi-mongo.admin/bridge")
db = Client.bridge
collection = db["bridges"]
Explanation: Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance
End of explanation
def getData(state):
    pipeline = [{"$match": {"$and": [{"year": {"$gt": 1991, "$lt": 2017}}, {"stateCode": state}]}},
                {"$project": {"_id": 0,
                              "structureNumber": 1,
                              "yearBuilt": 1,
                              "yearReconstructed": 1,
                              "deck": 1,            ## rating of deck
                              "year": 1,
                              "owner": 1,
                              "countyCode": 1,
                              "substructure": 1,    ## rating of substructure
                              "superstructure": 1,  ## rating of superstructure
                              "Structure Type": "$structureTypeMain.typeOfDesignConstruction",
                              "Type of Wearing Surface": "$wearingSurface/ProtectiveSystem.typeOfWearingSurface",
                              }}]
    dec = collection.aggregate(pipeline)
    conditionRatings = pd.DataFrame(list(dec))
    ## Creating new column: Age
    conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']
    return conditionRatings
Explanation: Extracting data for the midwestern states of the United States from 1992 - 2016.
The following query extracts data from the MongoDB instance and projects only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countyCode, structure type, type of wearing surface, and substructure.
End of explanation
## filter and convert them into integers
def filterConvert(conditionRatings):
    before = len(conditionRatings)
    print("Total records before filtration: ", len(conditionRatings))
    conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N', 'NA'])]
    conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N', 'NA'])]
    conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N', 'NA'])]
    conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]
    conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]
    after = len(conditionRatings)
    print("Total records after filtration: ", len(conditionRatings))
    print("Difference: ", before - after)
    return conditionRatings
Explanation: Filtration of NBI Data
The following routine removes missing values such as 'N' and 'NA' from the deck, substructure, and superstructure ratings, and also removes records with structure type 19 and type of wearing surface 6.
End of explanation
## make it into a function
def findSurvivalProbablities(conditionRatings):
    i = 1
    j = 2
    probabilities = []
    while j < 121:
        v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])
        k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])
        Age1 = {key: int(value) for key, value in zip(k, v)}

        v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])
        k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])
        Age2 = {key: int(value) for key, value in zip(k_2, v_2)}

        intersectedList = list(Age1.keys() & Age2.keys())
        reconstructed = 0
        for structureNumber in intersectedList:
            if Age1[structureNumber] < Age2[structureNumber]:
                if (Age1[structureNumber] - Age2[structureNumber]) < -1:
                    reconstructed = reconstructed + 1
        try:
            probability = reconstructed / len(intersectedList)
        except ZeroDivisionError:
            probability = 0
        probabilities.append(probability * 100)
        i = i + 1
        j = j + 1
    return probabilities
Explanation: Particularly when determining a deterioration model of bridges, there is an observed sudden increase in the condition ratings of bridges over time; this sudden increase is attributed to the reconstruction of the bridges. The NBI dataset contains an attribute to record the reconstruction of a bridge. An observed increase in the condition rating of a bridge over time without any recorded reconstruction in the NBI dataset suggests that the dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted for in the deterioration model of the bridges.
End of explanation
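The jump test at the heart of findSurvivalProbablities — a deck rating that rises by more than 1 between consecutive ages is treated as a reconstruction — can be isolated into a small helper. A sketch of the same logic (the bridge IDs and ratings below are made up for illustration):

```python
def count_reconstructions(ratings_t1, ratings_t2):
    """Count bridges whose deck rating jumped by more than 1 between surveys.

    ratings_t1 / ratings_t2: dicts mapping structureNumber -> int rating.
    """
    common = ratings_t1.keys() & ratings_t2.keys()
    return sum(1 for key in common if ratings_t2[key] - ratings_t1[key] > 1)

age1 = {'B1': 5, 'B2': 6, 'B3': 4}
age2 = {'B1': 8, 'B2': 6, 'B3': 5}   # B1 jumped 5 -> 8: treated as reconstructed
count_reconstructions(age1, age2)    # 1 (B3's +1 rise is below the threshold)
```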
def plotCDF(cumsum_probabilities):
    fig = plt.figure(figsize=(15, 8))
    ax = plt.axes()
    plt.title('CDF of Reconstruction vs Age')
    plt.xlabel('Age')
    plt.ylabel('CDF of Reconstruction')
    plt.yticks([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
    plt.ylim(0, 100)
    x = [i for i in range(1, 120)]
    y = cumsum_probabilities
    ax.plot(x, y)
    return plt.show()
Explanation: A utility function to plot the graphs.
End of explanation
states = ['31','19','17','18','20','26','27','29','38','46','39','55']
# Mapping state code to state abbreviation
stateNameDict = {'25':'MA',
'04':'AZ',
'08':'CO',
'38':'ND',
'09':'CT',
'19':'IA',
'26':'MI',
'48':'TX',
'35':'NM',
'17':'IL',
'51':'VA',
'23':'ME',
'16':'ID',
'36':'NY',
'56':'WY',
'29':'MO',
'39':'OH',
'28':'MS',
'11':'DC',
'21':'KY',
'18':'IN',
'06':'CA',
'47':'TN',
'12':'FL',
'24':'MD',
'34':'NJ',
'46':'SD',
'13':'GA',
'55':'WI',
'30':'MT',
'54':'WV',
'15':'HI',
'32':'NV',
'37':'NC',
'10':'DE',
'33':'NH',
'44':'RI',
'50':'VT',
'42':'PA',
'05':'AR',
'20':'KS',
'45':'SC',
'22':'LA',
'40':'OK',
'72':'PR',
'41':'OR',
'27':'MN',
'53':'WA',
'01':'AL',
'31':'NE',
'02':'AK',
'49':'UT'
}
def getProbs(states, stateNameDict):
    # Initializing the dataframes for per-age and cumulative reconstruction probabilities
    df_prob_recon = pd.DataFrame({'Age': range(1, 61)})
    df_cumsum_prob_recon = pd.DataFrame({'Age': range(1, 61)})
    for state in states:
        conditionRatings_state = getData(state)
        stateName = stateNameDict[state]
        print("STATE - ", stateName)
        conditionRatings_state = filterConvert(conditionRatings_state)
        print("\n")
        probabilities_state = findSurvivalProbablities(conditionRatings_state)
        cumsum_probabilities_state = np.cumsum(probabilities_state)
        df_prob_recon[stateName] = probabilities_state[:60]
        df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]
    # df_prob_recon.set_index('Age', inplace=True)
    # df_cumsum_prob_recon.set_index('Age', inplace=True)
    return df_prob_recon, df_cumsum_prob_recon
df_prob_recon, df_cumsum_prob_recon = getProbs(states, stateNameDict)
df_prob_recon.to_csv('prsmidwest.csv')
df_cumsum_prob_recon.to_csv('cprsmidwest.csv')
Explanation: The following script selects all the bridges in the midwestern United States and filters out missing and unneeded data. The script also reports how much of the data is being filtered.
End of explanation
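As a hedged illustration only — the actual getData and filterConvert helpers are defined elsewhere and not shown in this chunk, and the field names below are invented — a minimal version of "filter records and report how much data was dropped" might look like this:

```python
# Hypothetical sketch: getData/filterConvert are not shown here, and the
# 'deck'/'superstructure'/'substructure' keys are illustrative assumptions.
def filter_with_report(records, required=('deck', 'superstructure', 'substructure')):
    """Keep records that have every required rating; report how many were dropped."""
    kept = [r for r in records if all(r.get(k) is not None for k in required)]
    dropped = len(records) - len(kept)
    pct = 100.0 * dropped / len(records) if records else 0.0
    print("Filtered %d of %d records (%.1f%%)" % (dropped, len(records), pct))
    return kept

sample = [
    {'deck': 7, 'superstructure': 6, 'substructure': 8},
    {'deck': None, 'superstructure': 5, 'substructure': 4},   # missing rating
    {'deck': 9, 'superstructure': 9, 'substructure': 9},
]
clean = filter_with_report(sample)  # keeps 2 of the 3 records
```

The real pipeline additionally converts rating codes and groups by age before the survival probabilities are computed.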
plt.figure(figsize=(12,8))
plt.title("CDF Probability of Reconstruction vs Age")
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_cumsum_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probability of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,25)
plt.show()
Explanation: The following figures show the cumulative distribution function of the probability of reconstruction over the bridges' lifespan for bridges in the midwestern United States; as the bridges grow older, the probability of reconstruction increases.
End of explanation
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,100)
# Not ticks everywhere
if num in range(10) :
plt.tick_params(labelbottom='off')
if num not in [1,4,7,10]:
plt.tick_params(labelleft='off')
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("CDF Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Explanation: The figure below presents the CDF of the probability of reconstruction for bridges in the midwestern United States.
End of explanation
plt.figure(figsize=(12,8))
plt.title("Probability of Reconstruction vs Age")
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probability of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,25)
plt.show()
Explanation: The following figures provide the probability of reconstruction at every age; note that this is not a cumulative probability function. The roughly constant number of bridge reconstructions each year can be explained by various factors:
one particularly interesting reason could be the funding allocated for bridge reconstruction, which would explain why some states show an almost perfectly linear curve.
End of explanation
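A toy numerical illustration of the funding argument above (the stock and per-year figures are invented assumptions, not NBI data): a fixed yearly reconstruction count out of a fixed bridge stock gives a constant annual probability, so the cumulative curve comes out as a straight line.

```python
# Assumed toy numbers, not real data.
stock = 10000        # bridges in the state (assumption)
per_year = 150       # reconstructions funded each year (assumption)
annual = [100.0 * per_year / stock] * 60   # constant 1.5% for every year of age
cumulative = []
total = 0.0
for p in annual:
    total += p
    cumulative.append(total)
# Equal increments every year => a perfectly linear cumulative curve,
# reaching 90% by age 60 with these numbers.
```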
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_prob_recon['Age'], df_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,25)
# Not ticks everywhere
if num in range(10) :
plt.tick_params(labelbottom='off')
if num not in [1,4,7,10]:
plt.tick_params(labelleft='off')
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Explanation: A key observation in this investigation of several states is that a roughly constant number of bridges is reconstructed every year; this could be an effect of a fixed reconstruction budget allocated by each state. It also highlights the fact that not all bridges that might require reconstruction are actually reconstructed.
To understand this phenomenon more clearly, the following figure presents the probability of reconstruction vs. age for each individual state in the midwestern United States.
End of explanation |
7,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prepare Alta Vista for Modeling
The Alta Vista basin is prepared so that it can be modeled in SIATA in real time; in this case both basins are prepared, upstream as well as downstream.
Step1: Alta Vista Upstream
It will be simulated with almost everything set to the nonlinear type, except the base flow.
Step2: Geomorphological Parameters
Step3: Physical parameters
Step4: EVP Figure
Step5: Setting up the basin's vertical parameters
Step6: Setting up the maximum storages in the basin
Step7: Setting up the lateral velocities | Python Code:
%matplotlib inline
from wmf import wmf
import numpy as np
import pylab as pl
import datetime as dt
import os
ruta = '/media/nicolas/discoGrande/01_SIATA/'
Explanation: Prepare Alta Vista for Modeling
The Alta Vista basin is prepared so that it can be modeled in SIATA in real time; in this case both basins are prepared, upstream as well as downstream.
End of explanation
cu = wmf.SimuBasin(0,0,0,0, rute='/media/nicolas/discoGrande/01_SIATA/nc_cuencas/AltaVista_abajo.nc')
Explanation: Alta Vista Upstream
It will be simulated with almost everything set to the nonlinear type, except the base flow.
End of explanation
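A conceptual sketch of what "nonlinear type except the base flow" means (plain Python, not wmf's internals — the exact semantics of wmf's speed types are an assumption here): each store discharges as Q = k * S**alpha, where alpha = 1 is the linear case kept for base flow and alpha != 1 is the nonlinear type used for the other stores.

```python
# Conceptual illustration only; k and the storage value are example numbers.
def outflow(storage, k, alpha):
    """Outflow of a conceptual reservoir: Q = k * S**alpha."""
    return k * storage ** alpha

linear_q = outflow(10.0, 0.5, 1.0)     # linear reservoir (base flow case)
nonlinear_q = outflow(10.0, 0.5, 1.5)  # nonlinear: discharges faster when full
```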
# Geomorphology
cu.GetGeo_Cell_Basics()
cu.set_Geomorphology(stream_width=cu.CellLong, )
cu.GetGeo_Parameters(rutaParamASC='/media/nicolas/discoGrande/01_SIATA/ParamCuencas/Param_AltaVista_Arriba.txt')
cu.GetGeo_Ppal_Hipsometric()
cu.PlotPpalStream(ruta=ruta + '/ParamCuencas/PerfilCauce_AltaVista_Abajo.png')
cu.Plot_Hipsometric(ruta=ruta + '/ParamCuencas/CurvaHipso_AltaVista_Aabajo.png')
cu.Plot_Tc(ruta=ruta + '/ParamCuencas/Tc_AltaVista_Abajo.png')
Explanation: Geomorphological Parameters
End of explanation
Evp=4.658*np.exp(-0.0002*cu.CellHeight)
cu.Plot_basin(Evp / 96.0)
dia5Min = 8*12 # Eight hours of effective daylight
print dia5Min
Explanation: Physical parameters
End of explanation
# Read the root-depth map
Z,p = wmf.read_map_raster('/media/nicolas/discoGrande/01_SIATA/raster/Prof_raiz_cm.tif')
Z = cu.Transform_Map2Basin(Z,p)
Z[Z == -9999] = 40.0
Z[Z==0]=40
# Soil depth from geomorphology (slope classes)
Zg = np.zeros(cu.ncells)
Zg[cu.CellSlope<0.25]=0.6
Zg[(cu.CellSlope>=0.25)&(cu.CellSlope<0.30)]=1.0
Zg[(cu.CellSlope>=0.30)&(cu.CellSlope<0.50)]=0.3
Zg[cu.CellSlope>=0.5] = 0.2
# Plot the soil depth
cu.Plot_basin(Zg,ruta=ruta + 'ParamCuencas/Prof_suelo_AltaVistaAbajo.png')
Tetas = {}
for i in ['Teta_pmp','Teta_cp','Teta_sat']:
te,p = wmf.read_map_raster('/media/nicolas/discoGrande/01_SIATA/raster/'+i+'.tif')
te = cu.Transform_Map2Basin(te,p)
te[te == -9999] = te[te>0].mean()
te[te == 0] = te[te>0].mean()
Tetas.update({i:te})
# Hu and Hg values computed in meters
Hu = Zg * (Tetas['Teta_cp']-Tetas['Teta_pmp'])*10
Hg = Zg * (Tetas['Teta_sat']-Tetas['Teta_cp'])*10
Hu[Z == 2] = 2
cu.Plot_basin(Hg,lines_spaces=0.08,ruta=ruta + 'ParamCuencas/Hg_AltaVista_Abajo.png')
# Saturated hydraulic conductivity
Ks, p = wmf.read_map_raster('/media/nicolas/discoGrande/01_SIATA/raster/'+'Ks_mm_h.tif')
Ks = cu.Transform_Map2Basin(Ks,p)
# Water-table (phreatic level) conductivity
Ks[Ks == -9999] = Ks[Ks>0].mean()
Ks[Ks == 0] = Ks[Ks>0].mean()
Kp = np.copy(Ks) / 100.0
# Kubota and Sivapalan coefficient
ksh=((Ks/3600000.0)*cu.CellSlope*(30.0**2.0))/(3*(Hg*0.9/1000.0)**2)
cu.Plot_basin(ksh)
# Roughness
man,p = wmf.read_map_raster('/media/nicolas/discoGrande/01_SIATA/raster/n_man.asc')
man = cu.Transform_Map2Basin(man,p)
# Flow through small rills
CoefLad = (0.5/man)*(cu.CellSlope**(1.0/2.0))
ExpLad = (2.0/3.0)*0.64
cu.Plot_basin(CoefLad)
print ExpLad
# Channel coefficient
area = cu.CellAcum * (12.7**2)/1e6 # Cell size squared
CoefOCG,ExpOCG = wmf.OCG_param(pend = cu.CellSlope, area = area)
cu.Plot_basin(CoefOCG)
print ExpOCG
Explanation: EVP Figure: evaporation estimated every 5 minutes over the 8 hours of effective daylight for the EVP
End of explanation
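The EVP map above is divided by 96 before plotting; that divisor comes from discretising 8 hours of effective daylight into 5-minute steps (8 * 12 = 96, the `dia5Min` value). A small sketch of the conversion — the 4.0 mm/day figure is just an example value, not basin data:

```python
# 5-minute discretisation of 8 daylight hours, matching dia5Min = 8*12 above.
intervals_per_hour = 60 // 5                               # 12 five-minute steps per hour
effective_hours = 8
intervals_per_day = effective_hours * intervals_per_hour   # 96 -> the Evp/96 divisor
evp_daily_mm = 4.0                                         # example daily EVP (assumption)
evp_per_interval = evp_daily_mm / intervals_per_day        # mm per 5-minute step
```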
# Evaporation, infiltration, percolation and losses
cu.set_PhysicVariables('v_coef',Evp,0)
# Infiltration; the city is made impervious, converted to [mm/s]
KsForMap = np.copy(Ks)/3600.0
KsForMap.min()
KsForMap[Z==2] = 0.0001
cu.set_PhysicVariables('v_coef',KsForMap,1)
# Percolation, converted to [mm/s]
cu.set_PhysicVariables('v_coef',Kp/3600.0,2)
# System losses are assumed to be zero
cu.set_PhysicVariables('v_coef',0,3)
Explanation: Setting up the basin's vertical parameters
End of explanation
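The vertical-parameter block above hands the conductivity maps to the model in mm/s, which is why the mm/h rasters are divided by 3600 (seconds per hour). A minimal check of that conversion, using an example value rather than real basin data:

```python
# Example value only: 36 mm/h divided by 3600 s/h gives 0.01 mm/s.
ks_mm_per_hour = 36.0
ks_mm_per_second = ks_mm_per_hour / 3600.0
```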
cu.set_PhysicVariables('capilar', Hu, 0)
cu.set_PhysicVariables('gravit', Hg, 1)
Explanation: Setting up the maximum storages in the basin
End of explanation
# Set all velocities to the linear type
cu.set_Speed_type([2,2,2])
# Horizontal velocity coefficients
cu.set_PhysicVariables('h_coef',CoefLad, 0)
cu.set_PhysicVariables('h_exp',ExpLad, 0) # Linearizes the hillslope again
# The subsurface flow is made nonlinear
cu.set_PhysicVariables('h_coef',ksh, 1)
cu.set_PhysicVariables('h_exp',2.0, 1)
# Groundwater flow coefficients
cu.set_PhysicVariables('h_coef', Kp/3600, 2)
# Channel velocity coefficients
cu.set_PhysicVariables('h_coef', CoefOCG, 3)
cu.set_PhysicVariables('h_exp', ExpOCG, 3)
# Save the linear version
cu.Save_SimuBasin('/media/nicolas/discoGrande/01_SIATA/nc_cuencas/AltaVista_abajo.nc',
ruta_dem = '/media/nicolas/discoGrande/01_SIATA/raster/dem_altavista.tif',
ruta_dir = '/media/nicolas/discoGrande/01_SIATA/raster/dir_altavista.tif')
Explanation: Setting up the lateral velocities
End of explanation |