Using this in combination with plot_poly_fidi_mesh:
d1 = 2 * pmt.find_circumradius(n=3, side=40)
pmt.plot_poly_fidi_mesh(diameter=d1, n=3, x_spacing=1, y_spacing=1)
# It can be seen on the y-axis that the side has a length of 40, as desired.
d2 = 2 * pmt.find_circumradius(n=5, apothem=20)
pmt.plot_poly_fidi_mesh(diameter=d2, n=5, x_spacing=1, y_spacing=1)
# The circumc...
notebooks/example.ipynb
fangohr/polygon-finite-difference-mesh-tools
bsd-2-clause
The line above tells Python to make NumPy's functions available and nickname the library object np. To call a function from it, we use the syntax np.function(). To find out what functions NumPy has for you to use, see the documentation at https://docs.scipy.org/doc/numpy-dev/user/quickstart.html. Le...
print("Just an Array: \n",np.array([0,1,2,34,5])) print("An Array of Zeros: \n",np.zeros((2,3))) print("An Array of Ones: \n",np.ones((2,))) print("A Clever Way to Build an Array: \n",np.pi*np.ones((4,3))) print("A Bunch of Random Junk: \n",np.empty((2,2))) print("A Range of Values: \n",np.arange(0,100, 3)) print...
Python Workshop/NumPy.ipynb
CalPolyPat/Python-Workshop
mit
Now that we have seen how NumPy handles operations, let's practice a bit. Exercises: Create at least 2 arrays with each different method. In the next cell are two arrays of measurements; you happen to know that their sum over their product squared is a quantity of interest. Calculate this quantity for every pair of ele...
p = np.array([1, 2, 3, 5, 1, 2, 3, 1, 2, 2, 6, 3, 1, 1, 5, 1, 1, 3, 2, 1])
l = 100 * np.array([-0.06878584, -0.13865453, -1.61421586, 1.02892411, 0.31529163, -0.06186747, -0.15273951, 1.67466332, -1.88215846, 0.67427142, 1.2747444, -0.0391945, -0.81211282, -0.38412292, -1.01116052, 0.25611357, 0.3126883, 0.8011353, 0.64691918, ...
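A possible solution sketch for the exercise above. Interpreting "their sum over their product squared" elementwise as (p + l) / (p * l)**2 is an assumption, since the exercise text is truncated; small stand-in arrays are used here in place of the measurement arrays:

```python
import numpy as np

# Hypothetical small stand-ins for the measurement arrays p and l above.
p = np.array([1., 2., 3.])
l = np.array([4., 5., 6.])

# "Sum over product squared", computed for every pair of elements at once --
# no loop needed, since NumPy applies the arithmetic elementwise.
quantity = (p + l) / (p * l) ** 2
print(quantity)
```

The same one-liner works unchanged on the full 20-element arrays in the next cell.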
Some Other Interesting NumPy Functions
np.dot(array1, array2)  # Computes the dot product of array1 and array2.
np.cross(array1, array2)  # Computes the cross product of array1 and array2.
np.eye(size)  # Creates a size-by-size identity matrix/array.
NumPy Array Slicing
NumPy slicing is only slightly different than list sli...
array = np.array([1,2,3,4]) print(array[0]) print(array[1:4]) print(array[0:4:2])
Masking Masking is a special type of slicing which uses boolean values to decide whether to show or hide the values in another array. A mask must be a boolean array of the same size as the original array. To apply a mask to an array, you use the following syntax:
mask = np.array([True, False])
array = np.array([25, 3...
mask = np.array([[1, 1, 1, 1, 0, 0, 1], [1, 0, 0, 0, 0, 1, 0]], dtype=bool)
# ^^^This converts the ones and zeros into trues and falses because I'm lazy^^^
array = np.array([[5, 7, 3, 4, 5, 7, 1], np.random.randn(7)])
print(array)
print(mask)
print(array[mask])
Let's say that we have measured some quantity with a computer and generated a really long numpy array, like, really long. It just so happens that we are interested in how many of these numbers are greater than zero. We could try to make a mask with the methods used above, but the people who made masks gave us a tool to...
data = np.random.normal(0, 3, 10000)  # Wow, I made 10,000 measurements, wouldn't mastoridis be proud.
data[data > 0].size  # This counts the elements of data that are greater than 0
This is a powerful tool that you should keep in the back of your head that can often greatly simplify problems. Universal Functions Universal functions are NumPy functions that help in applying functions to every element in an array. sin(), cos(), exp(), are all universal functions and when applied to an array, they ta...
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(0, 2 * np.pi, 1000)
y = np.sin(x)
plt.subplot(211)
plt.plot(x)
plt.subplot(212)
plt.plot(x, y)
A list of all the universal functions is included at the end of this notebook. Exercises: Create a couple of arrays of various types and sizes and play with them until you feel comfortable moving on. You know that a certain quantity can be calculated using the following formula: f(x)=x^e^sin(x^2)-sin(x*ln(x)) Given th...
x = np.random.rand(1000)*np.linspace(0,10,1000)
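One possible reading of the exercise formula, taking f(x) = x^(e^sin(x²)) − sin(x·ln(x)); the grouping is an assumption since the original text is ambiguous, and x is shifted slightly away from 0 here so that ln(x) stays defined:

```python
import numpy as np

# Same shape as the exercise array, but offset by 0.01 so that
# np.log(x) stays finite (an assumption made for this sketch).
x = np.random.rand(1000) * np.linspace(0.01, 10, 1000) + 0.01

# Universal functions apply elementwise, so the whole formula is one expression.
f = x ** np.exp(np.sin(x ** 2)) - np.sin(x * np.log(x))
print(f.shape)
```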
Background information on filtering Here we give some background information on filtering in general, and how it is done in MNE-Python in particular. Recommended reading for practical applications of digital filter design can be found in Parks & Burrus (1987) [1] and Ifeachor & Jervis (2002) [2], and for filtering in a...
import numpy as np
from numpy.fft import fft, fftfreq
from scipy import signal
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne

sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.)  # limits for plotting
0.21/_downloads/80342e62fc31882c2b53e38ec1ed14a6/plot_background_filtering.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Take for example an ideal low-pass filter, which would give a magnitude response of 1 in the pass-band (up to frequency $f_p$) and a magnitude response of 0 in the stop-band (down to frequency $f_s$) such that $f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity):
nyq = sfreq / 2.  # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
This filter hypothetically achieves zero ripple in the frequency domain, perfect attenuation, and perfect steepness. However, due to the discontinuity in the frequency response, the filter would require infinite ringing in the time domain (i.e., infinite order) to be realized. Another way to think of this is that a rec...
n = int(round(0.1 * sfreq))
n -= n % 2 - 1  # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq  # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
This is not so good! Making the filter 10 times longer (1 s) gets us a slightly better stop-band suppression, but still has a lot of ringing in the time domain. Note the x-axis is an order of magnitude longer here, and the filter has a correspondingly much longer group delay (again equal to half the filter length, or 0...
n = int(round(1. * sfreq))
n -= n % 2 - 1  # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
Let's make the stop-band tighter still with a longer filter (10 s), with a resulting larger x-axis:
n = int(round(10. * sfreq))
n -= n % 2 - 1  # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
Now we have very sharp frequency suppression, but our filter rings for the entire 10 seconds. So this naïve method is probably not a good way to build our low-pass filter. Fortunately, there are multiple established methods to design FIR filters based on desired response characteristics. These include: 1. The Remez_ al...
trans_bandwidth = 10  # 10 Hz transition band
f_s = f_p + trans_bandwidth  # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
Accepting a shallower roll-off of the filter in the frequency domain makes our time-domain response potentially much better. We end up with a more gradual slope through the transition region, but a much cleaner time domain signal. Here again for the 1 s filter:
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)', flim=flim, compensate=True)
Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable stop-band attenuation:
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)', flim=flim, compensate=True)
But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s), our effective stop frequency gets pushed out past 60 Hz:
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)', flim=flim, compensate=True)
If we want a filter that is only 0.1 seconds long, we should probably use something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz):
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 25 Hz transition (0.2 s)', flim=flim, compensate=True)
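The pattern in the last few cells — a 10 Hz transition needs roughly 0.5 s, a 25 Hz transition roughly 0.2 s — follows the "about 5 cycles of the transition bandwidth" rule of thumb used in the text. A minimal sketch of that heuristic (the function name and exact cycle count are illustrative assumptions, not an MNE API):

```python
def min_filter_duration(trans_bandwidth_hz, n_cycles=5):
    """Approximate shortest usable FIR duration (s) for a transition band (Hz)."""
    return n_cycles / trans_bandwidth_hz

print(min_filter_duration(10))   # 0.5 s, as used above
print(min_filter_duration(25))   # 0.2 s
```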
So far, we have only discussed non-causal filtering, which means that each sample at each time point $t$ is filtered using samples that come after ($t + \Delta t$) and before ($t - \Delta t$) the current time point $t$. In this sense, each sample is influenced by samples that come both before and after it. This is usef...
h_min = signal.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
Applying FIR filters Now let's look at some practical effects of these filters by applying them to some data. Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part) plus noise (random and line). Note that the original clean signal contains frequency content in both the pass band and transition bands ...
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig =...
Filter it with a shallow cutoff, linear-phase FIR (which allows us to compensate for the constant filter delay):
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)  # this is the linear->z...
Filter it with a different design method fir_design="firwin2", and also compensate for the constant filter delay. This method does not produce quite as sharp a transition compared to fir_design="firwin", despite being twice as long:
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band  # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None,...
Let's also filter with the MNE-Python 0.13 default, which is a long-duration, steep cutoff FIR that gets applied twice:
transition_band = 0.5  # Hz
f_s = f_p + transition_band
filter_dur = 10.  # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, ...
Let's also filter it with the MNE-C default, which is a long-duration steep-slope FIR filter designed using frequency-domain techniques:
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5  # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=...
And now an example of a minimum-phase filter:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, phase='minimum', fir_design='firwin', verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band  # sec
n = int(sfreq * filte...
Both the MNE-Python 0.13 and MNE-C filters have excellent frequency attenuation, but it comes at a cost of potential ringing (long-lasting ripples) in the time domain. Ringing can occur with steep filters, especially in signals with frequency content around the transition band. Our Morlet wavelet signal has power in ou...
axes = plt.subplots(1, 2)[1]

def plot_signal(x, offset):
    """Plot a signal."""
    t = np.arange(len(x)) / sfreq
    axes[0].plot(t, x + offset)
    axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
    X = fft(x)
    freqs = fftfreq(len(x), 1. / sfreq)
    mask = freqs >= 0
    X = X[mask]
    freqs = freqs[mask]
    ...
IIR filters MNE-Python also offers IIR filtering functionality that is based on the methods from :mod:`scipy.signal`. Specifically, we use the general-purpose functions :func:`scipy.signal.iirfilter` and :func:`scipy.signal.iirdesign`, which provide unified interfaces to IIR filter design. Designing IIR filters Let's continu...
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim, compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
The falloff of this filter is not very steep. <div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS) by using :func:`scipy.signal.sosfilt` and, under the hood, :func:`scipy.signal.zpk2sos` when passing the ``output='sos'`` keyword argument to ...
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim, compensate=T...
There are other types of IIR filters that we can use. For a complete list, check out the documentation for :func:scipy.signal.iirdesign. Let's try a Chebychev (type I) filter, which trades off ripple in the pass-band to get better attenuation in the stop-band:
iir_params.update(ftype='cheby1',
                  rp=1.,  # dB of acceptable pass-band ripple
                  )
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True)
plot_filter(filt, sfre...
If we can live with even more ripple, we can get it slightly steeper, but the impulse response begins to ring substantially longer (note the different x-axis scale):
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Chebychev-1 order=8, ripple=6 dB', flim=flim, compensa...
Applying IIR filters Now let's look at how our shallow and steep Butterworth IIR filters perform on our Morlet signal from before:
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpas...
Some pitfalls of filtering Multiple recent papers have noted potential risks of drawing errant inferences due to misapplication of filters. Low-pass problems Filters in general, especially those that are non-causal (zero-phase), can make activity appear to occur earlier or later than it truly did. As mentioned in VanRu...
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.i...
Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_, they found that applying a 1 Hz high-pass decreased the probability of finding a significant difference in the N100 response, likely because the P300 response was smeared (and inverted) in time by the high-pass filter such that it tended to cancel...
def baseline_plot(x):
    all_axes = plt.subplots(3, 2)[1]
    for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
        for ci, ax in enumerate(axes):
            if ci == 0:
                iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass', output='...
In response, Maess et al. (2016) [11]_ note that these simulations do not address cases of pre-stimulus activity that is shared across conditions, as applying baseline correction will effectively copy the topology outside the baseline period. We can see this if we give our signal x with some consistent pre-stimulus act...
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
Both groups seem to acknowledge that the choices of filtering cutoffs, and perhaps even the application of baseline correction, depend on the characteristics of the data being investigated, especially when it comes to: The frequency content of the underlying evoked activity relative to the filtering parameters. ...
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
    data=None,  # data is only used for sanity checking, not strictly needed
    sfreq=1000.,  # sfreq of your data in Hz
    l_freq=None,
    h_freq=40.,  # assuming a lowpass of 40 Hz
    method='fir',
    fir_window='h...
Load Data
import cifar10
seminar_3/.ipynb_checkpoints/classwork_2-checkpoint.ipynb
akseshina/dl_course
gpl-3.0
Set the path for storing the data-set on your computer. The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.
cifar10.maybe_download_and_extract()
Load the class-names.
class_names = cifar10.load_class_names()
class_names
Load the training-set. This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
images_train, cls_train, labels_train = cifar10.load_training_data()
Load the test-set.
images_test, cls_test, labels_test = cifar10.load_test_data()
The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
print("Size of:") print("- Training-set:\t\t{}".format(len(images_train))) print("- Test-set:\t\t{}".format(len(images_test)))
The data dimensions are used in several places in the source-code below. They have already been defined in the cifar10 module, so we just need to import them.
from cifar10 import img_size, num_channels, num_classes
The images are 32 x 32 pixels, but we will crop the images to 24 x 24 pixels.
img_size_cropped = 24
Function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None, smooth=True):
    assert len(images) == len(cls_true) == 9
    # Create figure with sub-plots.
    fig, axes = plt.subplots(3, 3)
    # Adjust vertical spacing if we need to print ensemble and best-net.
    if cls_pred is None:
        hspace = 0.3
    else:
        h...
Plot a few images to see if data is correct
# Get the first images from the test-set.
images = images_test[0:9]
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=False)
The pixelated images above are what the neural network will get as input. The images might be a bit easier for the human eye to recognize if we smoothen the pixels.
plot_images(images=images, cls_true=cls_true, smooth=True)

x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
Data augmentation for images The following helper-functions create the part of the TensorFlow computational graph that pre-processes the input images. Nothing is actually calculated at this point, the function merely adds nodes to the computational graph for TensorFlow. The pre-processing is different for training and ...
def pre_process_image(image, training):
    # This function takes a single image as input,
    # and a boolean whether to build the training or testing graph.
    if training:
        # For training, add the following to the TensorFlow graph.
        # Randomly crop the input image.
        image = tf.random_crop...
The function above is called for each image in the input batch using the following function.
def pre_process(images, training):
    # Use TensorFlow to loop over all the input images and call
    # the function above which takes a single image as input.
    images = tf.map_fn(lambda image: pre_process_image(image, training), images)
    return images
In order to plot the distorted images, we create the pre-processing graph for TensorFlow, so we may execute it later.
distorted_images = pre_process(images=x, training=True)
Creating Main Processing
def main_network(images, training):
    # Wrap the input images as a Pretty Tensor object.
    x_pretty = pt.wrap(images)
    # Pretty Tensor uses special numbers to distinguish between
    # the training and testing phases.
    if training:
        phase = pt.Phase.train
    else:
        phase = pt.Phase.infer
    ...
Creating Neural Network Note that the neural network is enclosed in the variable-scope named 'network'. This is because we are actually creating two neural networks in the TensorFlow graph. By assigning a variable-scope like this, we can re-use the variables for the two neural networks, so the variables that are optimi...
def create_network(training):
    # Wrap the neural network in the scope named 'network'.
    # Create new variables during training, and re-use during testing.
    with tf.variable_scope('network', reuse=not training):
        # Just rename the input placeholder variable for convenience.
        images = x
        # ...
Create Neural Network for Training Phase Note that trainable=False which means that TensorFlow will not try to optimize this variable.
global_step = tf.Variable(initial_value=0, name='global_step', trainable=False)
Create the neural network to be used for training. The create_network() function returns both y_pred and loss, but we only need the loss-function during training.
_, loss = create_network(training=True)
Create an optimizer which will minimize the loss-function. Also pass the global_step variable to the optimizer so it will be increased by one after each iteration.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step=global_step)
Create Neural Network for Test Phase / Inference Now create the neural network for the test-phase. Once again the create_network() function returns the predicted class-labels y_pred for the input images, as well as the loss-function to be used during optimization. During testing we only need y_pred.
y_pred, _ = create_network(training=False)
We then calculate the predicted class number as an integer. The output of the network y_pred is an array with 10 elements. The class number is the index of the largest element in the array.
y_pred_cls = tf.argmax(y_pred, dimension=1)
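The argmax step above can be illustrated with plain NumPy (using a hypothetical 4-class output rather than the network's 10 elements):

```python
import numpy as np

# Hypothetical network output: one score per class.
y_pred_example = np.array([0.01, 0.02, 0.9, 0.07])

# The predicted class number is the index of the largest element.
cls = int(np.argmax(y_pred_example))
print(cls)  # 2
```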
Saver In order to save the variables of the neural network, so they can be reloaded quickly without having to train the network again, we now create a so-called Saver-object which is used for storing and retrieving all the variables of the TensorFlow graph. Nothing is actually saved at this point, which will be done fu...
saver = tf.train.Saver()
Getting the Weights Further below, we want to plot the weights of the neural network. When the network is constructed using Pretty Tensor, all the variables of the layers are created indirectly by Pretty Tensor. We therefore have to retrieve the variables from TensorFlow. We used the names layer_conv1 and layer_conv2 f...
def get_weights_variable(layer_name):
    # Retrieve an existing variable named 'weights' in the scope
    # with the given layer_name.
    # This is awkward because the TensorFlow function was
    # really intended for another purpose.
    with tf.variable_scope("network/" + layer_name, reuse=True):
        variable ...
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(weights_conv1).shape)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ...
Getting the Layer Outputs Similarly we also need to retrieve the outputs of the convolutional layers. The function for doing this is slightly different than the function above for getting the weights. Here we instead retrieve the last tensor that is output by the convolutional layer.
def get_layer_output(layer_name):
    # The name of the last operation of the convolutional layer.
    # This assumes you are using Relu as the activation-function.
    tensor_name = "network/" + layer_name + "/Relu:0"
    # Get the tensor with this name.
    tensor = tf.get_default_graph().get_tensor_by_name(tensor_n...
Get the output of the convolutional layers so we can plot them later.
output_conv1 = get_layer_output(layer_name='layer_conv1')
output_conv2 = get_layer_output(layer_name='layer_conv2')
Restore or initialize variables Training this neural network may take a long time, especially if you do not have a GPU. We therefore save checkpoints during training so we can continue training at another time (e.g. during the night), and also for performing analysis later without having to train the neural network eve...
save_dir = 'checkpoints/'
This is the base-filename for the checkpoints, TensorFlow will append the iteration number, etc.
save_path = os.path.join(save_dir, 'cifar10_cnn')
First try to restore the latest checkpoint. This may fail and raise an exception e.g. if such a checkpoint does not exist, or if you have changed the TensorFlow graph.
try: print("Trying to restore last checkpoint ...") # Use TensorFlow to find the latest checkpoint - if any. last_chk_path = tf.train.latest_checkpoint(checkpoint_dir=save_dir) # Try and load the data in the checkpoint. saver.restore(session, save_path=last_chk_path) # If we get to this point...
Helper-function to get a random training-batch There are 50,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run ...
train_batch_size = 64
Function for selecting a random batch of images from the training-set.
def random_batch(): # Number of images in the training-set. num_images = len(images_train) # Create a random index. idx = np.random.choice(num_images, size=train_batch_size, replace=False) # Use the random index to select random images and labe...
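The truncated helper above can be sketched in full as follows; `images_train` and `labels_train` are passed in explicitly here (rather than read from the enclosing scope) so the sketch is self-contained:

```python
import numpy as np

train_batch_size = 64

def random_batch(images_train, labels_train):
    # Number of images in the training-set.
    num_images = len(images_train)

    # Sample a random set of indices without replacement.
    idx = np.random.choice(num_images, size=train_batch_size, replace=False)

    # Use the random indices to select images and their labels.
    x_batch = images_train[idx, :, :, :]
    y_batch = labels_train[idx, :]

    return x_batch, y_batch
```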
Optimization The progress is printed every 100 iterations. A checkpoint is saved every 1000 iterations and also after the last iteration.
def optimize(num_iterations): # Start-time used for printing time-usage below. start_time = time.time() for i in range(num_iterations): # Get a batch of training examples. # x_batch now holds a batch of images and # y_true_batch are the true labels for those images. x_batch,...
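Stripped of the TensorFlow-specific calls, the loop's print/checkpoint cadence looks roughly like this; `train_step` and `save_checkpoint` are stand-ins for the `session.run` and `saver.save` calls:

```python
import time

def optimize(num_iterations, train_step, save_checkpoint):
    """Skeleton of the training loop, keeping only the progress-print
    and checkpoint cadence of the TensorFlow version."""
    start_time = time.time()
    for i in range(num_iterations):
        loss = train_step(i)  # one optimizer step on a random batch
        # Print progress every 100 iterations.
        if i % 100 == 0:
            print("Iteration {:>6}, batch loss: {:.4f}".format(i, loss))
        # Save a checkpoint every 1000 iterations and after the last one.
        if i % 1000 == 0 or i == num_iterations - 1:
            save_checkpoint(i)
    print("Time usage: {:.1f} s".format(time.time() - start_time))
```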
Plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct): # This function is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # correct is a boolean array whether the predicted class # is equal to the true class for each image in the...
Plot confusion matrix
def plot_confusion_matrix(cls_pred): # This is called from print_test_accuracy() below. # cls_pred is an array of the predicted class-number for # all images in the test-set. # Get the confusion matrix using sklearn. cm = confusion_matrix(y_true=cls_test, # True class for test-set. ...
Calculating classifications This function calculates the predicted classes of images and also returns a boolean array indicating whether the classification of each image is correct. The calculation is done in batches because it might use too much RAM otherwise. If your computer crashes then you can try lowering the batch-size.
# Split the data-set in batches of this size to limit RAM usage. batch_size = 256 def predict_cls(images, labels, cls_true): # Number of images. num_images = len(images) # Allocate an array for the predicted classes which # will be calculated in batches and filled into this array. cls_pred = np.ze...
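The batching pattern can be sketched without TensorFlow; `predict_batch` below is a stand-in for the `session.run` call that maps a slice of images to predicted class-numbers:

```python
import numpy as np

# Split the data-set into batches of this size to limit RAM usage.
batch_size = 256

def predict_cls(images, predict_batch):
    """Batched-prediction sketch; predict_batch maps a slice of
    images to an array of predicted class-numbers."""
    num_images = len(images)
    cls_pred = np.zeros(shape=num_images, dtype=np.int64)
    i = 0
    while i < num_images:
        # The last batch may be smaller than batch_size.
        j = min(i + batch_size, num_images)
        cls_pred[i:j] = predict_batch(images[i:j])
        i = j
    return cls_pred
```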
Calculate the predicted class for the test-set.
def predict_cls_test(): return predict_cls(images = images_test, labels = labels_test, cls_true = cls_test)
Helper-functions for the classification accuracy This function calculates the classification accuracy given a boolean array indicating whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4. The function also returns the number of correct classifications.
def classification_accuracy(correct): # When averaging a boolean array, False means 0 and True means 1. # So we are calculating: number of True / len(correct) which is # the same as the classification accuracy. # Return the classification accuracy # and the number of correct classifications. ...
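A minimal completion of the truncated helper, assuming `correct` is (convertible to) a NumPy boolean array:

```python
import numpy as np

def classification_accuracy(correct):
    # When averaging a boolean array, False means 0 and True means 1,
    # so the mean is exactly: number of True / len(correct),
    # i.e. the classification accuracy.
    correct = np.asarray(correct)

    # Return the classification accuracy
    # and the number of correct classifications.
    return correct.mean(), correct.sum()
```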
Helper-function for showing the performance Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't ha...
def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False): # For all the images in the test-set, # calculate the predicted classes and whether they are correct. correct, cls_pred = predict_cls_test() # Classification accuracy and the number of correct ...
Helper-function for plotting convolutional weights
def plot_conv_weights(weights, input_channel=0): # Assume weights are TensorFlow ops for 4-dim variables # e.g. weights_conv1 or weights_conv2. # Retrieve the values of the weight-variables from TensorFlow. # A feed-dict is not necessary because nothing is calculated. w = session.run(weights) ...
Helper-function for plotting the output of convolutional layers
def plot_layer_output(layer_output, image): # Assume layer_output is a 4-dim tensor # e.g. output_conv1 or output_conv2. # Create a feed-dict which holds the single input image. # Note that TensorFlow needs a list of images, # so we just create a list with this one image. feed_dict = {x: [image...
Examples of distorted input images In order to artificially inflate the number of images available for training, the neural network uses pre-processing with random distortions of the input images. This should hopefully make the neural network more flexible at recognizing and classifying images. This is a helper-functio...
def plot_distorted_image(image, cls_true): # Repeat the input image 9 times. image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0) # Create a feed-dict for TensorFlow. feed_dict = {x: image_duplicates} # Calculate only the pre-processing of the TensorFlow graph # which distorts t...
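The duplication step above relies on the `np.newaxis`/`np.repeat` idiom; a minimal standalone illustration with a CIFAR-10-sized image:

```python
import numpy as np

# A single CIFAR-10-sized image: 32x32 pixels, 3 colour channels.
image = np.zeros((32, 32, 3))

# Add a leading batch axis, then repeat the image 9 times along it,
# giving a batch of identical images to push through the random
# distortions in the graph.
image_duplicates = np.repeat(image[np.newaxis, :, :, :], 9, axis=0)
print(image_duplicates.shape)  # (9, 32, 32, 3)
```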
Helper-function for getting an image and its class-number from the test-set.
def get_test_image(i): return images_test[i, :, :, :], cls_test[i]
Get an image and its true class from the test-set.
img, cls = get_test_image(16)
Plot 9 random distortions of the image. If you re-run this code you will get slightly different results.
plot_distorted_image(img, cls)
Perform optimization
# Perform 1000 optimization iterations.
optimize(num_iterations=1000)
Results Examples of mis-classifications are plotted below. Some of these are difficult to recognize even for humans and others are reasonable mistakes e.g. between a large car and a truck, or between a cat and a dog, while other mistakes seem a bit strange.
print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)
Convolutional Weights The following shows some of the weights (or filters) for the first convolutional layer. There are 3 input channels so there are 3 of these sets, which you may plot by changing the input_channel. Note that positive weights are red and negative weights are blue.
plot_conv_weights(weights=weights_conv1, input_channel=0)
Plot some of the weights (or filters) for the second convolutional layer. These are apparently closer to zero than the weights for the first convolutional layer, as the lower standard deviation shows.
plot_conv_weights(weights=weights_conv2, input_channel=1)
Output of convolutional layers Helper-function for plotting an image.
def plot_image(image): # Create figure with sub-plots. fig, axes = plt.subplots(1, 2) # References to the sub-plots. ax0 = axes.flat[0] ax1 = axes.flat[1] # Show raw and smoothened images in sub-plots. ax0.imshow(image, interpolation='nearest') ax1.imshow(image, interpolation='spline16...
Plot an image from the test-set. The raw pixelated image is used as input to the neural network.
img, cls = get_test_image(16) plot_image(img)
Use the raw image as input to the neural network and plot the output of the first convolutional layer.
plot_layer_output(output_conv1, image=img)
Using the same image as input to the neural network, now plot the output of the second convolutional layer.
plot_layer_output(output_conv2, image=img)
Predicted class-labels Get the predicted class-label and class-number for this image.
label_pred, cls_pred = session.run([y_pred, y_pred_cls], feed_dict={x: [img]})
Print the predicted class-label.
# Set the rounding options for numpy. np.set_printoptions(precision=3, suppress=True) # Print the predicted label. print(label_pred[0])
The predicted class-label is an array of length 10, with each element indicating how confident the neural network is that the image is the given class. In this case the element with index 3 has a value of 0.493, while the element with index 5 has a value of 0.490. This means the neural network believes the image either...
print(class_names[3]) print(class_names[5])
Exercise 8.2: Potential barrier Simulate a Gaussian wave packet moving along the x axis as it passes through a potential barrier.
import matplotlib.animation as animation from IPython.display import HTML fig = pyplot.figure() ax = pyplot.axes(xlim=(0, lx), ylim=(0, 2), xlabel='x', ylabel='$|\Psi|^2$') points, = ax.plot([], [], marker='', linestyle='-', lw=3) x0=6 for ix in range(0,nx): psi0_r[ix] = math.exp(-0.5*((ix*dx-x0)**2)/sigma2)*ma...
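Constructing and normalizing the initial Gaussian wave packet can be sketched as below; the grid and packet parameters are illustrative, not necessarily the notebook's:

```python
import numpy as np

# Grid and packet parameters (illustrative values).
lx, nx = 20.0, 800
dx = lx / nx
x = np.arange(nx) * dx
x0, sigma2, k0 = 6.0, 0.5, 5.0

# Gaussian envelope centred at x0 times a plane-wave factor e^{i k0 x},
# so the packet moves in the +x direction.
psi = np.exp(-0.5 * (x - x0) ** 2 / sigma2) * np.exp(1j * k0 * x)

# Normalize so that the total probability sum(|psi|^2) * dx equals 1.
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
```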
08_01_Schroedinger.ipynb
afeiguin/comp-phys
mit
Exercise 8.2: Single-slit diffraction Young’s single-slit experiment consists of a wave passing through a small slit, which causes the emerging wavelets to interfere with each other, forming a diffraction pattern. In quantum mechanics, where particles are represented by probabilities, and probabilities by wave packets, it ...
%matplotlib inline import numpy as np from matplotlib import pyplot import math lx = 20 #Box length in x ly = 20 #Box length in y dx = 0.25 #Incremental step size in x (Increased this to decrease the time of the sim) dy ...
This notebook explores merged Craigslist listings/census data and fits some initial models. Remote connection parameters: used if the data is stored remotely.
# TODO: add putty connection too. #read SSH connection parameters with open('ssh_settings.json') as settings_file: settings = json.load(settings_file) hostname = settings['hostname'] username = settings['username'] password = settings['password'] local_key_dir = settings['local_key_dir'] census_dir = 'synth...
src/rental_listings_modeling.ipynb
lrayle/rental-listings-census
bsd-3-clause
Data Preparation
def read_listings_file(fname): """Read csv file via SFTP and return as dataframe.""" with sftp.open(os.path.join(listings_dir,fname)) as f: df = pd.read_csv(f, delimiter=',', dtype={'fips_block':str,'state':str,'mpo_id':str}, date_parser=['date']) # TODO: parse dates. return df def log_var...
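A guess at how the truncated log_var helper might look, assuming it adds the natural log of a column as `ln_<col>` (the real helper may differ):

```python
import numpy as np
import pandas as pd

def log_var(df, col):
    """Guess at the truncated helper: add the natural log of `col`
    as a new column 'ln_<col>'; non-positive values become NaN."""
    df['ln_' + col] = np.log(df[col].where(df[col] > 0))
    return df
```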
Create variables

Variable codes. Race codes (from PUMS):
1. White alone
2. Black or African American alone
3. American Indian alone
4. Alaska Native alone
5. American Indian and Alaska Native tribes specified; or American Indian or Alaska Native, not specified and no other races
6. Asian alone
7. Native Hawaiian ...
# create useful variables data = create_census_vars(data) # define some feature to include in the model. features_to_examine = ['rent','ln_rent', 'bedrooms','bathrooms','sqft','pct_white', 'pct_black','pct_asian','pct_mover','pct_owner','income_med','age_of_head_med','avg_hh_size','cars_per_hh'] data[features_to_exa...
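The create_census_vars step presumably derives share variables such as pct_white from raw counts; a sketch with assumed count-column names (the real column names may differ):

```python
import pandas as pd

def create_census_vars(df):
    """Sketch of deriving share variables from raw census counts.
    The count-column names here are assumptions, not the notebook's."""
    df = df.copy()
    df['pct_white'] = df['white_count'] / df['pop_total']
    df['pct_owner'] = df['owner_hh'] / df['total_hh']
    return df
```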
Filter outliers
# I've already identified these ranges as good at excluding outliers rent_range=(100,10000) sqft_range=(10,5000) data = filter_outliers(data, rent_range=rent_range, sqft_range=sqft_range) # Use this to explore outliers yourself. g=sns.distplot(data['rent'], kde=False) g.set_xlim(0,10000) g=sns.distplot(data['sqft'],...
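filter_outliers itself is not shown; a plausible sketch using inclusive boolean masks on the two ranges:

```python
import pandas as pd

def filter_outliers(df, rent_range, sqft_range):
    """Plausible sketch of the helper: keep rows whose rent and sqft
    both fall inside the given (inclusive) ranges."""
    mask = df['rent'].between(*rent_range) & df['sqft'].between(*sqft_range)
    return df[mask]
```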
Examine missing data
# examine NA's print('Total rows:',len(data)) print('Rows with any NA:',len(data[pd.isnull(data).any(axis=1)])) print('Rows with bathroom NA:',len(data[pd.isnull(data.bathrooms)])) print('% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)])/len(data))
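A compact alternative to the individual counts printed above is the per-column share of missing values:

```python
import pandas as pd

# Tiny example frame with some missing values.
df = pd.DataFrame({'rent': [1.0, 2.0, None],
                   'bathrooms': [None, None, 2.0]})

# Share of missing values per column, largest first.
na_share = df.isnull().mean().sort_values(ascending=False)
print(na_share)
```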
Uh oh, 74% of rows are missing the bathrooms feature; we might have to omit it. Only 0.02% of rows have any other missing values, so those should be ok.
#for d in range(1,31): # print(d,'% rows missing bathroom col:',len(data[pd.isnull(data.bathrooms)&((data.date.dt.month==12)&(data.date.dt.day==d))])/len(data[(data.date.dt.month==12)&(data.date.dt.day==d)]))
The bathrooms column was added on Dec 21; from that date on, listings without a bathrooms value are thrown out. So if we need the bathrooms feature, we can restrict to listings from Dec 22 onward.
# Uncomment to only use data after Dec 21. #data=data[(data.date.dt.month>=12)&(data.date.dt.day>=22)] #data.shape # Uncomment to drop NA's #data = data.dropna() #print('Dropped {} rows with NAs'.format(n0-len(data)))
Look at distributions Since rent is roughly log-normally distributed, use ln_rent instead
p=sns.distplot(data.rent, kde=False) p.set_title('rent') p=sns.distplot(data.ln_rent, kde=False) p.set_title('ln rent') plot_rows = math.ceil(len(features_to_examine)/2) f, axes = plt.subplots(plot_rows,2, figsize=(8,15)) sns.despine(left=True) for i,col in enumerate(features_to_examine): row_position = math.fl...