Set from dictionary: works on dicts too, but returns a set of the keys only, not the values.
# works on dicts too
s3_repeat_values = set({'k1':'v1', 'k2':'v1', 'k3':'v2'})
s3_repeat_values
type(s3_repeat_values)

# repeating keys
s3_repeat_keys = set({'k1':'v1', 'k1':'v2'})
s3_repeat_keys
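To get a set of the values rather than the keys, call `.values()` explicitly; a quick sketch (the dict literal here is illustrative):

```python
d = {'k1': 'v1', 'k2': 'v1', 'k3': 'v2'}

print(set(d))           # iterating a dict yields its keys
print(set(d.keys()))    # same result, stated explicitly
print(set(d.values()))  # duplicate values collapse in the set
```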
Source: python_crash_course/python_cheat_sheet_1.ipynb (AtmaMani/pyChakras, MIT)
Note: when you create a dict with duplicate keys, Python keeps only the last occurrence of each key-value pair; earlier values are simply overwritten by the later ones.
d80 = {'k1':'v1', 'k2':'v2', 'k1':'v45'}  # k1 is repeated
d80
Loops: the `for` loop
list1 = [1,2,3,4,5,6,7]

for element in list1:
    print(element)

for element in list1:
    print(element, " squared ", element*element)

for count in range(11,20):
    print(str(count))
Comprehensions: comprehensions are a concise way to build new sequences by looping over existing ones. List comprehension syntax: `[operation for item in sequence if condition]`.
list2_comp = [e*e for e in list1]
list2_comp

list2_even = [e*e for e in list1 if e%2==0]
list2_even
Dictionary comprehension: same as a list comprehension, but it returns a dictionary instead of a list; you use `{}` instead of `[]`. Syntax: `{key:value for key, value in dictionary.items() if condition}`.
d3 = {'day':'Thursday', 'day_of_week':5, 'start_of_week':'Sunday',
      'day_of_year':123, 'dod':{'month_of_year':'Feb', 'year':2017},
      'list1':[8,7,66]}

# get those kvp whose value is a list
{k:v for k,v in d3.items() if type(v)==list}
Reverse keys and values? This works only when the values are hashable (immutable) types, so filter the mutable ones out first.
d4 = {k:v for k,v in d3.items() if type(v) not in [list, dict]}
d4

# reverse keys and values
d4_reverse = {v:k for k,v in d4.items()}
d4_reverse
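To see why the filtering is needed, here is a minimal sketch (the dict is made up for illustration) showing that a mutable value such as a list cannot become a key:

```python
d = {'a': 1, 'b': [2, 3]}  # 'b' has a mutable (unhashable) value

try:
    reversed_d = {v: k for k, v in d.items()}
except TypeError as e:
    print(e)  # lists are unhashable, so they cannot be dict keys

# after filtering out mutable values, the reversal works
safe = {k: v for k, v in d.items() if not isinstance(v, (list, dict))}
print({v: k for k, v in safe.items()})
```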
What is Numba? Numba is a just-in-time, type-specializing function compiler for accelerating numerically-focused Python. That's a lot of terms, so let's break them down:

function compiler: Numba compiles Python functions, not entire applications, and not parts of functions. Numba does not replace your Python interpreter; it is just another Python module that can turn a function into a (usually) faster function.

type-specializing: Numba speeds up your function by generating a specialized implementation for the specific data types you are using. Python functions are designed to operate on generic data types, which makes them very flexible, but also very slow. In practice, you will only call a function with a small number of argument types, so Numba generates a fast implementation for each set of types.

just-in-time: Numba translates functions when they are first called. This ensures the compiler knows what argument types you will be using. It also allows Numba to be used interactively in a Jupyter notebook just as easily as in a traditional application.

numerically-focused: Currently, Numba is focused on numerical data types, like int, float, and complex. There is very limited string processing support, and many string use cases are not going to work well on the GPU. To get the best results with Numba, you will likely be using NumPy arrays.

Problem 1 - A First Numba Function

1a) To start our exploration of Numba's features, let's write a Python function to add two numbers. We'll creatively name it add:
def add(x, y):
    return x + y  # add code here
Source: Sessions/Session08/Day4/numba_intro_solutions.ipynb (LSSTC-DSFP/LSSTC-DSFP-Sessions, MIT)
Now, test the function, first with two scalar integers:
add(1, 2) # add code here
1b) With Numpy, we can use our function to add not just scalars, but vectors as well. Using your favorite array creation routine, create two integer arrays with ten elements each, called a and b, and use your add function to add them.
import numpy as np

a = np.arange(0,10)  # add code here
b = np.arange(1,11)  # add code here
add(a, b)            # add code here
More commonly, you will use jit as a decorator, by adding @jit to the line above your function definition, but the above version shows you that at heart, @jit is just a Python function that takes other functions as its argument! 1c) By default, a Numba function saves the original Python version of the function in the attribute py_func. Check that the original Python version gives you the same answer as the Numba version.
print(add(a,b))                # add code here
print(numba_add(a,b))          # add code here
print(numba_add.py_func(a,b))  # add code here
What's going on here? %timeit runs our function many times, and then reports the average time it takes to run. This is generally a better approach than timing a single function execution, because it accounts for random events that may cause any given run to perform poorly. 1d) Compare the time it takes to run your function with scalar vs array arguments, then your function vs Python's addition (the standard '+' operator).
%timeit add(1,2)   # add code here
%timeit add(a, b)  # add code here
%timeit a + b      # add code here
So, scalars are faster than arrays (makes sense), and python's addition function is better than ours (seems reasonable). Now, let's see how fast our pre-compiled Numba addition function is.
%timeit numba_add(a,b) # add code here
Hold on - our new pre-compiled function is running even slower than the original Python version! What's going on here?

Problem 2 - A Better Numba Function

(This problem borrowed from seibert's 2018 GTC Numba tutorial.) As we saw in the first example, Numba isn't going to speed up everything. Generally, Numba will help you most in circumstances where Python's line-by-line interpretation and dynamic typing are slowing it down. We can use a slightly more complicated function to demonstrate this. The following is a function to calculate the hypotenuse of two numbers that has been carefully designed to compensate for the computer's finite-precision representation of numbers (check out https://en.wikipedia.org/wiki/Hypot for more info). 2a) Use the @jit decorator to generate a Numba version of this function.
import math
from numba import jit

@jit  # add code here
def hypotenuse(x, y):
    x = abs(x)
    y = abs(y)
    t = min(x, y)
    x = max(x, y)
    t = t / x
    return x * math.sqrt(1+t*t)
2b) Use %timeit to determine whether the Numba version of the hypotenuse function is faster than the original Python implementation.
%timeit hypotenuse(3,4)          # add code here
%timeit hypotenuse.py_func(3,4)  # add code here
2c) Numba functions can call other functions, provided they are also Numba functions. Below is a function that loops through two numpy arrays and puts their sum into an output array. Modify the following function to calculate the hypotenuse instead.
from numba import njit

@njit  # this is an alias for @jit(nopython=True)
def ex_func(x, y, out):
    for i in range(x.shape[0]):
        out[i] = hypotenuse(x[i], y[i])  # change this line

in1 = np.arange(10, dtype=np.float64)
in2 = 2 * in1 + 1
out = np.empty_like(in1)

print('in1:', in1)
print('in2:', in2)

ex_func(in1, in2, out)
print('out:', out)

# This test will fail until you fix the ex1 function
np.testing.assert_almost_equal(out, np.hypot(in1, in2))
Problem 3 - Fun with Fractals

Now that we've got the basics of the Numba jit decorator down, let's have a little fun. A classic example problem in parallel programming is the calculation of a fractal, because a large fraction of the work can be done in parallel. Below is some code that calculates whether a number is a member of the Julia set, and then computes the set on a discrete domain to calculate a fractal. 3a) Modify the code below to use Numba and test how much faster it is than the original Python implementation.
@njit  # add code here
def julia(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia set
    given a fixed number of iterations.
    """
    i = 0
    c = complex(-0.8, 0.156)
    a = complex(x, y)
    for i in range(max_iters):
        a = a*a + c
        if (a.real*a.real + a.imag*a.imag) > 1000:
            return 0
    return 255

@njit  # add code here
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
    height = image.shape[0]
    width = image.shape[1]

    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    for x in range(width):
        real = min_x + x * pixel_size_x
        for y in range(height):
            imag = min_y + y * pixel_size_y
            color = julia(real, imag, iters)
            image[y, x] = color

    return image

image = np.zeros((500, 750), dtype=np.uint8)
%timeit create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
3b) There is more than one type of fractal in the world, however! Below is a function that determines membership in the Mandelbrot set. Modify the function to take advantage of Numba, then modify the code above to produce a new pretty picture.
import matplotlib.pyplot as plt

@njit  # add code here
def mandel(x, y, max_iters):
    """
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Mandelbrot set
    given a fixed number of iterations.
    """
    i = 0
    c = complex(x, y)
    z = 0.0j
    for i in range(max_iters):
        z = z*z + c
        if (z.real*z.real + z.imag*z.imag) >= 4:
            return i
    return 255

# add code here
@njit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
    height = image.shape[0]
    width = image.shape[1]

    pixel_size_x = (max_x - min_x) / width
    pixel_size_y = (max_y - min_y) / height
    for x in range(width):
        real = min_x + x * pixel_size_x
        for y in range(height):
            imag = min_y + y * pixel_size_y
            color = mandel(real, imag, iters)
            image[y, x] = color

    return image

image = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)

plt.imshow(image)
plt.viridis()
plt.show()
4a) Numba has inferred the types for this function based on how we've used it. Try out your numba_add function with two floating point numbers, then re-inspect the types of the Numba function. Are they the same?
# Add code here
numba_add(3., 4.)
numba_add.inspect_types()
4b) Try your ufunc out with a new target, 'parallel'. How does the speed compare? What if the array size is much larger?
from numba import vectorize

big_array = np.arange(0,1000000)         # add code here
%timeit add_ufunc(big_array, big_array)  # add code here

@vectorize(['int64(int64, int64)'], target='parallel')
def add_ufunc(x, y):
    return x + y

%timeit add_ufunc(big_array, big_array)
5a) Run the direct summation code and determine how long it takes with 10, 100, 1000 particles. Is there a relationship?
%timeit direct_sum(particles)
Because this is an $\mathcal{O} \left(n^2 \right)$ function, each 10x increase in the number of particles costs 100x more in computational time. Given how expensive it is, let's see if we can improve things a bit with Numba. (This also happens to be a highly parallelizable problem, but we'll get to that tomorrow.) How do we use Numba on this problem? There is a subtle issue here: Numba doesn't support jitting native Python classes. There is a jitclass structure in Numba, but it's still in early development. But we'd like to have attributes for readable programming. The solution is to build NumPy custom dtypes.
particle_dtype = np.dtype({'names':   ['x','y','z','m','phi'],
                           'formats': [np.double, np.double, np.double,
                                       np.double, np.double]})

myarray = np.ones(3, dtype=particle_dtype)
myarray
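Outside of a jitted function, the fields of a NumPy structured array are accessed with bracket indexing rather than attribute access; a quick illustration (plain NumPy, no Numba required):

```python
import numpy as np

particle_dtype = np.dtype({'names': ['x', 'y', 'z', 'm', 'phi'],
                           'formats': [np.double] * 5})

arr = np.ones(3, dtype=particle_dtype)

# whole-column access uses bracket indexing
print(arr['m'])  # all three masses

# a single record exposes its fields the same way,
# and writing to it modifies the underlying array
arr[0]['phi'] = 2.5
print(arr[0]['phi'])
```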
5b) Write a jit function create_n_random_particles that takes the arguments n (number of particles), m (mass of every particle) and a domain within which to generate the random coordinates (as in the class above). It should create an array with n elements and dtype=particle_dtype and then return that array. For each particle, the mass should be initialized to the value of m and the potential phi initialized to zero. Hint: You will probably want to loop over the number of particles within the function to assign attributes.
@njit
def create_n_random_particles(n, m, domain=1):
    '''
    Creates `n` particles with mass `m` with random coordinates
    between 0 and `domain`
    '''
    parts = np.zeros((n), dtype=particle_dtype)

    # attribute access only in @jitted function
    for p in parts:
        p.x = np.random.random() * domain
        p.y = np.random.random() * domain
        p.z = np.random.random() * domain
        p.m = m
        p.phi = 0

    return parts
We don't have a distance method anymore, so we need to write a function to take care of that. 5c) Write a jit function distance to calculate the distance between two particles of dtype particle_dtype.
@njit
def distance(part1, part2):
    '''calculate the distance between two particles'''
    return ((part1.x - part2.x)**2 +
            (part1.y - part2.y)**2 +
            (part1.z - part2.z)**2)**.5

distance(particles[0], particles[1])

%%timeit
distance(particles[0], particles[1])
5d) Modify the direct_sum function above to instead work on a NumPy array of particles. Loop over each element in the array and calculate its total potential. Time the result and compare it to your previous version of this function.
@njit
def direct_sum(particles):
    for i, target in enumerate(particles):
        for j, source in enumerate(particles):
            if i != j:
                r = distance(target, source)
                target.phi += source.m / r
    return particles

%timeit direct_sum(particles)
Load data First we will load a sample ECoG dataset which we'll use for generating a 2D snapshot.
mat = loadmat(path_data)
ch_names = mat['ch_names'].tolist()
elec = mat['elec']  # electrode coordinates in meters

# Now we make a montage stating that the sEEG contacts are in head
# coordinate system (although they are in MRI). This is compensated
# by the fact that below we do not specify a trans file so the Head<->MRI
# transform is the identity.
montage = mne.channels.make_dig_montage(ch_pos=dict(zip(ch_names, elec)),
                                        coord_frame='head')
info = mne.create_info(ch_names, 1000., 'ecog').set_montage(montage)
print('Created %s channel positions' % len(ch_names))
Source: 0.23/_downloads/00ac060e49528fd74fda09b97366af98/3d_to_2d.ipynb (mne-tools/mne-tools.github.io, BSD-3-Clause)
Reading simulation data
def load_data(filenames, preselection=None):
    # not setting treename, it's detected automatically
    data = root_numpy.root2array(filenames, selection=preselection)
    return pandas.DataFrame(data)

sim_data = load_data(folder + 'PhaseSpaceSimulation.root', preselection=None)
Source: ManchesterTutorial.ipynb (yandexdataschool/manchester-cp-asymmetry-tutorial, CC0-1.0)
Looking at data, taking first rows:
sim_data.head()
Plotting some feature
# hist data will contain all information from histogram
hist_data = hist(sim_data.H1_PX, bins=40, range=[-100000, 100000])
Adding interesting features: for each particle we compute its P, PT and energy (under the assumption that it is a kaon).
def add_momenta_and_energy(dataframe, prefix, compute_energy=False):
    """Adding P, PT and E of particle with given prefix, say, 'H1_' """
    pt_squared = dataframe[prefix + 'PX'] ** 2. + dataframe[prefix + 'PY'] ** 2.
    dataframe[prefix + 'PT'] = numpy.sqrt(pt_squared)
    p_squared = pt_squared + dataframe[prefix + 'PZ'] ** 2.
    dataframe[prefix + 'P'] = numpy.sqrt(p_squared)
    if compute_energy:
        E_squared = p_squared + dataframe[prefix + 'M'] ** 2.
        dataframe[prefix + 'E'] = numpy.sqrt(E_squared)

for prefix in ['H1_', 'H2_', 'H3_']:
    # setting Kaon mass to each of particles:
    sim_data[prefix + 'M'] = 493
    add_momenta_and_energy(sim_data, prefix, compute_energy=True)
Adding features of $B$: we can compute the 4-momentum of the $B$, given the 4-momenta of the produced particles.
def add_B_features(data):
    for axis in ['PX', 'PY', 'PZ', 'E']:
        data['B_' + axis] = data['H1_' + axis] + data['H2_' + axis] + data['H3_' + axis]
    add_momenta_and_energy(data, prefix='B_', compute_energy=False)
    data['B_M'] = data.eval('(B_E ** 2 - B_PX ** 2 - B_PY ** 2 - B_PZ ** 2) ** 0.5')

add_B_features(sim_data)
Looking at the result (with the added features):
sim_data.head()

_ = hist(sim_data['B_M'], range=[5260, 5280], bins=100)
Dalitz plot: computing the Dalitz variables and checking that there are no resonances in the simulation.
def add_dalitz_variables(data):
    """function to add Dalitz variables; names of products are H1, H2, H3"""
    for i, j in [(1, 2), (1, 3), (2, 3)]:
        momentum = pandas.DataFrame()
        for axis in ['E', 'PX', 'PY', 'PZ']:
            momentum[axis] = data['H{}_{}'.format(i, axis)] + data['H{}_{}'.format(j, axis)]
        data['M_{}{}'.format(i, j)] = momentum.eval('(E ** 2 - PX ** 2 - PY ** 2 - PZ ** 2) ** 0.5')

add_dalitz_variables(sim_data)

scatter(sim_data.M_12, sim_data.M_13, alpha=0.05)
Working with real data: preselection
preselection = """
H1_IPChi2 > 1 && H2_IPChi2 > 1 && H3_IPChi2 > 1
&& H1_IPChi2 + H2_IPChi2 + H3_IPChi2 > 500
&& B_VertexChi2 < 12
&& H1_ProbPi < 0.5 && H2_ProbPi < 0.5 && H3_ProbPi < 0.5
&& H1_ProbK > 0.9 && H2_ProbK > 0.9 && H3_ProbK > 0.9
&& !H1_isMuon && !H2_isMuon && !H3_isMuon
"""
preselection = preselection.replace('\n', '')

real_data = load_data([folder + 'B2HHH_MagnetDown.root', folder + 'B2HHH_MagnetUp.root'],
                      preselection=preselection)
adding features
for prefix in ['H1_', 'H2_', 'H3_']:
    # setting Kaon mass:
    real_data[prefix + 'M'] = 493
    add_momenta_and_energy(real_data, prefix, compute_energy=True)

add_B_features(real_data)

_ = hist(real_data.B_M, bins=50)
additional preselection which uses added features
momentum_preselection = """
(H1_PT > 100) && (H2_PT > 100) && (H3_PT > 100)
&& (H1_PT + H2_PT + H3_PT > 4500)
&& H1_P > 1500 && H2_P > 1500 && H3_P > 1500
&& B_M > 5050 && B_M < 6300
"""
momentum_preselection = momentum_preselection.replace('\n', '').replace('&&', '&')

real_data = real_data.query(momentum_preselection)
_ = hist(real_data.B_M, bins=50)
Adding Dalitz plot for real data
add_dalitz_variables(real_data)

# check that 2nd and 3rd particle have same sign
numpy.mean(real_data.H2_Charge * real_data.H3_Charge)

scatter(real_data['M_12'], real_data['M_13'], alpha=0.1)
xlabel('M_12'), ylabel('M_13')
show()

# lazy way for plots
real_data.plot('M_12', 'M_13', kind='scatter', alpha=0.1)
Ordering Dalitz variables: let's reorder the particles so that the first Dalitz variable is always the greater one.
scatter(numpy.maximum(real_data['M_12'], real_data['M_13']),
        numpy.minimum(real_data['M_12'], real_data['M_13']),
        alpha=0.1)
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
Binned Dalitz plot: let's plot the same in bins, as physicists like.
hist2d(numpy.maximum(real_data['M_12'], real_data['M_13']),
       numpy.minimum(real_data['M_12'], real_data['M_13']),
       bins=8)
colorbar()
xlabel('max(M12, M13)'), ylabel('min(M12, M13)')
show()
Looking at local CP-asymmetry: adding one more column.
real_data['B_Charge'] = real_data.H1_Charge + real_data.H2_Charge + real_data.H3_Charge

hist(real_data.B_M[real_data.B_Charge == +1].values, bins=30, range=[5050, 5500], alpha=0.5)
hist(real_data.B_M[real_data.B_Charge == -1].values, bins=30, range=[5050, 5500], alpha=0.5)
pass
Leaving only signal region in mass
signal_charge = real_data.query('B_M > 5200 & B_M < 5320').B_Charge
counting number of positively and negatively charged B particles
n_plus = numpy.sum(signal_charge == +1)
n_minus = numpy.sum(signal_charge == -1)

print n_plus, n_minus, n_plus - n_minus
print 'asymmetry = ', (n_plus - n_minus) / float(n_plus + n_minus)
Estimating significance of deviation (approximately): we will treat the total $N_{+} + N_{-}$ as fixed, and under the null hypothesis each observation is a positive or negative particle with $p=0.5$. So, under these assumptions, $N_{+}$ is distributed as a binomial random variable.
# computing properties of n_plus according to H_0 hypothesis.
n_mean = len(signal_charge) * 0.5
n_std = numpy.sqrt(len(signal_charge) * 0.25)

print 'significance = ', (n_plus - n_mean) / n_std
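The normal approximation above can be checked on a toy example; a minimal sketch with made-up counts (not the notebook's data):

```python
import math

n_plus, n_minus = 5200, 4800  # hypothetical counts
n = n_plus + n_minus

# under H0: N+ ~ Binomial(n, 0.5), so mean = n/2 and std = sqrt(n/4)
n_mean = n * 0.5
n_std = math.sqrt(n * 0.25)

z = (n_plus - n_mean) / n_std
print('asymmetry    =', (n_plus - n_minus) / n)
print('significance =', z)  # 4 sigma for these made-up counts
```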
Subtracting background: using RooFit to fit a mixture of exponential (background) and Gaussian (signal) distributions. Based on the fit, we estimate the number of signal events in the mass region.
# Lots of ROOT imports for fitting and plotting
from rootpy import asrootpy, log
from rootpy.plotting import Hist, Canvas, set_style, get_style
from ROOT import (RooFit, RooRealVar, RooDataHist, RooArgList, RooArgSet,
                  RooAddPdf, TLatex, RooGaussian, RooExponential)

def compute_n_signal_by_fitting(data_for_fit):
    """
    Computing the amount of signal within region [x_min, x_max]
    returns: canvas with fit, n_signal in mass region
    """
    # fit limits
    hmin, hmax = data_for_fit.min(), data_for_fit.max()

    hist = Hist(100, hmin, hmax, drawstyle='EP')
    root_numpy.fill_hist(hist, data_for_fit)

    # Declare observable x
    x = RooRealVar("x", "x", hmin, hmax)
    dh = RooDataHist("dh", "dh", RooArgList(x), RooFit.Import(hist))

    frame = x.frame(RooFit.Title("D^{0} mass"))
    # this will show histogram data points on canvas
    dh.plotOn(frame, RooFit.MarkerColor(2), RooFit.MarkerSize(0.9), RooFit.MarkerStyle(21))

    # Signal PDF
    mean = RooRealVar("mean", "mean", 5300, 0, 6000)
    width = RooRealVar("width", "width", 10, 0, 100)
    gauss = RooGaussian("gauss", "gauss", x, mean, width)

    # Background PDF
    cc = RooRealVar("cc", "cc", -0.01, -100, 100)
    exp = RooExponential("exp", "exp", x, cc)

    # Combined model
    d0_rate = RooRealVar("D0_rate", "rate of D0", 0.9, 0, 1)
    model = RooAddPdf("model", "exp+gauss", RooArgList(gauss, exp), RooArgList(d0_rate))

    # Fitting model
    result = asrootpy(model.fitTo(dh, RooFit.Save(True)))
    mass = result.final_params['mean'].value
    hwhm = result.final_params['width'].value

    # this will show fit overlay on canvas
    model.plotOn(frame, RooFit.Components("exp"), RooFit.LineStyle(3), RooFit.LineColor(3))
    model.plotOn(frame, RooFit.LineColor(4))

    # Draw all frames on a canvas
    canvas = Canvas()
    frame.GetXaxis().SetTitle("m_{K#pi#pi} [GeV]")
    frame.GetXaxis().SetTitleOffset(1.2)
    frame.Draw()

    # Draw the mass and error label
    label = TLatex(0.6, 0.8, "m = {0:.2f} #pm {1:.2f} GeV".format(mass, hwhm))
    label.SetNDC()
    label.Draw()

    # Calculate the rate of background below the signal curve inside (x_min, x_max)
    x_min, x_max = 5200, 5330
    x.setRange(hmin, hmax)
    bkg_total = exp.getNorm(RooArgSet(x))
    sig_total = gauss.getNorm(RooArgSet(x))

    x.setRange(x_min, x_max)
    bkg_level = exp.getNorm(RooArgSet(x))
    sig_level = gauss.getNorm(RooArgSet(x))

    bkg_ratio = bkg_level / bkg_total
    sig_ratio = sig_level / sig_total

    n_elements = hist.GetEntries()
    # TODO - normally get parameter from fit_result
    sig_part = d0_rate.getVal()
    bck_part = 1 - d0_rate.getVal()

    # estimating ratio of signal and background
    bck_sig_ratio = (bkg_ratio * n_elements * bck_part) / (sig_ratio * n_elements * sig_part)

    # n_events in (x_min, x_max)
    n_events_in_mass_region = numpy.sum((data_for_fit > x_min) & (data_for_fit < x_max))
    n_signal_in_mass_region = n_events_in_mass_region / (1. + bck_sig_ratio)

    return canvas, n_signal_in_mass_region

B_mass_range = [5050, 5500]
mass_for_fitting_plus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == +1)').B_M
mass_for_fitting_minus = real_data.query('(B_M > 5050) & (B_M < 5500) & (B_Charge == -1)').B_M

canvas_plus, n_positive_signal = compute_n_signal_by_fitting(mass_for_fitting_plus)
canvas_plus

canvas_minus, n_negative_signal = compute_n_signal_by_fitting(mass_for_fitting_minus)
canvas_minus
Computing asymmetry with subtracted background
print n_positive_signal, n_negative_signal
print (n_positive_signal - n_negative_signal) / (n_positive_signal + n_negative_signal)

n_mean = 0.5 * (n_positive_signal + n_negative_signal)
n_std = numpy.sqrt(0.25 * (n_positive_signal + n_negative_signal))
print (n_positive_signal - n_mean) / n_std
Read the Dakota tabular data file.
dat_file = '../examples/1-rosenbrock/dakota.dat'
data = numpy.loadtxt(dat_file, skiprows=1, unpack=True, usecols=[0,2,3,4])
data
Source: notebooks/1-rosenbrock.ipynb (mdpiper/dakota-tutorial, MIT)
Plot the path taken in the vector parameter study.
plot(data[1,], data[2,], 'ro')
xlim((-2, 2))
ylim((-2, 2))
xlabel('$x_1$')
ylabel('$x_2$')
title('Planview of parameter study locations')
Plot the values of the Rosenbrock function at the study locations.
plot(data[-1,], 'bo')
xlabel('index')
ylabel('Rosenbrock function value')
title('Rosenbrock function values at study locations')
What's the minimum value of the function over the study locations?
min(data[-1,:])
Nomenclature: Python has a specific nomenclature for enums. The class Color is an enumeration (or enum). The attributes Color.red, Color.green, etc., are enumeration members (or enum members). The enum members have names and values (the name of Color.red is red; the value of Color.blue is 3, etc.). Printing and representing enums: Enum types have human-readable string representations for print and repr:
print(MyEnum.first)
print(repr(MyEnum.first))
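The name/value nomenclature can be seen directly on the members; a small self-contained sketch (this Color enum is illustrative, not the notebook's MyEnum):

```python
from enum import Enum

class Color(Enum):
    red = 1
    green = 2
    blue = 3

print(Color.red.name)    # 'red'
print(Color.blue.value)  # 3
print(Color.red)         # Color.red
print(repr(Color.red))   # <Color.red: 1>
```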
Source: Enums.ipynb (JohnCrickett/PythonExamples, MIT)
The type of an enumeration member is the enumeration it belongs to:
type(MyEnum.first)
Alternative way to create an Enum: there is an alternative way to create an Enum that matches Python's NamedTuple: `Colour = Enum('Colour', 'red, green')`. Try it below:
SecondEnum = Enum('SecondEnum', 'first, second, third')
print(SecondEnum.first)
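With this functional API the member values are auto-assigned starting at 1; a quick check (stdlib enum only):

```python
from enum import Enum

SecondEnum = Enum('SecondEnum', 'first, second, third')

print(SecondEnum.first.value)  # 1
print(SecondEnum.third.value)  # 3
print(SecondEnum['second'])    # lookup by name: SecondEnum.second
```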
Let's declare a function with arguments
def eggs(arg1):  # Function arguments are declared inside the parentheses
    print "eggs", arg1

eggs("eggssss")  # Function calls specify arguments inside parentheses

def func(arg1, arg2, arg3):  # There is no limit on the number of arguments
    print "func", arg1, arg2, arg3

func("spam", "eggs", "fooo")
print func("spam", "eggs", "fooo")  # By default functions return None

def my_sum(arg1, arg2):
    return arg1 + arg2  # Use the return keyword to output any result

print my_sum(3, 5)
print my_sum(3.333, 5)
print my_sum("spam", "eggs")  # Given that Python is a dynamic language we can reuse the same method
Source: basic/4_Functions_classes_and_modules.ipynb (ealogar/curso-python, Apache-2.0)
Let's declare a function with arguments and default values
def my_pow(arg1, arg2=2):  # It is possible to define default values for arguments, always after arguments without default values
    return arg1 ** arg2

print my_pow(3)

def my_func(arg1, arg2=2, arg3=3, arg4=4):
    return arg1 ** arg2 + arg3 ** arg4

print my_func(3, arg3=2)  # Use keyword arguments to skip some of the arguments with default values
Let's use an arbitrary argument list
def my_func(arg1=1, arg2=2, *args):  # This arbitrary list is a (kind-of) tuple of positional arguments
    print args
    return arg1 + arg2

print my_func(2, 3)
print my_func(2, 3, 5, 7)

spam = (5, 7)
print my_func(2, 3, *spam)  # It is possible to unpack a tuple or list as an arbitrary list of arguments
The same applies for arbitrary keyword arguments
def my_func(arg1=1, arg2=2, **kwargs):  # This arbitrary 'kwargs' is a dict of keyword arguments
    print kwargs
    return arg1 + arg2

print my_func(2, 3)
print my_func(2, 3, param3=5, param4=7)

spam = {"param3": 5, "param4": 7}
print my_func(2, 3, **spam)  # It is possible to unpack a dict as arbitrary keyword arguments
Functions are first-class objects
def function_caller(f):
    f()

def func_as_arg():
    print 'There should be one-- and preferably only one --obvious way to do it.'

function_caller(func_as_arg)  # Functions can be passed as arguments
REMEMBER:
- Functions are declared with the 'def' keyword, their name, parentheses and a colon
- Specify arguments inside the parentheses
- Define arguments' default values with an equals sign, after arguments without default values
- Specify arbitrary arguments or keyword arguments with *args or **kwargs; actually only the asterisks matter, the name is up to you
- Use indentation for the body of the function, typically 4 spaces per level
- Functions are executed with their name followed by parentheses
- Provide input arguments inside the parentheses
- Provide keyword arguments specifying their name
- Functions can be declared and called outside classes
- Functions are first-class objects: you can pass them as arguments

Classes: let's see how to declare custom classes
class Spam:  # 'class' keyword, camel case class name and colon :
    pass

spammer = Spam()  # Class instantiation: spammer becomes an instance of Spam
print spammer

class Eggs(Spam):  # Ancestor superclasses inside parentheses for inheritance
    a_class_attr = "class_val"  # Class attributes inside the body, outside class methods. Must have value

    def __init__(self, attr_val):  # __init__ is called in the instance's initialization (not a constructor)
        self.attr = attr_val

    def method(self, arg1, arg2=None):  # Method declaration. Indented and receiving self (the instance)
        print "'method' of", self
        print self.attr, arg1, arg2  # Access instance attributes using self with a dot .

    def second_method(self):
        self.attr = 99.99
        self.method("FROM 2nd")  # Methods may call other methods using self with a dot .
Still easy?
egger = Eggs(12.345)  # Provide __init__ arguments in the instantiation
print egger
print egger.attr  # Retrieve instance attributes with a dot
print egger.a_class_attr  # Retrieve class attributes with a dot
print Eggs.a_class_attr

egger.a_class_attr = "new value"
print egger.a_class_attr
print Eggs.a_class_attr
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
Class attributes can be retrieved directly from the class. Instances only modify class attribute values locally.
print Eggs
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
Classes are objects too: Python evaluates the declaration and instantiates a special object. This object is called each time a new class instance is created.
egger.method("value1", "value2")
egger.second_method()

print egger.method
print Eggs.method

inst_method = egger.method
inst_method("valueA", "valueB")
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
Methods are also (bound) attributes of classes and instances. Time to talk about new-style classes.
class Spam:
    def spam_method(self):
        print self.__class__  # __class__ is a special attribute containing the class of any object
        print type(self)

spammer = Spam()
spammer.spam_method()
print spammer
print type(spammer)  # Why does type say it is an 'instance' and not a 'Spam'?

class Spam(object):  # Inherit from 'object'
    def spam_method(self):
        print self.__class__
        print type(self)

spammer = Spam()
print spammer
print type(spammer)  # This is a new-style class
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
New-style classes were introduced in Python 2.2 to unify classes and types
- Provide a unified object model with a full meta-model (more in the Advanced block)
- Other benefits: subclass most built-in types, descriptors (slots, properties, static and class methods)...
- By default all classes are old-style until Python 3
  - In Python 2 you have to inherit from 'object' to use new-style
- You must avoid old-style classes
  - So you must ALWAYS inherit from 'object'
- Other changes introduced in Python 2.2: `__new__`, new dir() behavior, metaclasses, new MRO (also in 2.3)

More info: http://www.python.org/doc/newstyle/
class OldStyleClass():
    pass

old_inst = OldStyleClass()
print type(old_inst)

# Let's inherit from an old-style class
class NewStyleSubClass(OldStyleClass, object):  # Multiple inheritance
    pass

new_inst = NewStyleSubClass()
print type(new_inst)
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
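One of the new-style benefits listed above, properties, can be sketched in a few lines (this example is not from the course; Temperature is a made-up name):

```python
class Temperature(object):  # new-style: descriptors such as property work reliably
    def __init__(self, celsius):
        self._celsius = celsius  # underscore prefix: "private" by convention only

    @property
    def fahrenheit(self):  # computed attribute, read with plain dot access
        return self._celsius * 9.0 / 5 + 32

t = Temperature(100)
print(t.fahrenheit)  # 212.0, no parentheses needed
```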
Inherit from both old-style classes and 'object' to obtain new-style classes. Let's play a bit with inheritance.
class Spam(object):
    spam_class_attr = "spam"  # Class attributes must always have a value (you may use None...)

    def spam_method(self):
        print "spam_method", self, self.spam_class_attr
        print self.__class__

class Eggs(object):
    eggs_class_attr = "eggs"

    def eggs_method(self):
        print "eggs_method", self, self.eggs_class_attr
        print self.__class__

class Fooo(Spam, Eggs):  # Specify a list of ancestor superclasses
    fooo_class_attr = "fooo"

    def fooo_method(self):
        self.spam_method()
        self.eggs_method()  # Retrieve superclasses' attributes as if they were yours
        print "fooo_method", self, self.fooo_class_attr
        print self.__class__

foooer = Fooo()
foooer.fooo_method()
foooer.spam_method()
foooer.eggs_method()  # self is ALWAYS an instance of the subclass

print foooer.spam_class_attr
print foooer.eggs_class_attr
print foooer.fooo_class_attr  # We have access to all our own and our ancestors' attributes

# Given that Python is a dynamic language...
class Spam(object):
    pass

spammer = Spam()
spammer.name = "John"
spammer.surname = "Doe"
spammer.age = 65
spammer.male = True  # ... this is legal

print spammer.name
print spammer.surname
print spammer.age
print spammer.male
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
What about static or class methods?
class Spam(object):
    def method(self, arg=None):
        print "Called 'method' with", self, arg

    @classmethod  # This is a decorator
    def cls_method(cls, arg=None):
        print "Called 'cls_method' with", cls, arg

    @staticmethod  # This is another decorator
    def st_method(arg=None):
        print "Called 'st_method' with", arg

spammer = Spam()

spammer.method(10)
Spam.method(spammer, 100)  # Although it works, this is not exactly the same
print spammer.method
print Spam.method  # It is unbound, not related to an instance

spammer.cls_method(20)
Spam.cls_method(200)
print spammer.cls_method
print Spam.cls_method  # Both are bound methods... bound to the class

spammer.st_method(30)
Spam.st_method(300)
print spammer.st_method
print Spam.st_method  # Both are plain standard functions
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
REMEMBER:
- Classes are declared with the 'class' keyword, its name in camel case and a colon
- Specify the ancestor superclasses list between parentheses after the class name
  - You must ALWAYS inherit from 'object' to have new-style classes
- Use indentation for class body declarations (attributes and methods)
- Specify class attributes (with a value) inside the class, outside any method
- Specify methods inside the body, with indentation (a method body has 2x indentation)
- A method's first parameter is always self, the instance whose method is being called
  - Use self to access attributes and other methods of the instance
- When inheriting, ancestors' attributes and methods can be accessed transparently
- There are no private attributes in Python
  - There is a convention to use an underscore _ prefix
- A class definition is not closed: at any time you can add (or delete) an attribute
- @classmethod to specify class methods; bound to the class, not its instances
  - Used to implement alternative constructors (e.g. dict.fromkeys)
- @staticmethod to specify static methods; standard functions declared inside the class
  - Only for organisation; it is equivalent to declaring the function in the class's module

Modules

What is a Python module?
- A module is a file containing Python definitions and statements
- The Python interpreter reads the file and evaluates its definitions and statements
- Python does not accept dashes - in module names
print "'__name__' value:", __name__
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
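The alternative-constructor pattern mentioned in the summary above can be sketched like this (Point and from_tuple are invented names for illustration):

```python
class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def from_tuple(cls, pair):  # alternative constructor: receives the class, not an instance
        return cls(pair[0], pair[1])

p = Point.from_tuple((3, 4))
print(p.x)  # 3
print(p.y)  # 4
```

Because the method receives `cls`, subclasses of Point inherit a constructor that builds instances of the subclass.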
The file name is the module name with the suffix .py appended. The global variable `__name__` contains the name of the current module. Functions and classes also have a variable containing their module name.
def func():
    print "Called func in", __name__

print "'func.__module__' value:", func.__module__

!cat my_modules.py
!python my_modules.py

import my_modules
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
The module name depends on how the module is being evaluated (imported or executed). Use `if __name__ == "__main__":` to detect whether a module (script) is being imported or executed.
# What will happen if we import the module again?
import my_modules  # All code is evaluated (executed) only once, the first time it is imported

func()
my_modules.func()

from my_modules import func
func()
func()

!rm -rf basic_tmp
!mkdir basic_tmp
!echo 'print "This is the __init__.py", __name__\n' > basic_tmp/__init__.py
!cp my_modules.py basic_tmp
!python -c "import basic_tmp.my_modules"
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
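The import-or-execute check can be sketched as a small module file; the code below is a hypothetical my_modules.py-style script, not the course's actual file:

```python
# Sketch of a module with an import/execute guard
def func():
    print("Called func in " + __name__)

if __name__ == "__main__":
    # This branch runs only when the file is executed directly
    # (e.g. `python my_modules.py`), not when it is imported.
    func()
```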
Packages are folders with an `__init__.py` file
- This `__init__.py` is also evaluated, so it may contain code
- The `__init__.py` is actually the package (check its module name)
- The module name depends on the packages path
!python -c "from basic_tmp.my_modules import func;func();print my_modules"
!python -c "from basic_tmp.my_modules import func as the_module;the_module();print the_module.__name__"
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
LESSONS LEARNT:
- Modules are objects too, and their variables, functions and classes are their attributes
- Modules can be imported in different ways:
  - import packages.path.to.module.module_name
  - from packages.path.to.module import module_name_1, module_name_2
  - from packages.path.to.module import (module_name_1, module_name_2, module_name_3, module_name_4)
  - from packages.path.to.module import module_name as new_module_name
    - You are binding the module to another name, like you do with lists or strings
- A module is independent of how you call (bind) it when importing
!rm -rf basic_tmp
!mkdir basic_tmp
!echo 'print "This is the __init__.py", __name__\n' > basic_tmp/__init__.py
!cp my_modules.py basic_tmp
!echo 'from my_modules import func\n' > basic_tmp/__init__.py
!python -c "from basic_tmp import func;func()"
basic/4_Functions_classes_and_modules.ipynb
ealogar/curso-python
apache-2.0
Statistical Analysis and Data Exploration In this first section of the project, you will quickly investigate a few basic statistics about the dataset you are working with. In addition, you'll look at the client's feature set in CLIENT_FEATURES and see how this particular sample relates to the features of the dataset. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand your results. Step 1 In the code block below, use the imported numpy library to calculate the requested statistics. You will need to replace each None you find with the appropriate numpy coding for the proper statistic to be printed. Be sure to execute the code block each time to test if your implementation is working successfully. The print statements will show the statistics you calculate!
# Number of houses in the dataset
total_houses = housing_features.shape[0]

# Number of features in the dataset
total_features = housing_features.shape[1]

# Minimum housing value in the dataset
minimum_price = housing_prices.min()

# Maximum housing value in the dataset
maximum_price = housing_prices.max()

# Mean house value of the dataset
mean_price = housing_prices.mean()

# Median house value of the dataset
median_price = numpy.median(housing_prices)

# Standard deviation of housing values of the dataset
std_dev = numpy.std(housing_prices)

# Show the calculated statistics
print "Boston Housing dataset statistics (in $1000's):\n"
print "Total number of houses:", total_houses
print "Total number of features:", total_features
print "Minimum house price:", minimum_price
print "Maximum house price:", maximum_price
print "Mean house price: {0:.3f}".format(mean_price)
print "Median house price:", median_price
print "Standard deviation of house price: {0:.3f}".format(std_dev)

axe = seaborn.distplot(housing_data.median_value)
title = axe.set_title('Median Housing Prices')
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
Question 1 As a reminder, you can view a description of the Boston Housing dataset here, where you can find the different features under Attribute Information. The MEDV attribute relates to the values stored in our housing_prices variable, so we do not consider that a feature of the data. Of the features available for each data point, choose three that you feel are significant and give a brief description for each of what they measure. Remember, you can double click the text box below to add your answer!
seaborn.set_style('whitegrid', {'figure.figsize': (10, 8)})

with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    for column in housing_data.columns:
        grid = seaborn.lmplot(column, 'median_value', data=housing_data, size=8)
        axe = grid.fig.gca()
        title = axe.set_title('{0} vs price'.format(column))
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
CRIM: 11.95, INDUS: 18.1, LSTAT: 12.13 Evaluating Model Performance In this second section of the project, you will begin to develop the tools necessary for a model to make a prediction. Being able to accurately evaluate each model's performance through the use of these tools helps to greatly reinforce the confidence in your predictions. Step 2 In the code block below, you will need to implement code so that the shuffle_split_data function does the following: - Randomly shuffle the input data X and target labels (housing values) y. - Split the data into training and testing subsets, holding 30% of the data for testing. If you use any functions not already accessible from the imported libraries above, remember to include your import statement below as well! Ensure that you have executed the code block once you are done. You'll know the shuffle_split_data function is working if the statement "Successfully shuffled and split the data!" is printed.
# Put any import statements you need for this code block here
from sklearn import cross_validation

def shuffle_split_data(X, y):
    """ Shuffles and splits data into 70% training and 30% testing subsets,
        then returns the training and testing subsets. """
    # Shuffle and split the data
    X_train, X_test, y_train, y_test = cross_validation.train_test_split(
        X, y, test_size=.3, random_state=0)

    # Return the training and testing data subsets
    return X_train, y_train, X_test, y_test

# Test shuffle_split_data
X_train, y_train, X_test, y_test = shuffle_split_data(housing_features, housing_prices)

feature_length = len(housing_features)
train_length = round(.7 * feature_length)
test_length = round(.3 * feature_length)
assert len(X_train) == train_length, "Expected: {0} Actual: {1}".format(.7 * feature_length, len(X_train))
assert len(X_test) == test_length, "Expected: {0} Actual: {1}".format(int(.3 * feature_length), len(X_test))
assert len(y_train) == train_length
assert len(y_test) == test_length
print "Successfully shuffled and split the data!"
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
Question 4 Which performance metric below did you find was most appropriate for predicting housing prices and analyzing the total error. Why? - Accuracy - Precision - Recall - F1 Score - Mean Squared Error (MSE) - Mean Absolute Error (MAE) Mean Squared Error was the most appropriate performance metric for predicting housing prices because we are predicting a numeric value (this is a regression problem) and while Mean Absolute Error could also be used, the MSE emphasizes larger errors more (due to the squaring) and so is preferable. Step 4 (Final Step) In the code block below, you will need to implement code so that the fit_model function does the following: - Create a scoring function using the same performance metric as in Step 2. See the sklearn make_scorer documentation. - Build a GridSearchCV object using regressor, parameters, and scoring_function. See the sklearn documentation on GridSearchCV. When building the scoring function and GridSearchCV object, be sure that you read the parameters documentation thoroughly. It is not always the case that a default parameter for a function is the appropriate setting for the problem you are working on. Since you are using sklearn functions, remember to include the necessary import statements below as well! Ensure that you have executed the code block once you are done. You'll know if the fit_model function is working if the statement "Successfully fit a model to the data!" is printed.
# Put any import statements you need for this code block
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV

def fit_model(X, y):
    """ Tunes a decision tree regressor model using GridSearchCV on the input data X
        and target labels y and returns this optimal model. """
    # Create a decision tree regressor object
    regressor = DecisionTreeRegressor()

    # Set up the parameters we wish to tune
    parameters = {'max_depth': (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)}

    # Make an appropriate scoring function
    scoring_function = make_scorer(mean_squared_error, greater_is_better=False)

    # Make the GridSearchCV object
    reg = GridSearchCV(regressor, param_grid=parameters, scoring=scoring_function, cv=10)

    # Fit the learner to the data to obtain the optimal model with tuned parameters
    reg.fit(X, y)

    # Return the optimal model
    return reg

# Test fit_model on entire dataset
reg = fit_model(housing_features, housing_prices)
print "Successfully fit a model!"
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
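The claim above, that MSE emphasizes larger errors more than MAE, can be checked with a toy residual vector (the numbers are invented for illustration):

```python
# Hypothetical prediction errors with one large outlier
errors = [1.0, 1.0, 10.0]

mae = sum(abs(e) for e in errors) / len(errors)  # (1 + 1 + 10) / 3 = 4.0
mse = sum(e ** 2 for e in errors) / len(errors)  # (1 + 1 + 100) / 3 = 34.0

# The single error of 10 contributes 100 of the 102 total squared error,
# but only 10 of the 12 total absolute error.
print(mae, mse)
```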
Analyzing Model Performance In this third section of the project, you'll take a look at several models' learning and testing error rates on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing max_depth parameter on the full training set to observe how model complexity affects learning and testing errors. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    learning_curves(X_train, y_train, X_test, y_test)
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
Question 7 Choose one of the learning curve graphs that are created above. What is the max depth for the chosen model? As the size of the training set increases, what happens to the training error? What happens to the testing error? Looking at the model with a max depth of 3: as the size of the training set increases, the training error gradually increases. The testing error initially decreases, then seems to more or less stabilize. Question 8 Look at the learning curve graphs for the model with a max depth of 1 and a max depth of 10. When the model is using the full training set, does it suffer from high bias or high variance when the max depth is 1? What about when the max depth is 10? The training and testing plots for the model with max depth 1 move toward convergence with an error near 50, indicating high bias (the model is too simple, and the additional data isn't improving its generalization). For the model with max depth 10, the curves haven't converged and the training error remains near 0, indicating that it suffers from high variance and should improve with more data.
model_complexity(X_train, y_train, X_test, y_test)
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
Question 9 From the model complexity graph above, describe the training and testing errors as the max depth increases. Based on your interpretation of the graph, which max depth results in a model that best generalizes the dataset? Why? As max depth increases the training error improves, while the testing error decreases up until a depth of 6 and then begins a slight increase as the depth grows further. Based on this I would say that the max depth of 6 created the model that best generalized the dataset, as it minimized the testing error. Model Prediction In this final section of the project, you will make a prediction on the client's feature set using an optimized model from fit_model. To answer the following questions, it is recommended that you run the code blocks several times and use the median or mean value of the results. Question 10 Using grid search on the entire dataset, what is the optimal max_depth parameter for your model? How does this result compare to your initial intuition? Hint: Run the code block below to see the max depth produced by your optimized model.
print "Final model optimal parameters:", reg.best_params_
reg.best_score_

models = (fit_model(housing_features, housing_prices) for model in range(1000))
params_scores = [(model.best_params_, model.best_score_) for model in models]
parameters = numpy.array([param_score[0]['max_depth'] for param_score in params_scores])
scores = numpy.array([param_score[1] for param_score in params_scores])

grid = seaborn.distplot(parameters)
grid = seaborn.boxplot(parameters)

import matplotlib.pyplot as plot
grid = plot.plot(sorted(parameters), numpy.linspace(0, 1, len(parameters)))

best_models = pandas.DataFrame.from_dict({'parameter': parameters, 'score': scores})
x_labels = sorted(best_models.parameter.unique())
figure = plot.figure()
axe = figure.gca()
grid = seaborn.boxplot('parameter', 'score', data=best_models, order=x_labels, ax=axe)
grid = seaborn.swarmplot('parameter', 'score', data=best_models, order=x_labels,
                         ax=axe, color='w', alpha=0.5)
title = axe.set_title("Best Parameters vs Best Scores")

grid = seaborn.distplot(scores)

print('min parameter: {0}'.format(numpy.min(parameters)))
print('median parameter: {0}'.format(numpy.median(parameters)))
print('mean parameter: {0}'.format(numpy.mean(parameters)))
print('max parameter: {0}'.format(numpy.max(parameters)))

# since the goal is to minimize the errors, sklearn negates the MSE
# so the highest score (the least negative) is the best score
print('min score: {0}'.format(numpy.min(scores)))
print('max score: {0}'.format(numpy.max(scores)))

best_index = numpy.where(scores == numpy.max(scores))
print(scores[best_index])
print(parameters[best_index])

bin_range = best_models.parameter.max() - best_models.parameter.min()
bins = pandas.cut(best_models.parameter, bin_range)
print(bins.value_counts())

parameter_group = pandas.groupby(best_models, 'parameter')
parameter_group.median()
parameter_group.max()
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
While a max-depth of 3 was the most common best-parameter, the max-depth of 5 was the median max-depth, had the highest median score, and had the highest overall score, so I will say that the optimal max_depth parameter is 5. This is slightly lower than my guess of 6, but doesn't seem too far off, although a max-depth of 7 seems to be a slight improvement over 6 as well. Question 11 With your parameter-tuned model, what is the best selling price for your client's home? How does this selling price compare to the basic statistics you calculated on the dataset? Hint: Run the code block below to have your parameter-tuned model make a prediction on the client's home.
sale_price = reg.predict(CLIENT_FEATURES)
print "Predicted value of client's home: {0:.3f}".format(sale_price[0])
machine_learning/udacity/project_1/boston_housing_exploration.ipynb
necromuralist/machine_learning_studies
mit
We need to import several things from Keras.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, Input
from tensorflow.keras.layers import Reshape, MaxPooling2D
from tensorflow.keras.layers import Conv2D, Dense, Flatten
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
PrettyTensor API This is how the Convolutional Neural Network was implemented in Tutorial #03 using the PrettyTensor API. It is shown here for easy comparison to the Keras implementation below.
if False:
    x_pretty = pt.wrap(x_image)

    with pt.defaults_scope(activation_fn=tf.nn.relu):
        y_pred, loss = x_pretty.\
            conv2d(kernel=5, depth=16, name='layer_conv1').\
            max_pool(kernel=2, stride=2).\
            conv2d(kernel=5, depth=36, name='layer_conv2').\
            max_pool(kernel=2, stride=2).\
            flatten().\
            fully_connected(size=128, name='layer_fc1').\
            softmax_classifier(num_classes=num_classes, labels=y_true)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Sequential Model The Keras API has two modes of constructing Neural Networks. The simplest is the Sequential Model which only allows for the layers to be added in sequence.
# Start construction of the Keras Sequential model.
model = Sequential()

# Add an input layer which is similar to a feed_dict in TensorFlow.
# Note that the input-shape must be a tuple containing the image-size.
model.add(InputLayer(input_shape=(img_size_flat,)))

# The input is a flattened array with 784 elements,
# but the convolutional layers expect images with shape (28, 28, 1)
model.add(Reshape(img_shape_full))

# First convolutional layer with ReLU-activation and max-pooling.
model.add(Conv2D(kernel_size=5, strides=1, filters=16, padding='same',
                 activation='relu', name='layer_conv1'))
model.add(MaxPooling2D(pool_size=2, strides=2))

# Second convolutional layer with ReLU-activation and max-pooling.
model.add(Conv2D(kernel_size=5, strides=1, filters=36, padding='same',
                 activation='relu', name='layer_conv2'))
model.add(MaxPooling2D(pool_size=2, strides=2))

# Flatten the 4-rank output of the convolutional layers
# to 2-rank that can be input to a fully-connected / dense layer.
model.add(Flatten())

# First fully-connected / dense layer with ReLU-activation.
model.add(Dense(128, activation='relu'))

# Last fully-connected / dense layer with softmax-activation
# for use in classification.
model.add(Dense(num_classes, activation='softmax'))
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Model Compilation The Neural Network has now been defined and must be finalized by adding a loss-function, optimizer and performance metrics. This is called model "compilation" in Keras. We can either define the optimizer using a string, or if we want more control of its parameters then we need to instantiate an object. For example, we can set the learning-rate.
from tensorflow.keras.optimizers import Adam

optimizer = Adam(lr=1e-3)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
For a classification-problem such as MNIST which has 10 possible classes, we need to use the loss-function called categorical_crossentropy. The performance metric we are interested in is the classification accuracy.
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
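For intuition, categorical cross-entropy for a single example is just the negative log of the probability the model assigned to the true class. A small sketch with invented numbers (Keras computes this internally; you never have to write it yourself):

```python
import math

# One-hot true label and a softmax-style prediction
y_true = [0.0, 1.0, 0.0]
y_pred = [0.1, 0.7, 0.2]

# categorical cross-entropy: -sum(t * log(p)) over the classes
loss = -sum(t * math.log(p) for t, p in zip(y_true, y_pred))
print(loss)  # -log(0.7), roughly 0.357
```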
Training Now that the model has been fully defined with loss-function and optimizer, we can train it. This function takes numpy-arrays and performs the given number of training epochs using the given batch-size. An epoch is one full use of the entire training-set. So for 10 epochs we would iterate randomly over the entire training-set 10 times.
model.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Evaluation Now that the model has been trained we can test its performance on the test-set. This also uses numpy-arrays as input.
result = model.evaluate(x=data.x_test, y=data.y_test)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Prediction We can also predict the classification for new images. We will just use some images from the test-set but you could load your own images into numpy arrays and use those instead.
images = data.x_test[0:9]
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Functional Model The Keras API can also be used to construct more complicated networks using the Functional Model. This may look a little confusing at first, because each call to the Keras API will create and return an instance that is itself callable. It is not clear whether it is a function or an object - but we can call it as if it is a function. This allows us to build computational graphs that are more complex than the Sequential Model allows.
# Create an input layer which is similar to a feed_dict in TensorFlow.
# Note that the input-shape must be a tuple containing the image-size.
inputs = Input(shape=(img_size_flat,))

# Variable used for building the Neural Network.
net = inputs

# The input is an image as a flattened array with 784 elements.
# But the convolutional layers expect images with shape (28, 28, 1)
net = Reshape(img_shape_full)(net)

# First convolutional layer with ReLU-activation and max-pooling.
net = Conv2D(kernel_size=5, strides=1, filters=16, padding='same',
             activation='relu', name='layer_conv1')(net)
net = MaxPooling2D(pool_size=2, strides=2)(net)

# Second convolutional layer with ReLU-activation and max-pooling.
net = Conv2D(kernel_size=5, strides=1, filters=36, padding='same',
             activation='relu', name='layer_conv2')(net)
net = MaxPooling2D(pool_size=2, strides=2)(net)

# Flatten the output of the conv-layer from 4-dim to 2-dim.
net = Flatten()(net)

# First fully-connected / dense layer with ReLU-activation.
net = Dense(128, activation='relu')(net)

# Last fully-connected / dense layer with softmax-activation
# so it can be used for classification.
net = Dense(num_classes, activation='softmax')(net)

# Output of the Neural Network.
outputs = net
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
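The "create an instance, then immediately call it" pattern is plain Python: any object whose class defines `__call__` behaves like a function. A toy illustration (Scale is a made-up stand-in for a Keras layer, not part of any API):

```python
class Scale(object):
    """A layer-like object: configured on construction, applied by calling."""
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, x):  # makes instances callable, like Keras layers
        return x * self.factor

net = 3
net = Scale(2)(net)   # construct the object, then immediately call it
net = Scale(10)(net)
print(net)  # 60
```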
Model Compilation We have now defined the architecture of the model with its input and output. We now have to create a Keras model and compile it with a loss-function and optimizer, so it is ready for training.
from tensorflow.python.keras.models import Model
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Create a new instance of the Keras Functional Model. We give it the inputs and outputs of the Convolutional Neural Network that we constructed above.
model2 = Model(inputs=inputs, outputs=outputs)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Compile the Keras model using the RMSprop optimizer and with a loss-function for multiple categories. The only performance metric we are interested in is the classification accuracy, but you could use a list of metrics here.
model2.compile(optimizer='rmsprop',
               loss='categorical_crossentropy',
               metrics=['accuracy'])
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Training The model has now been defined and compiled so it can be trained using the same fit() function as used in the Sequential Model above. This also takes numpy-arrays as input.
model2.fit(x=data.x_train, y=data.y_train, epochs=1, batch_size=128)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Evaluation Once the model has been trained we can evaluate its performance on the test-set. This is the same syntax as for the Sequential Model.
result = model2.evaluate(x=data.x_test, y=data.y_test)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
The result is a list of values, containing the loss-value and all the metrics we defined when we compiled the model. Note that 'accuracy' is now called 'acc' which is a small inconsistency.
for name, value in zip(model2.metrics_names, result):
    print(name, value)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We can also print the classification accuracy as a percentage:
print("{0}: {1:.2%}".format(model2.metrics_names[1], result[1]))
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Examples of Mis-Classified Images We can plot some examples of mis-classified images from the test-set. First we get the predicted classes for all the images in the test-set:
y_pred = model2.predict(x=data.x_test)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
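predict() returns one probability per class for each image; the predicted class is the index of the largest probability (the argmax, e.g. via numpy.argmax). A pure-Python sketch with invented probability rows:

```python
# Each row holds one probability per class for one image
y_pred = [[0.1, 0.8, 0.1],
          [0.6, 0.3, 0.1]]

# Predicted class = index of the largest probability in each row
cls_pred = [max(range(len(row)), key=row.__getitem__) for row in y_pred]
print(cls_pred)  # [1, 0]
```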
Save & Load Model NOTE: You need to install h5py for this to work! Tutorial #04 was about saving and restoring the weights of a model using native TensorFlow code. It was an absolutely horrible API! Fortunately, Keras makes this very easy. This is the file-path where we want to save the Keras model.
path_model = 'model.keras'
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Saving a Keras model with the trained weights is then just a single function call, as it should be.
model2.save(path_model)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Delete the model from memory so we are sure it is no longer used.
del model2
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We need to import this Keras function for loading the model.
from tensorflow.python.keras.models import load_model
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
Loading the model is then just a single function-call, as it should be.
model3 = load_model(path_model)
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit
We can then use the model again e.g. to make predictions. We get the first 9 images from the test-set and their true class-numbers.
images = data.x_test[0:9]
cls_true = data.y_test_cls[0:9]
03C_Keras_API.ipynb
Hvass-Labs/TensorFlow-Tutorials
mit