First, we compute yearly mean temperatures and then calculate the overall mean temperature.
yearly_tmean = dd1.T2MMEAN.resample(time="1AS").mean('time')[:, 0, 0]
tmean_mean = yearly_tmean.mean(axis=0)
print('Overall mean for tmean is ' + str("%.2f" % tmean_mean.values))
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Now it is time to plot the mean annual temperature in the Arctic. The overall average for 1980-2019 is marked with a red dotted line, and the green line marks the trend. The most significant thing we can see from the plot is that temperatures have been rising over the years. Small anomalies are normal, but after 2014 the temperature hasn't dropped below average at all, which is an unusual pattern. For example, in 2004 and 2014 the temperature was more than 2 degrees above the overall average, while in 2017 it was more than 3 °C above the average.
make_plot(yearly_tmean.loc[yearly_tmean['time.year'] < 2019],
          dataset,
          'Mean annual temperature in ' + ' '.join(area_name.split('_')),
          ylabel='Temperature [C]',
          compare_line=tmean_mean.values,
          trend=True)
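The trend line drawn by `make_plot` (the notebook's own helper) can be sketched with a simple least-squares fit. This is a minimal illustration, not the notebook's code; the temperature values below are hypothetical:

```python
import numpy as np

# Hypothetical yearly mean temperatures (deg C) for 1980-1984
years = np.array([1980, 1981, 1982, 1983, 1984])
tmean = np.array([-10.2, -10.0, -10.1, -9.7, -9.5])

# Fit a first-degree polynomial: the slope is the warming rate per year
slope, intercept = np.polyfit(years, tmean, 1)
trend = slope * years + intercept
print("warming rate: %.3f degC/year" % slope)
```

A positive slope corresponds to the rising trend described above.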
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Here we show the number of days per year when the mean temperature exceeds 10 °C. We can see that the year 2004 stands out clearly.
daily_data = dd1.T2MMEAN[:, 0, 0]
make_plot(daily_data[np.where(daily_data.values > 10)].groupby('time.year').count(),
          dataset,
          'Number of days in year when mean temperature exceeds 10 $^o$C in ' + ' '.join(area_name.split('_')),
          ylabel='Days of year')
print('Yearly average days when temperature exceeds 10 C is ' +
      str("%.1f" % daily_data[np.where(daily_data.values > 10)].groupby('time.year').count().mean().values))

data_jan = dd1.sel(time=dd1['time.month'] < 3)
jan_mean_temp = np.mean(data_jan.T2MMEAN)
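The threshold-count-per-year idea above can be sketched without xarray, using plain NumPy on hypothetical data (the years and temperatures below are made up for illustration):

```python
import numpy as np

# Hypothetical daily mean temperatures (deg C), tagged with their year
years = np.array([2003, 2003, 2003, 2004, 2004, 2004])
temps = np.array([8.0, 11.5, 12.0, 9.0, 13.0, 7.5])

# Count, per year, the days whose mean temperature exceeds 10 deg C
hot = temps > 10
counts = {int(y): int(np.count_nonzero(hot & (years == y)))
          for y in np.unique(years)}
print(counts)  # {2003: 2, 2004: 1}
```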
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Last, we will compare observational data from the beginning of this year with the overall January and February mean.
compare_observations_analysis_mean(time_synop, temp_synop, 'Air Temperature',
                                   jan_mean_temp.values,
                                   '10 year average temperature',
                                   'Beginning of 2019 temperature in ' + ' '.join(area_name.split('_')),
                                   'january_feb_' + area_name + '.png')
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
For real problems (like below) this generally gives a bit of a speed boost. After doing that we can import quimb:
import quimb as qu
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
We are not going to construct the Hamiltonian directly; instead we leave it as a Lazy object, so that each MPI process can construct its own rows and avoid redundant communication and memory use. To do that we need to know the size of the matrix first:
# total hilbert space for 18 spin-1/2s
n = 18
d = 2**n
shape = (d, d)

# And make the lazy representation
H_opts = {'n': n, 'dh': 3.0, 'sparse': True, 'seed': 9}
H = qu.Lazy(qu.ham_mbl, **H_opts, shape=shape)
H
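The core idea of such a Lazy object is to defer construction: store the function and its arguments, and only do the work on demand. This is a minimal plain-Python sketch of that idea, not quimb's actual implementation (the class and function names here are invented for illustration):

```python
# A minimal sketch of a deferred-construction wrapper
class LazySketch:
    def __init__(self, fn, *args, **kwargs):
        self.fn, self.args, self.kwargs = fn, args, kwargs

    def construct(self):
        # nothing is computed until this call
        return self.fn(*self.args, **self.kwargs)

def big_matrix(n):
    # stand-in for an expensive matrix builder
    return [[i * n + j for j in range(n)] for i in range(n)]

lazy_m = LazySketch(big_matrix, 3)  # cheap: no matrix built yet
m = lazy_m.construct()              # the expensive work happens here
print(m)
```

In the MPI setting, each worker can call the deferred constructor itself, so the full matrix is never serialized and shipped between processes.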
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
This Hamiltonian also conserves z-spin, which we can use to make the effective problem significantly smaller. This is done by supplying a projector onto the subspace we are targeting. We also need to know its size first if we want to leave it 'unconstructed':
# total Sz=0 subspace size (n choose n // 2)
from scipy.special import comb
ds = comb(n, n // 2, exact=True)
shape = (d, ds)

# And make the lazy representation
P = qu.Lazy(qu.zspin_projector, n=n, shape=shape)
P
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
Now we can solve the Hamiltonian for 5 eigenpairs centered around energy 0:
%%time
lk, vk = qu.eigh(H, P=P, k=5, sigma=0.0, backend='slepc')
print('energies:', lk)
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
eigh takes care of projecting H into the subspace ($\tilde{H} = P^{\dagger} H P$), and mapping the eigenvectors back to the computational basis once found. Here we specified the 'slepc' backend. In an interactive session, this will spawn the MPI workers for you (using mpi4py). Other options would be to run this in a script using quimb-mpi-python, which pro-actively spawns workers from the get-go, or quimb-mpi-python --syncro, which is the more traditional 'synchronised' MPI mode. These modes would be more suitable for a cluster and large problems (see docs/examples/ex_mpi_expm_evo.py). Now that we have the 5 eigenpairs, we can compute the 'entanglement matrix' for each, with varying block size. However, seeing as we have a pool of MPI workers already, let's also reuse it to parallelize the computation:
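The projection step $\tilde{H} = P^{\dagger} H P$ can be illustrated on a toy problem with NumPy. This sketch (not quimb code) uses a block-diagonal "Hamiltonian" whose first two basis vectors span an invariant subspace, so the projected eigenvalues are exactly a subset of the full spectrum:

```python
import numpy as np

# Toy symmetric Hamiltonian, block diagonal in the chosen basis
H = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 2.0, 0.0, 0.0],
              [0.0, 0.0, 3.0, 0.0],
              [0.0, 0.0, 0.0, 4.0]])

# Projector onto the invariant subspace (orthonormal columns)
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])

# Effective Hamiltonian in the subspace: H_tilde = P^dagger H P
H_tilde = P.T.conj() @ H @ P
evals_sub = np.linalg.eigvalsh(H_tilde)
evals_full = np.linalg.eigvalsh(H)
print(evals_sub)   # eigenvalues of the 2x2 block only
print(evals_full)  # the same two values, plus 3 and 4
```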
# get an MPI executor pool
pool = qu.linalg.mpi_launcher.get_mpi_pool()

# 'submit' the function with args to the pool
e_k_b_ij = [[pool.submit(qu.ent_cross_matrix, vk[:, [k]], sz_blc=b)
             for b in [1, 2, 3, 4]]
            for k in range(5)]
e_k_b_ij
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
Once we have submitted all this work to the pool (which works in any of the modes described above), we can retrieve the results:
# convert each 'Future' into its result
e_k_b_ij = [[f.result() for f in e_b_ij] for e_b_ij in e_k_b_ij]

%matplotlib inline
from matplotlib import pyplot as plt

fig, axes = plt.subplots(4, 5, figsize=(15, 10), squeeze=True)
for k in range(5):
    for b in [1, 2, 3, 4]:
        e_ij = e_k_b_ij[k][b - 1]
        axes[b - 1, k].imshow(e_ij, vmax=1)
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
MyriaL Basics

First you scan in whatever tables you want to use in that cell. These are the tables visible in the Myria-Web datasets tab.

R1 = scan(cosmo8_000970);

This puts all of the data in the cosmo8_000970 table into the relation R1, which can now be queried with MyriaL:

R2 = select * from R1 limit 5;

Once we have a relation, we can query it and store the result in a new relation. Sometimes you just want to see the output of the cell you're running, and sometimes you want to store the result for later use. In either case, you have to store the relation whose output you want to see, because otherwise Myria will optimize the query into an empty query.

store(R2, MyInterestingResult);

This will add MyInterestingResult to the list of datasets on Myria-Web. If you are running multiple queries and want to just see their results without storing multiple new tables, you can pick a name and overwrite it repeatedly:

%%query ... store(R2, temp); ... %%query ... store(R50, temp);

All statements need to be ended with a semicolon! Also, note that a MyriaL cell cannot contain any Python. Cells are Python by default, but a MyriaL cell starts with %%query and can only contain MyriaL syntax.
%%query
-- comments in MyriaL look like this
-- notice that the notebook highlighting still thinks we are writing python: in, from, for, range, return
R1 = scan(cosmo8_000970);
R2 = select * from R1 limit 5;
R3 = select iOrder from R1 limit 5;
store(R2, garbage);

%%query
-- there are some built in functions that are useful, just like regular SQL:
cosmo8 = scan(cosmo8_000970);
countRows = select count(*) as countRows from cosmo8;
store(countRows, garbage);

%%query
-- lets say we want just the number of gas particles
cosmo8 = scan(cosmo8_000970);
c = select count(*) as numGas from cosmo8 where type = 'gas';
store(c, garbage);

%%query
-- some stats about the positions of star particles
cosmo8 = scan(cosmo8_000970);
positionStats = select min(x) as min_x, max(x) as max_x, avg(x) as avg_x, stdev(x) as stdev_x,
                       min(y) as min_y, max(y) as max_y, avg(y) as avg_y, stdev(y) as stdev_y,
                       min(z) as min_z, max(z) as max_z, avg(z) as avg_z, stdev(z) as stdev_z
                from cosmo8 where type = 'star';
store(positionStats, garbage);

# we can also create constants in Python and reference them in MyriaL
low = 50000
high = 100000
destination = 'tempRangeCosmo8'

%%query
-- we can reference Python constants with '@'
cosmo8 = scan(cosmo8_000970);
temps = select iOrder, mass, type, temp from cosmo8
        where temp > @low and temp < @high limit 10;
store(temps, @destination);
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
User Defined Functions

In MyriaL we can define our own functions that will then be applied to the results of a query. These can either be written in Python and registered with Myria, or they can be written directly within a MyriaL cell (but not in Python). When registering a Python function as a UDF, we need to specify the type of the return value. The possible types are the INTERNAL_TYPES defined in raco.types, <a href="https://github.com/uwescience/raco/blob/4b2387aaaa82daaeac6c8960c837a6ccc7d46ff8/raco/types.py">as seen here</a>. Currently a function signature can't be registered more than once. In order to overwrite an existing registered function of the same signature, you have to use the Restart Kernel button in the Jupyter Notebook toolbar.
from raco.types import DOUBLE_TYPE
from myria.udf import MyriaPythonFunction

# each row is passed in as a tuple within a list
def sillyUDF(tuplList):
    row = tuplList[0]
    x = row[0]
    y = row[1]
    z = row[2]
    if (x > y):
        return x + y + z
    else:
        return z

# A python function needs to be registered to be able to
# call it from a MyriaL cell
MyriaPythonFunction(sillyUDF, DOUBLE_TYPE).register()

# To see all functions currently registered
connection.get_functions()

%%query
-- for your queries to run faster, its better to push the UDF to the smallest possible set of data
cosmo8 = scan(cosmo8_000970);
small = select * from cosmo8 limit 10;
res = select sillyUDF(x,y,z) as sillyPyRes from small;
store(res, garbage);

%%query
-- same thing but as a MyriaL UDF
def silly(x,y,z):
    case when x > y then x + y + z else z end;
cosmo8 = scan(cosmo8_000970);
res = select silly(x,y,z) as sillyMyRes from cosmo8 limit 10;
store(res, garbage);

from raco.types import DOUBLE_TYPE

def distance(tuplList):
    # note that libraries used inside the UDF need to be imported inside the UDF
    import math
    row = tuplList[0]
    x1 = row[0]
    y1 = row[1]
    z1 = row[2]
    x2 = row[3]
    y2 = row[4]
    z2 = row[5]
    return math.sqrt((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2)

MyriaPythonFunction(distance, DOUBLE_TYPE).register()

print distance([(.1, .1, .1, .2, .2, .2)])

eps = .0042

%%query
-- here I am trying to find all points within eps distance of a given point
-- in order to avoid the expensive UDF distance() call on every point in the data,
-- I first filter the points by a simpler range query that imitates a bounding box
cosmo8 = scan(cosmo8_000970);
point = select * from cosmo8 where iOrder = 68649;
cube = select c.* from cosmo8 as c, point as p
       where abs(c.x - p.x) < @eps and abs(c.y - p.y) < @eps and abs(c.z - p.z) < @eps;
distances = select c.*, distance(c.x, c.y, c.z, p.x, p.y, p.z) as dist from cube as c, point as p;
res = select * from distances where dist < @eps;
store(res, garbage);

%%query
cosmo8 = scan(cosmo8_000970);
point = select * from cosmo8 where iOrder = 68649;
cube = select c.* from cosmo8 as c, point as p
       where abs(c.x - p.x) < @eps and abs(c.y - p.y) < @eps and abs(c.z - p.z) < @eps;
onlyGases = select * from cube where type = 'gas';
distances = select c.*, distance(c.x, c.y, c.z, p.x, p.y, p.z) as dist from onlyGases as c, point as p;
res = select * from distances where dist < @eps;
store(res, garbage);
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
There is also special syntax for user defined aggregate functions, which use all of the rows to produce a single output, like a Reduce or Fold function pattern:

uda func-name(args) {
    initialization-expr(s);
    update-expr(s);
    result-expr(s);
};

where each of the inner lines is a bracketed statement with an entry for each expression that you want to output.
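The initialization / update / result stages of a UDA map directly onto a plain fold. This is a plain-Python sketch of the argMaxAndMax aggregate used below, just to make the accumulator pattern concrete (it is not Myria code):

```python
def arg_max_and_max(pairs):
    # initialization-expr: the accumulator values
    arg_acc, val_acc = -1, -1.0
    for arg, val in pairs:
        # update-expr: keep whichever (arg, val) has the larger value
        if val >= val_acc:
            arg_acc, val_acc = arg, val
    # result-expr: what the aggregate finally returns
    return arg_acc, val_acc

print(arg_max_and_max([(10, 0.5), (11, 2.5), (12, 1.0)]))  # (11, 2.5)
```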
%%query
-- UDA example using MyriaL functions inside the UDA update line
def pickBasedOnValue2(val1, arg1, val2, arg2):
    case when val1 >= val2 then arg1 else arg2 end;
def maxValue2(val1, val2):
    case when val1 >= val2 then val1 else val2 end;

uda argMaxAndMax(arg, val) {
    [-1 as argAcc, -1.0 as valAcc];
    [pickBasedOnValue2(val, arg, valAcc, argAcc), maxValue2(val, valAcc)];
    [argAcc, valAcc];
};

cosmo8 = scan(cosmo8_000970);
res = select argMaxAndMax(iOrder, vx) from cosmo8;
store(res, garbage);

# Previously when we wrote a UDF we expected the tuplList to only hold one row
# but UDFs that are used in a UDA could be given many rows at a time, so it is
# important to loop over all of them and keep track of the state/accumulator outside
# the loop, and then return the value that is expected by the update-expr line in the UDA.
from raco.types import LONG_TYPE

def pickBasedOnValue(tuplList):
    maxArg = -1
    maxVal = -1.0
    for tupl in tuplList:
        value1 = tupl[0]
        arg1 = tupl[1]
        value2 = tupl[2]
        arg2 = tupl[3]
        if (value1 >= value2):
            if (value1 >= maxVal):
                maxArg = arg1
                maxVal = value1
        else:
            if (value2 >= maxVal):
                maxArg = arg2
                maxVal = value2
    return maxArg

MyriaPythonFunction(pickBasedOnValue, LONG_TYPE).register()

from raco.types import DOUBLE_TYPE

def maxValue(tuplList):
    maxVal = -1.0
    for tupl in tuplList:
        value1 = tupl[0]
        value2 = tupl[1]
        if (value1 >= value2):
            if (value1 >= maxVal):
                maxVal = value1
        else:
            if (value2 >= maxVal):
                maxVal = value2
    return maxVal

MyriaPythonFunction(maxValue, DOUBLE_TYPE).register()

%%query
-- UDA example using Python functions inside the UDA update line
uda argMaxAndMax(arg, val) {
    [-1 as argAcc, -1.0 as valAcc];
    [pickBasedOnValue(val, arg, valAcc, argAcc), maxValue(val, valAcc)];
    [argAcc, valAcc];
};
t = scan(cosmo8_000970);
s = select argMaxAndMax(iOrder, vx) from t;
store(s, garbage);

%%query
-- of course, argMaxAndMax can be done much more simply:
c = scan(cosmo8_000970);
m = select max(vx) as mvx from c;
res = select iOrder, mvx from m, c where vx = mvx;
store(res, garbage);
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
Working with multiple snapshots

On the Myria demo cluster we only provide cosmo8_000970, but on a private cluster we could load any number of snapshots and look at how things change over time.
%%query
c8_000970 = scan(cosmo8_000970);
c8_000962 = scan(cosmo8_000962);
-- finding all gas particles that were destroyed between step 000962 and 000970
c1Gases = select iOrder from c8_000962 where type = 'gas';
c2Gases = select iOrder from c8_000970 where type = 'gas';
exist = select c1.iOrder from c1Gases as c1, c2Gases as c2 where c1.iOrder = c2.iOrder;
destroyed = diff(c1Gases, exist);
store(destroyed, garbage);

%%query
c8_000970 = scan(cosmo8_000970);
c8_000962 = scan(cosmo8_000962);
-- finding all particles where some property changed between step 000962 and 000970
res = select c1.iOrder from c8_000962 as c1, c8_000970 as c2
      where c1.iOrder = c2.iOrder and c1.metals = 0.0 and c2.metals > 0.0;
store(res, garbage);

from IPython.display import HTML
HTML('''<script>
code_show_err=false;
function code_toggle_err() {
    if (code_show_err){
        $('div.output_stderr').hide();
    } else {
        $('div.output_stderr').show();
    }
    code_show_err = !code_show_err
}
$( document ).ready(code_toggle_err);
</script>
To toggle on/off output_stderr, click <a href="javascript:code_toggle_err()">here</a>.''')
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
percentage of samples within one, two, and three standard deviations of the mean
percentage_1 = len(np.where((samples1 >= -1) & (samples1 <= 1))[0]) / N
percentage_2 = len(np.where((samples1 >= -2) & (samples1 <= 2))[0]) / N
percentage_3 = len(np.where((samples1 >= -3) & (samples1 <= 3))[0]) / N

print("between -1 and 1: {}".format(percentage_1))
print("between -2 and 2: {}".format(percentage_2))
print("between -3 and 3: {}".format(percentage_3))
print("mean: {}, \nvariance: {}, \nstandard deviation: {}".format(
    np.mean(samples1), np.var(samples1), np.std(samples1)))
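The empirical fractions above should approach the analytic 68-95-99.7 rule for a standard normal. As a cross-check, a stdlib-only sketch using the error function:

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# expected fraction of samples within k standard deviations
for k in (1, 2, 3):
    frac = phi(k) - phi(-k)
    print("within %d sigma: %.4f" % (k, frac))
```

This prints approximately 0.6827, 0.9545, and 0.9973.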
Trueskill.ipynb
muatik/dm
mit
creating the second sample set
N = 10000
mean = 1
std_dev = 1
samples2 = np.random.normal(mean, std_dev, N)
Trueskill.ipynb
muatik/dm
mit
pairing the two sample sets: the X coordinate of each point is a sample from the first set and the Y coordinate is the corresponding sample from the second set
plt.scatter(samples1, samples2)
plt.plot(np.linspace(-3, 5, 10000), np.linspace(-3, 5, 10000), c="r")
Trueskill.ipynb
muatik/dm
mit
the fraction of samples which lie above the diagonal line where X=Y
len(np.where((samples2 > samples1))[0]) / N
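This empirical fraction can be checked analytically: if X ~ N(0, 1) and Y ~ N(1, 1) are independent, the difference Y - X is N(1, 2), so P(Y > X) = Phi(1/sqrt(2)) ≈ 0.76. A stdlib-only sketch:

```python
import math

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Y - X ~ N(mu=1, var=2)  =>  P(Y > X) = P(D > 0) = Phi(mu / sqrt(var))
p_y_wins = phi(1.0 / math.sqrt(2.0))
print("P(Y > X) = %.4f" % p_y_wins)  # ~= 0.7602
```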
Trueskill.ipynb
muatik/dm
mit
computing yWins numerically, by integrating the distribution of the difference over the region where it is positive
def calc_ywins():
    from scipy.stats import norm
    meanD = 1 - 0
    varD = 1 + 1
    x = np.linspace(-10, 11, 1000)
    dProbs = norm.pdf(x, meanD, scale=varD**(1/2))  # scale is standard deviation
    plt.plot(x, dProbs)
    plt.plot([0, 0], [0, 0.25], 'r-')
    plt.fill_between(x, 0, dProbs, color='cyan', where=x > 0)
    plt.show()
    yWins = [d for (i, d) in enumerate(dProbs) if x[i] > 0]
    print("ywins = ", np.trapz(yWins, dx=x[1]-x[0]))

calc_ywins()
Trueskill.ipynb
muatik/dm
mit
Simple graph analytics for the Twitter stream

For this first step we want:
- top 10 retweeted users
- top 10 PageRanked users
- basic matplotlib viz
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import networkx as nx
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Building the directed graph

We build the retweet graph, where an edge goes from the original tweeter to the retweeter. We add node weights corresponding to how many times each node was retweeted.
graph = nx.DiGraph()
for tweet in data:
    if tweet.get('retweet') == 'Y':
        name = tweet.get('name')
        original_name = tweet.get('original_name')
        followers = tweet.get('followers')
        if name not in graph:
            graph.add_node(name, retweets=0)
        if original_name not in graph:
            graph.add_node(original_name, retweets=1)
        else:
            graph.node[original_name]['retweets'] = graph.node[original_name]['retweets'] + 1
        graph.add_edge(original_name, name)
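The counting logic above can be sketched without networkx, using plain dicts. The tweet records below are hypothetical but mirror the fields the notebook's `data` stream uses (`retweet`, `name`, `original_name`):

```python
# Hypothetical stream of tweet records
data = [
    {'retweet': 'Y', 'name': 'alice', 'original_name': 'carol'},
    {'retweet': 'Y', 'name': 'bob',   'original_name': 'carol'},
    {'retweet': 'N', 'name': 'dave',  'original_name': None},
]

# Count retweets per original author and collect directed edges
retweets = {}
edges = []
for tweet in data:
    if tweet.get('retweet') == 'Y':
        orig = tweet['original_name']
        retweets[orig] = retweets.get(orig, 0) + 1
        edges.append((orig, tweet['name']))  # original tweeter -> retweeter

print(retweets)  # {'carol': 2}
```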
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Most retweeted users
top10_retweets = sorted([(node, graph.node[node]['retweets']) for node in graph.nodes()],
                        key=lambda x: -x[1])[0:10]
top10_retweets
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Top 10 Pageranked users Note - these are the 'archetypal retweeters' of the graph (well, not exactly. see https://en.wikipedia.org/wiki/PageRank)
pr = nx.pagerank(graph)
colors = [pr[node] for node in graph.nodes()]
# sort by descending PageRank so we actually get the top 10
top10_pr = sorted([(k, v) for k, v in pr.items()], key=lambda x: -x[1])[0:10]
label_dict = dict([(k[0], k[0]) for k in top10_pr])
top10_pr
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Basic network viz

- size of nodes is number of retweets
- color of nodes is pagerank
- we only label the top 10 pageranked users
plt.figure(figsize=(11, 11))
plt.axis('off')
weights = [10 * (graph.node[node]['retweets'] + 1) for node in graph.nodes()]
nx.draw_networkx(graph, node_size=weights, width=.1, linewidths=.1,
                 with_labels=True, node_color=colors, cmap='RdYlBu',
                 labels=label_dict)
consumer.close()
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Setting up the inputs

Target inputs (optical constant database, sphere coordinates, composition and size)
# building the optical constant database
eps_db_out = py_gmm.mat.generate_eps_db('../epsilon/', ext='*.edb')
eps_files, eps_names, eps_db = eps_db_out['eps_files'], eps_db_out['eps_names'], eps_db_out['eps_db']

# sphere radius (in nm)
v_r = np.array([40., 40.])

# sphere position (in nm)
m_xyz = np.array([[-42.5, 0., 0.],
                  [42.5, 0., 0.]])

# how many spheres in the target? We guess it from the length of the radius vector
ns = len(v_r)

# sphere composition, calling the names contained in "eps_names", just populated above
target_comp = np.array(['eAgPalikSTDf', 'eAuJCSTDf'])  # vector containing the optical constants names

# refractive index of the environment
n_matrix = 1.33  # water
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Plane wave incident field
# Euler angles: (alpha,beta,gamma)=(0,0,0) means a z-directed, x-polarized plane wave
alpha = 0.0  # azimuth
beta = 0.0   # polar
gamma = 0.0  # polarization

# Wavelengths for the spectrum computation
wl_min = 300
wl_max = 800
n_wl = 250
v_wl = np.linspace(wl_min, wl_max, n_wl)

# Wavelength for the local field computation
v_wl_lf = [430.0, 630]  # resonance wavelengths
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Additional inputs for the simulation
n_stop = 10     # maximum multipolar expansion order
f_int = 0.0     # interaction cutoff (normally set to zero to have full interactions)
lf_ratio = 300  # plot sphere local field contribution up to a distance equal to d=lf_ratio*r_sphere
qs_flag = 'no'  # retarded simulation
n_E = 400       # local field plotting grid resolution
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Target plot
# target plot
fig = plt.figure(num=1, figsize=(10, 10))      # setting the figure size
ax = fig.add_subplot(1, 1, 1, aspect='equal')  # creating the plotting axis

# plot bounds and eliminating x and y ticks
plt.xlim(-1.1*(-m_xyz[0, 0] + v_r[0]), 1.1*(m_xyz[1, 0] + v_r[ns-1]))
plt.ylim(-1.1*(v_r[0]), 1.1*(v_r[0]))
plt.xticks([])
plt.yticks([])

# plotting the target
v_color = ['0.6', 'y']
for c, r, col in zip(m_xyz, v_r, v_color):
    c0 = c[0]
    c1 = c[1]
    ax.add_patch(Circle((c0, c1), r, color=col))
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Local Field
# local field for the resonances
v_field = []
for wl_lf in v_wl_lf:

    # optical constants
    e_list = py_gmm.mat.db_to_eps(wl_lf, eps_db, target_comp)
    m_eps = np.column_stack((np.real(e_list), np.imag(e_list)))

    # gmm coefficients computation
    out = py_gmm.gmm_py.gmm_f2py_module.expansion_coefficients(
        m_xyz,               # target sphere position in nm
        v_r,                 # target sphere radii in nm
        m_eps,               # e1 and e2 for each sphere
        f_int,               # interaction coefficient
        n_matrix,            # environment refractive index
        wl_lf,               # computation wavelength
        alpha, beta, gamma,  # euler angles for the incident pw
        0,                   # =0 Linear, =1 Left Circular, =2 Right Circular
        n_stop,              # maximum number for expansion coefficients
        qs_flag)             # quasi static approximation
    v_amnbmn = out[0][:, 0]  # getting field expansion coefficients
    v_dmncmn = out[0][:, 1]  # local field
    v_emn = py_gmm.gmm_py.gmm_f2py_module.emn(n_stop)[0]  # normalization coeffs

    # building plotting grid
    x_min = -1.5*(v_r[0] - m_xyz[0, 0])
    x_max = 1.5*(v_r[1] + m_xyz[1, 0])
    y_min = -1.5*v_r[0]
    y_max = 1.5*v_r[0]
    v_x = np.linspace(x_min, x_max, n_E)
    v_y = np.linspace(y_min, y_max, n_E)

    # retrieving the local field
    m_E = []
    for x in v_x:
        for y in v_y:
            out = py_gmm.gmm_py.gmm_f2py_module.exyz(
                "yes",                      # include incident local field
                n_stop,                     # maximum number for expansion coefficients
                0,                          # =0 Linear, =1 Left Circular, =2 Right Circular
                lf_ratio,                   # plot sphere contribution up to distance d=lf_ratio*r
                wl_lf,                      # computation wavelength
                alpha, beta, gamma,
                x, y, 0.0,                  # field computation coordinates
                v_amnbmn, v_dmncmn, v_emn,  # expansion and normalization coefficients
                m_xyz, m_eps, v_r,          # sphere position, composition and size
                n_matrix,                   # environment refractive index
                qs_flag)                    # quasi static approximation
            m_E = np.append(m_E, out[3])
    m_E = np.array(m_E).reshape(n_E, n_E)
    v_field.append(m_E)
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Plotting the results

Extinction cross section
# cross section plot
f_size = 25
f_size_ticks = 20

plt.figure(1, figsize=(15, 10))
plt.plot(v_wl, np.sum(v_cext, axis=1), 'k', linewidth=3.0)
plt.plot(v_wl, v_cext[:, 0], '0.6', v_wl, v_cext[:, 1], 'y', linewidth=2.0)

# plt title
plt.title('AuAg dimer', fontsize=f_size)

# axes labels
plt.xlabel(r'wavelength (nm)', fontsize=f_size)
plt.ylabel(r'C$_{ext}$', fontsize=f_size)

# ticks
plt.xticks(fontsize=f_size_ticks)
plt.yticks(fontsize=f_size_ticks)

# legend
plt.legend((r'Integral C$_{ext}$', r'Ag C$_{ext}$', r'Au C$_{ext}$'),
           frameon=False, fontsize=f_size-5)

# layout
plt.tight_layout()
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Local field enhancement
# local field plot
f_size = 25
fig = plt.figure(2, figsize=(14, 10))
v_title = ['High energy resonance', 'Low energy resonance']
for i_E, m_E in enumerate(v_field):
    ax = fig.add_subplot(2, 1, i_E+1, aspect='equal')  # creating the plotting axis
    plt.imshow(m_E.T, origin='lower', cmap='gnuplot2',
               aspect=(y_max-y_min)/(x_max-x_min))

    # remove ticks
    plt.xticks([])
    plt.yticks([])

    # colorbar
    cb = plt.colorbar()
    cb.set_label('|E|', fontsize=f_size-5)
    cb.ax.tick_params(labelsize=f_size-10)
    plt.title(v_title[i_E], fontsize=f_size)
    plt.tight_layout()

    # sphere outlines
    for c, r in zip(m_xyz, v_r):
        # aspect ratios
        x_ar = n_E/(x_max-x_min)
        y_ar = n_E/(y_max-y_min)
        # circle centers
        c0 = x_ar*(c[0]-x_min)
        c1 = y_ar*(c[1]-y_min)
        ax.add_patch(Ellipse((c0, c1), 2.0*r*x_ar, 2.0*r*y_ar,
                             facecolor='none', edgecolor='w', linewidth=2.0))
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Programming Idioms To Make Life Easier

Iterating Lists - Enumerate
my_list = ['hi', 'this', 'is', 'my', 'list']
for i, e in enumerate(my_list):
    print(i, e)
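A small extra that often comes in handy (not covered above): enumerate accepts a start argument, which shifts the index, e.g. for 1-based numbering:

```python
my_list = ['hi', 'this', 'is', 'my', 'list']

# enumerate(iterable, start=1) begins counting at 1 instead of 0
numbered = list(enumerate(my_list, start=1))
print(numbered)  # [(1, 'hi'), (2, 'this'), (3, 'is'), (4, 'my'), (5, 'list')]
```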
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Iterating Lists - Zip
x = range(10)
y = range(10, 20)
for xi, yi in zip(x, y):
    print(xi, yi)
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
List Comprehensions

This allows you to avoid a for loop when you don't want to type one out.
powers = [xi ** 2 for xi in range(4)]
print(powers)

# make y = x^2 quickly
x = [1, 3, 10]
y = [xi ** 2 for xi in x]
for xi, yi in zip(x, y):
    print(xi, yi)

# by convention, use _ to indicate we don't care about a variable
zeros = [0 for _ in range(4)]
print(zeros)

x = [4, 3, 6, 1, 4]
mean_x = sum(x) / len(x)
delta_mean = [(xi - mean_x)**2 for xi in x]
var = sum(delta_mean) / (len(x) - 1)
print(var)
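The hand-rolled sample variance above can be cross-checked against the standard library, which computes the same (n - 1)-denominator sample variance:

```python
import statistics

x = [4, 3, 6, 1, 4]
mean_x = sum(x) / len(x)
var = sum((xi - mean_x)**2 for xi in x) / (len(x) - 1)

# statistics.variance uses the same sample (n - 1) definition
assert abs(var - statistics.variance(x)) < 1e-12
print(var)  # 3.3 (up to floating point)
```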
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Dict Comprehensions
x = [4, 10, 11]
y = ['number of people', 'the number 10', 'another number']

# key: value
my_dict = {yi: xi for xi, yi in zip(x, y)}
print(my_dict)
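A common use of the same idiom (a small addition, not from the original): a dict comprehension can invert a mapping, provided the values are hashable and unique:

```python
my_dict = {'number of people': 4, 'the number 10': 10, 'another number': 11}

# swap keys and values
inverted = {v: k for k, v in my_dict.items()}
print(inverted[10])  # 'the number 10'
```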
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
F Strings

You can simplify string formatting with f strings:
answer = 4
mean = -3
print(f'The answer is {answer} and the mean is {mean:.2f}')
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Getting started

Easy to install: pip install plotly

How to save and view files?
- Can work offline and save as .html files to open in a web browser
- Jupyter notebook
- Upload to online account for easy sharing: import statement automatically signs you in

How It Works

Graph objects:
- Same structure as native Python dictionaries and lists
- Defined as new classes
- Every Plotly plot type has its own graph object, i.e., Scatter, Bar, Histogram
- All information in a Plotly plot is contained in a Figure object, which contains
  - a Data object: stores data and style options, i.e., setting the line color
  - a Layout object: for aesthetic features outside the plotting area, i.e., setting the title
- trace: refers to a set of data meant to be plotted as a whole (like an $x$ and $y$ pairing)

Interactivity is automatic!

Line/Scatter Plots

The following import statements load the three main modules:
# (*) Tools to communicate with Plotly's server import plotly.plotly as py # (*) Useful Python/Plotly tools import plotly.tools as tls # (*) Graph objects to piece together your Plotly plots import plotly.graph_objs as go
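Since graph objects have the same structure as native dictionaries and lists, a figure can be sketched as plain nested dicts, no Plotly required. The keys below mirror the Figure structure described above ('data', 'layout'); the trace contents are invented for illustration:

```python
# A plain-dict sketch of a Plotly-style figure
trace = {'x': [1, 2, 3], 'y': [4, 1, 9], 'mode': 'markers', 'name': 'points'}
fig = {
    'data': [trace],                         # list of traces
    'layout': {'title': 'a minimal figure'}  # style outside the plot area
}
print(fig['data'][0]['name'])  # points
```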
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
The following code will make a simple line and scatter plot:
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N) + 5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N) - 5

# (1.1) Make a 1st Scatter object
trace0 = go.Scatter(
    x=random_x,
    y=random_y0,
    mode='markers',
    name='$\mu = 5$',
    hoverinfo='x+y'  # choosing what to show on hover
)

# (1.2) Make a 2nd Scatter object
trace1 = go.Scatter(
    x=random_x,
    y=random_y1,
    mode='lines+markers',
    name='$\mu = 0$',
    hoverinfo='x+y'
)

# (1.3) Make a 3rd Scatter object
trace2 = go.Scatter(
    x=random_x,
    y=random_y2,
    mode='lines',
    name='$\mu = -5$',
    hoverinfo='x+y'
)

# (2) Make Data object
# Data is list-like, must use [ ]
data = go.Data([trace0, trace1, trace2])

# (3) Make Layout object (Layout is dict-like)
layout = go.Layout(title='$\\text{Some scatter objects distributed as } \mathcal{N}(\mu,1)$',
                   xaxis=dict(title='x-axis label'),
                   yaxis=dict(title='y-axis label'),
                   showlegend=True)

# (4) Make Figure object (Figure is dict-like)
fig = go.Figure(data=data, layout=layout)

print(fig)  # print the figure object in notebook
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Figure objects store data like a Python dictionary.
# (5) Send Figure object to Plotly and show plot in notebook py.iplot(fig, filename='scatter-mode')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Can save a static image as well:
py.image.save_as(fig, filename='scatter-mode.png')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Histograms
# (1) Generate some random numbers
x0 = np.random.randn(500)
x1 = np.random.randn(500) + 1

# (2.1) Create the first Histogram object
trace1 = go.Histogram(
    x=x0,
    histnorm='count',
    name='control',
    autobinx=False,
    xbins=dict(
        start=-3.2,
        end=2.8,
        size=0.2
    ),
    marker=dict(
        color='fuchsia',
        line=dict(
            color='grey',
            width=0
        )
    ),
    opacity=0.75
)

# (2.2) Create the second Histogram object
trace2 = go.Histogram(
    x=x1,
    name='experimental',
    autobinx=False,
    xbins=dict(
        start=-1.8,
        end=4.2,
        size=0.2
    ),
    marker=dict(
        color='rgb(255, 217, 102)'
    ),
    opacity=0.75
)

# (3) Create Data object
data = [trace1, trace2]

# (4) Create Layout object
layout = go.Layout(
    title='Sampled Results',
    xaxis=dict(
        title='Value'
    ),
    yaxis=dict(
        title='Count'
    ),
    barmode='overlay',
    bargap=0.25,
    bargroupgap=0.3,
    showlegend=True
)

fig = go.Figure(data=data, layout=layout)

# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='histogram_example')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Distplots

Similar to seaborn.distplot. Plot a histogram, kernel density or normal curve, and a rug plot all together.
from plotly.tools import FigureFactory as FF

# Add histogram data
x1 = np.random.randn(200) - 2
x2 = np.random.randn(200)
x3 = np.random.randn(200) + 2
x4 = np.random.randn(200) + 4

# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']

# Create distplot with custom bin_size
fig = FF.create_distplot(hist_data, group_labels, bin_size=.2)

# Plot!
py.iplot(fig, filename='Distplot with Multiple Datasets', validate=False)
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
2D Contour Plot
x = np.random.randn(1000)
y = np.random.randn(1000)

py.iplot([go.Histogram2dContour(x=x, y=y,
                                contours=go.Contours(coloring='fill')),
          go.Scatter(x=x, y=y, mode='markers',
                     marker=go.Marker(color='white', size=3, opacity=0.3))])
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
3D Surface Plot

Plot the function: $f(x,y) = A \cos^2(\pi x y) e^{-(x^2+y^2)/2}$
# Define the function to be plotted
def fxy(x, y):
    A = 1  # choose a maximum amplitude
    return A*(np.cos(np.pi*x*y))**2 * np.exp(-(x**2 + y**2)/2.)

# Choose length of square domain, make row and column vectors
L = 4
x = y = np.arange(-L/2., L/2., 0.1)  # use a mesh spacing of 0.1
yt = y[:, np.newaxis]  # (!) make column vector

# Get surface coordinates!
z = fxy(x, yt)

trace1 = go.Surface(
    z=z,  # link the fxy 2d numpy array
    x=x,  # link 1d numpy array of x coords
    y=y   # link 1d numpy array of y coords
)

# Package the trace dictionary into a data object
data = go.Data([trace1])

# Dictionary of style options for all axes
axis = dict(
    showbackground=True,                   # (!) show axis background
    backgroundcolor="rgb(204, 204, 204)",  # set background color to grey
    gridcolor="rgb(255, 255, 255)",        # set grid line color
    zerolinecolor="rgb(255, 255, 255)",    # set zero grid line color
)

# Make a layout object
layout = go.Layout(
    title='$f(x,y) = A \cos(\pi x y) e^{-(x^2+y^2)/2}$',  # set plot title
    scene=go.Scene(  # (!) axes are part of a 'scene' in 3d plots
        xaxis=go.XAxis(axis),  # set x-axis style
        yaxis=go.YAxis(axis),  # set y-axis style
        zaxis=go.ZAxis(axis)   # set z-axis style
    )
)

# Make a figure object
fig = go.Figure(data=data, layout=layout)

# (@) Send to Plotly and show in notebook
py.iplot(fig, filename='surface')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Matplotlib Conversion
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab

n = 50
x, y, z, s, ew = np.random.rand(5, n)
c, ec = np.random.rand(2, n, 4)
area_scale, width_scale = 500, 5

fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=c, s=np.square(s)*area_scale,
                edgecolor=ec, linewidth=ew*width_scale)
ax.grid()

py.iplot_mpl(fig)
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Basics

Ways of specifying curves

Implicit form

The implicit form specifies the points that make up a curve as a test which decides whether a given point lies on the curve. In two dimensions, the implicit form can be written as

$$ f(x, y) = 0, $$

an equation that the points of the curve satisfy; here $f$ is an arbitrary real-valued function. For example, the circle of radius $r$ centered at the origin is written in implicit form with

$$ f(x, y) = x^2 + y^2 - r^2. $$

Parametric form

The parametric specification of a curve is a mapping between some parameter domain and the points of the curve. The parametric form is a function that assigns positions on the curve to values of the parameter. Imagine drawing a curve on paper with a pencil: the parameter can be thought of as time, and the parameter domain as the start and end of the drawing. The parametric form then tells us where the pencil was at a given moment:

$$ (x, y) = f(t). $$

Note that, unlike in the implicit form, $f$ is now a vector-valued function. In parametric form, the circle of radius $r$ centered at the origin can be described as

$$ f(t) = (r\cos t, r\sin t), \qquad t \in [0, 2\pi). $$

In the remainder of these notes we assume the parametric form.

Procedural form

The procedural or generative form covers any method outside the previous two groups by which curve points can be generated, for example the various subdivision schemes.

Control points

To specify a curve we generally need so-called control points, which determine the shape the curve takes. If the curve passes through a control point, we say it interpolates that point; otherwise it approximates it. Since the shape of the curve is determined by its control points, we influence the curve by manipulating them.

Interpolation

Suppose we are given the control points $p_0, p_1, \ldots, p_n$.
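The two representations above can be checked against each other numerically. The sketch below is not part of the original notes; it verifies that every point produced by the parametric form of the circle satisfies the implicit equation:

```python
import math

# The same circle of radius r, once as an implicit test, once as a mapping.
r = 2.0

def implicit(x, y):
    # f(x, y) = 0 exactly for points on the circle
    return x**2 + y**2 - r**2

def parametric(t):
    return (r * math.cos(t), r * math.sin(t))

# Every point produced by the parametric form passes the implicit test.
for k in range(8):
    x, y = parametric(2 * math.pi * k / 8)
    assert abs(implicit(x, y)) < 1e-12
print("parametric points satisfy the implicit equation")
```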
In the case of interpolation we seek a curve $f(t)$ that passes through these points. That is, in the domain of the parameter $t$ there are values $t_0, t_1, \ldots, t_n$ such that

$$
\begin{align}
f(t_0) &= p_0 \\
f(t_1) &= p_1 \\
&\vdots \\
f(t_n) &= p_n
\end{align}
$$

Continuity

A frequently occurring problem is that we have more than one curve (curve segment) and want to join them in some way. How the curve segments meet when joined is characterized by continuity, a property we examine at the junction point.

Mathematical continuity

$C^0$ mathematical continuity

$C^0$ mathematical continuity simply means that the curves are joined at their endpoints. That is, if we have a curve given by a function $f(t)$ with parameter domain $[t_1, t_2]$ and a curve given by a function $g(u)$ with parameter domain $[u_1, u_2]$, then

$$ f(t_2) = g(u_1). $$
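The interpolation conditions $f(t_i) = p_i$ above can be demonstrated concretely. The sketch below is not from the notes; it builds an interpolating curve with Lagrange basis polynomials — one possible choice among many — and checks the conditions:

```python
import numpy as np

# Control points p_0..p_3 and parameter values t_0..t_3 (illustrative).
points = np.array([[0., 0.], [1., 2.], [2., 1.], [3., 3.]])
ts = np.linspace(0., 1., len(points))

def f(t):
    # Lagrange form: f(t) = sum_i L_i(t) * p_i, with L_i(t_j) = delta_ij
    result = np.zeros(2)
    for i in range(len(points)):
        L = 1.0
        for j in range(len(points)):
            if j != i:
                L *= (t - ts[j]) / (ts[i] - ts[j])
        result += L * points[i]
    return result

# f(t_i) = p_i for every control point
for ti, pi in zip(ts, points):
    assert np.allclose(f(ti), pi)
print("interpolation conditions hold")
```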
addScript("js/c0-parametric-continuity", "c0-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
$C^1$ mathematical continuity

In this case the first derivatives of the curve segments (the tangent vectors of the curves) agree at the junction point. For the curves described by the previous $f(t)$ and $g(u)$,

$$ f^\prime(t_2) = g^\prime(u_1). $$

If $C^1$ continuity does not hold, a sharp break can be observed at the junction point.
addScript("js/c1-parametric-continuity", "c1-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
$C^2$ mathematical continuity

In the case of $C^2$ mathematical continuity the second derivatives of the curves agree at the junction point:

$$ f^{\prime\prime}(t_2) = g^{\prime\prime}(u_1). $$

In the absence of $C^2$ continuity there is no break at the junction point, but the shape of the curve may change abruptly.
addScript("js/c2-parametric-continuity", "c2-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Geometric continuity

$G^0$ geometric continuity

This means the same as $C^0$ mathematical continuity: the curve segments are joined.

$G^1$ geometric continuity

$G^1$ geometric continuity means that the tangent vectors of the two joining curve segments at the junction point have different magnitudes but the same direction. That is,

$$ f^{\prime}(t_2) = k \cdot g^{\prime}(u_1), $$

where $k > 0$ is a real number.
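The difference between $G^1$ and $C^1$ can be made concrete with two tangent vectors at a junction point. The values below are illustrative, not taken from the notes:

```python
import numpy as np

# Tangent vectors at the junction: same direction, different magnitudes.
f_tangent = np.array([1.0, 2.0])   # f'(t2)
g_tangent = np.array([3.0, 6.0])   # g'(u1)

k = f_tangent[0] / g_tangent[0]
assert k > 0 and np.allclose(f_tangent, k * g_tangent)  # G^1 holds
assert not np.allclose(f_tangent, g_tangent)            # C^1 does not
print("G^1 continuity without C^1 continuity")
```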
addScript("js/g1-geometric-continuity", "g1-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Relationship between mathematical and geometric continuity

Mathematical continuity is stricter than geometric continuity, since $n$-th order mathematical continuity requires the equality of the $n$-th derivatives. Consequently, if two curves join with $C^n$ mathematical continuity, the join is also of $G^n$ geometric continuity.

The parametric form

If we have the control points that influence the shape of the curve, and we know the degree of the polynomial we want to describe the curve with, we can write down the parametric form. We can do this in three ways:

- we specify conditions that the curve (i.e., the function describing it) must satisfy, or
- we give a characteristic matrix that describes the curve, or
- we give the weight functions (basis functions) from which the curve can be built.

The three formulations are of course equivalent, but each has its own advantages. Let us look at an example of each!

Conditional form

Let the parametric function describing the curve be $f(t)$, where $t \in [0, 1]$. Suppose we are given $4$ control points, $p_1, p_2, p_3, p_4$, and that we want to construct a cubic curve. Suppose further that $f(t)$ must satisfy the following conditions:

$$
\begin{align}
f(0) &= p_1 \\
f(1) &= p_4 \\
f^{\prime}(0) &= 3(p_2 - p_1) \\
f^{\prime}(1) &= 3(p_4 - p_3)
\end{align}
$$

With these conditions — on the values taken at the beginning and end of the parameter domain — we have specified the curve uniquely.

Polynomial form

Let us write down the polynomial representation of the curve given by the previous conditions. We know that we are looking for a cubic polynomial satisfying the above conditions. First write the polynomial in the form

$$ f(t) = \sum\limits_{i=1}^{n}b_i(t) \cdot p_i, $$

where $b_i(t)$ is the $i$-th weight function. These weight functions specify, for a given element $t$ of the parameter domain, what role the originally given geometric conditions (the control points $p_i$) play.
So, for every value of $t$, $f(t)$ produces a linear combination of the control points. The general form of a weight function $b_i$ (in the cubic case) is:

$$ b_i(t) = a_i \cdot t^3 + b_i \cdot t^2 + c_i \cdot t + d_i $$

For the curve given by the previous conditions, the concrete polynomials $b_i$ are:

$$
\begin{align}
b_1(t) &= -t^3 + 3t^2 - 3t + 1 \\
b_2(t) &= 3t^3 - 6t^2 + 3t \\
b_3(t) &= -3t^3 + 3t^2 \\
b_4(t) &= t^3
\end{align}
$$

Matrix form

A curve given in polynomial form can easily be rewritten in matrix form. Let us work with the polynomials written above; do not forget that we are working with a cubic curve. Let $T(t)$ be a $4\times 1$ parameter matrix:

$$ T(t) = \begin{bmatrix} t^3 \\ t^2 \\ t \\ 1 \end{bmatrix} $$

Let $M$ be the coefficient matrix, formed from the coefficients of the individual weight functions:

$$ M = \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \end{bmatrix} $$

That is, in the case of the previous example:

$$ M = \begin{bmatrix} -1 & 3 & -3 & 1 \\ 3 & -6 & 3 & 0 \\ -3 & 3 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} $$

Finally, write down the matrix $G$ of the geometric conditions:

$$ G = \begin{bmatrix} p_1 & p_2 & p_3 & p_4 \end{bmatrix} $$

The columns of $G$ contain the corresponding coordinates of the control points. Then $f(t)$ can be written in the form

$$ f(t) = GMT(t). $$

If we first multiply the matrices $M$ and $T(t)$, we obtain the basis functions written above. Multiplying each of these by the corresponding control point then gives the linear combination of the control points.

Sources

- Schwarcz Tibor (2005). Bevezetés a számítógépi grafikába. pp. 48-52., https://gyires.inf.unideb.hu/mobiDiak/Schwarcz-Tibor/Bevezetes-a-szamitogepi-grafikaba/bevgraf.pdf
- D. D. Hearn, M. P. Baker, W. Carithers (2014). Computer Graphics With OpenGL, Fourth Edition, pp. 409-414.
- P. Shirley, S. Marschner (2009). Fundamentals of Computer Graphics, Third Edition, pp. 339-348.
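The equivalence of the matrix form $f(t) = GMT(t)$ and the basis-function form can be verified numerically. The control points below are arbitrary illustrative values:

```python
import numpy as np

# Coefficient matrix M from the notes (rows are the b_i coefficients).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  3,  0, 0],
              [ 1,  0,  0, 0]], dtype=float)

# Four arbitrary 2D control points as the columns of G.
G = np.array([[0.0, 1.0, 3.0, 4.0],
              [0.0, 2.0, 2.0, 0.0]])

def basis(t):
    # The explicit weight functions b_1..b_4 from the notes.
    return np.array([-t**3 + 3*t**2 - 3*t + 1,
                     3*t**3 - 6*t**2 + 3*t,
                     -3*t**3 + 3*t**2,
                     t**3])

# G M T(t) agrees with sum_i b_i(t) p_i for any t in [0, 1].
for t in np.linspace(0, 1, 5):
    T = np.array([t**3, t**2, t, 1.0])
    assert np.allclose(G @ M @ T, G @ basis(t))

# The endpoint conditions hold as well: f(0) = p_1 and f(1) = p_4.
assert np.allclose(G @ M @ np.array([0., 0., 0., 1.]), G[:, 0])
assert np.allclose(G @ M @ np.array([1., 1., 1., 1.]), G[:, 3])
print("matrix form and basis-function form agree")
```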
def styling(): styles = open("../../styles/custom.html", "r").read() return HTML(styles) styling()
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Models support dimension inference from data. You can defer some or all of the dimensions.
model = Linear(init_W=zero_init)
print(f"Initialized model with no input/output dimensions.")
X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
examples/01_intro_model_definition_methods.ipynb
spacy-io/thinc
mit
Simulate a random tree with 20 tips and crown age of 10M generations
tree = toytree.rtree.bdtree(ntips=20, seed=555) tree = tree.mod.node_scale_root_height(10e6) tree.draw(scalebar=True);
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Simulate SNPs with missing data and write to database (.seqs.hdf5)
# init simulator with one diploid sample from each tip model = ipcoal.Model(tree, Ne=1e6, nsamples=2, recomb=0) # simulate sequence data on 10K loci model.sim_loci(10000, 50) # add missing data (50%) model.apply_missing_mask(0.5) # write results to database file model.write_snps_to_hdf5(name="test-tet-miss50", outdir='/tmp', diploid=True)
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Infer tetrad tree
SNPS = "/tmp/test-tet-miss50.snps.hdf5" tet = ipa.tetrad( data=SNPS, name="test-tet-miss50", workdir="/tmp", nboots=10, nquartets=1e6, ) tet.run(auto=True, force=True)
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Draw the inferred tetrad tree
tre = toytree.tree(tet.trees.cons) rtre = tre.root(["r19", "r18", "r17"]) rtre.draw(ts='d', use_edge_lengths=False, node_labels="support");
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Does this tree match the true species tree?
rfdist = rtre.treenode.robinson_foulds(tree.treenode)[0] rfdist == 0
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Variable config

- api_key: the key used to authenticate with the DarkSky API; get your own at darksky.net/dev
- latitude/longitude: Geopy-derived geographical coordinates of your specified location
- base_url: the base URL used for GET requests to the DarkSky API, containing your api_key and location information
config = ConfigParser.RawConfigParser() config.read('synchronization.cfg') api_key = config.get('Darksky', 'api_key') geolocator = Nominatim() location = geolocator.geocode('Muntstraat 10 Leuven') latitude = location.latitude longitude = location.longitude base_url = config.get('Darksky', 'base_url') + api_key \ + '/' + str(latitude) + ',' + str(longitude) + ','
Basic_setup.ipynb
mdeloge/DarkSky
mit
URL builder Using this function a list is made that can later be iterated over to fetch a large amount of weather data in a fixed date range
def url_builder(start_date, end_date=dt.datetime.now()): url_list = [] delta = end_date - start_date for counter in range(delta.days): timestamp = str(time.mktime((start_date + dt.timedelta(days=counter)).timetuple()))[:-2] if os.path.isfile('local_data/full_data_' + timestamp + '.json'): continue full_url = base_url + timestamp url_list.append(full_url) return url_list url_list = url_builder(dt.datetime(2017,6,12)) len(url_list)
Basic_setup.ipynb
mdeloge/DarkSky
mit
Fahrenheit to Celsius

The API returns temperatures in Fahrenheit, so we convert them to Celsius.
def f_t_c(fahrenheit): return (((fahrenheit - 32) * 5.0) / 9.0)
Basic_setup.ipynb
mdeloge/DarkSky
mit
Fetch JSON data from URL

The JSON data is stored locally, so all future code can use the local files and doesn't require any remote API calls.
def fetch_and_store_json(url):
    request = None
    while request is None:  # retry until the GET succeeds
        try:
            request = requests.get(url=url, timeout=10)
        except ReadTimeout:
            print "Read timeout, retrying"
    content = json.loads(request.content)
    storage = open('local_data/full_data_' + url.split(',')[2] + '.json', 'w')
    #storage.write(json.dumps(content)) # for a BigQuery-ready JSON file
    storage.write(json.dumps(content, separators=(',', ': '), indent=5)) # for clean indentation
    storage.close()

for url in tqdm(url_list):
    fetch_and_store_json(url)
Basic_setup.ipynb
mdeloge/DarkSky
mit
We want a bigger batch size as our data is not balanced.
AUTOTUNE = tf.data.AUTOTUNE GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e" BATCH_SIZE = 64 IMAGE_SIZE = [1024, 1024]
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Load the data
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec") split_ind = int(0.9 * len(FILENAMES)) TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:] TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec") print("Train TFRecord Files:", len(TRAINING_FILENAMES)) print("Validation TFRecord Files:", len(VALID_FILENAMES)) print("Test TFRecord Files:", len(TEST_FILENAMES))
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Decoding the data

The images have to be converted to tensors so that they are valid inputs to our model. As the images use the RGB scale, we specify 3 channels. We also reshape our data so that all of the images will be the same shape.
def decode_image(image): image = tf.image.decode_jpeg(image, channels=3) image = tf.cast(image, tf.float32) image = tf.reshape(image, [*IMAGE_SIZE, 3]) return image
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
As we load in our data, we need both our X and our Y. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
def read_tfrecord(example, labeled): tfrecord_format = ( { "image": tf.io.FixedLenFeature([], tf.string), "target": tf.io.FixedLenFeature([], tf.int64), } if labeled else {"image": tf.io.FixedLenFeature([], tf.string),} ) example = tf.io.parse_single_example(example, tfrecord_format) image = decode_image(example["image"]) if labeled: label = tf.cast(example["target"], tf.int32) return image, label return image
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Define loading methods Our dataset is not ordered in any meaningful way, so the order can be ignored when loading our dataset. By ignoring the order and reading files as soon as they come in, it will take a shorter time to load the data.
def load_dataset(filenames, labeled=True): ignore_order = tf.data.Options() ignore_order.experimental_deterministic = False # disable order, increase speed dataset = tf.data.TFRecordDataset( filenames ) # automatically interleaves reads from multiple files dataset = dataset.with_options( ignore_order ) # uses data as soon as it streams in, rather than in its original order dataset = dataset.map( partial(read_tfrecord, labeled=labeled), num_parallel_calls=AUTOTUNE ) # returns a dataset of (image, label) pairs if labeled=True or just images if labeled=False return dataset
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
We define the following function to get our different datasets.
def get_dataset(filenames, labeled=True): dataset = load_dataset(filenames, labeled=labeled) dataset = dataset.shuffle(2048) dataset = dataset.prefetch(buffer_size=AUTOTUNE) dataset = dataset.batch(BATCH_SIZE) return dataset
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Visualize input images
train_dataset = get_dataset(TRAINING_FILENAMES) valid_dataset = get_dataset(VALID_FILENAMES) test_dataset = get_dataset(TEST_FILENAMES, labeled=False) image_batch, label_batch = next(iter(train_dataset)) def show_batch(image_batch, label_batch): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n] / 255.0) if label_batch[n]: plt.title("MALIGNANT") else: plt.title("BENIGN") plt.axis("off") show_batch(image_batch.numpy(), label_batch.numpy())
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Building our model Define callbacks The following function allows for the model to change the learning rate as it runs each epoch. We can use callbacks to stop training when there are no improvements in the model. At the end of the training process, the model will restore the weights of its best iteration.
initial_learning_rate = 0.01 lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay( initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True ) checkpoint_cb = tf.keras.callbacks.ModelCheckpoint( "melanoma_model.h5", save_best_only=True ) early_stopping_cb = tf.keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True )
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Build our base model

Transfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found here.

We do not want our metric to be accuracy because our data is imbalanced. Instead, we will be looking at the area under a ROC curve.
def make_model(): base_model = tf.keras.applications.Xception( input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet" ) base_model.trainable = False inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3]) x = tf.keras.applications.xception.preprocess_input(inputs) x = base_model(x) x = tf.keras.layers.GlobalAveragePooling2D()(x) x = tf.keras.layers.Dense(8, activation="relu")(x) x = tf.keras.layers.Dropout(0.7)(x) outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x) model = tf.keras.Model(inputs=inputs, outputs=outputs) model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule), loss="binary_crossentropy", metrics=tf.keras.metrics.AUC(name="auc"), ) return model
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Train the model
with strategy.scope(): model = make_model() history = model.fit( train_dataset, epochs=2, validation_data=valid_dataset, callbacks=[checkpoint_cb, early_stopping_cb], )
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Predict results We'll use our model to predict results for our test dataset images. Values closer to 0 are more likely to be benign and values closer to 1 are more likely to be malignant.
def show_batch_predictions(image_batch): plt.figure(figsize=(10, 10)) for n in range(25): ax = plt.subplot(5, 5, n + 1) plt.imshow(image_batch[n] / 255.0) img_array = tf.expand_dims(image_batch[n], axis=0) plt.title(model.predict(img_array)[0]) plt.axis("off") image_batch = next(iter(test_dataset)) show_batch_predictions(image_batch)
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Show finite length capacity estimates for some codes of different lengths $n$
esno_dB_range = np.linspace(-4,3,100) esno_lin_range = [10**(esno_db/10) for esno_db in esno_dB_range] # compute sigma_n sigman_range = [np.sqrt(1/2/esno_lin) for esno_lin in esno_lin_range] capacity_BIAWGN = [C_BIAWGN(sigman) for sigman in sigman_range] Pe_BIAWGN_r12_n100 = [get_Pe_finite_length(100, 0.5, sigman) for sigman in sigman_range] Pe_BIAWGN_r12_n500 = [get_Pe_finite_length(500, 0.5, sigman) for sigman in sigman_range] Pe_BIAWGN_r12_n1000 = [get_Pe_finite_length(1000, 0.5, sigman) for sigman in sigman_range] Pe_BIAWGN_r12_n5000 = [get_Pe_finite_length(5000, 0.5, sigman) for sigman in sigman_range] fig = plt.figure(1,figsize=(12,7)) plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n100) plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n500) plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n1000) plt.semilogy(esno_dB_range, Pe_BIAWGN_r12_n5000) plt.axvspan(-4, -2.83, alpha=0.5, color='gray') plt.axvline(x=-2.83, color='k') plt.ylim((1e-8,1)) plt.xlim((-4,2)) plt.xlabel('$E_s/N_0$ (dB)', fontsize=16) plt.ylabel('$P_e$', fontsize=16) plt.legend(['$n = 100$', '$n=500$','$n=1000$', '$n=5000$'], fontsize=16) plt.text(-3.2, 1e-4, 'Capacity limit', {'color': 'k', 'fontsize': 20, 'rotation': -90}) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.grid(True) #plt.savefig('BI_AWGN_Pe_R12.pdf',bbox_inches='tight')
ccgbc/ch2_Codes_Basic_Concepts/BiAWGN_Capacity_Finitelength.ipynb
kit-cel/wt
gpl-2.0
A different representation: for a given channel (here we pick $E_s/N_0 = -2.83$ dB), we show the maximum rate a code of a certain length $n$ may have so that decoding with a target error rate $P_e$ (for several specified values of $P_e$) is still possible.
#specify esno esno = -2.83 n_range = np.linspace(10,2000,100) sigman = np.sqrt(0.5*10**(-esno/10)) C = C_BIAWGN(sigman) V = V_BIAWGN_GH(C, sigman) r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) + 0.5*np.log2(n)/n for n in n_range] r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) + 0.5*np.log2(n)/n for n in n_range] r_Pe_1em9 = [C - np.sqrt(V/n)*norm.isf(1e-9) + 0.5*np.log2(n)/n for n in n_range] fig = plt.figure(1,figsize=(12,7)) plt.plot(n_range, r_Pe_1em3) plt.plot(n_range, r_Pe_1em6) plt.plot(n_range, r_Pe_1em9) plt.axhline(y=C, color='k') plt.ylim((0,0.55)) plt.xlim((0,2000)) plt.xlabel('Length $n$', fontsize=16) plt.ylabel('Rate $r$ (bit/channel use)', fontsize=16) plt.legend(['$P_e = 10^{-3}$', '$P_e = 10^{-6}$','$P_e = 10^{-9}$', '$C$'], fontsize=16) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.grid(True) #plt.savefig('BI_AWGN_r_esno_m283.pdf',bbox_inches='tight')
ccgbc/ch2_Codes_Basic_Concepts/BiAWGN_Capacity_Finitelength.ipynb
kit-cel/wt
gpl-2.0
Moral

For sparse data representations
- the computational time is smaller
- the computational efficiency is also smaller!

Possible solutions
- Use blocking
- Use block matrix-by-vector products (multiply several vectors at once)

What are FastPDE methods about?
- They are typically methods for large sparse linear systems
- These systems have certain additional structure, i.e. they are not random sparse matrices (for example, not an adjacency matrix of a Facebook graph, although some algorithms can be reused)
- The next lecture considers methods to solve large sparse linear systems

FEniCS demo
%matplotlib inline from __future__ import print_function import fenics import matplotlib.pyplot as plt import dolfin import mshr import math domain_vertices = [dolfin.Point(0.0, 0.0), dolfin.Point(10.0, 0.0), dolfin.Point(10.0, 2.0), dolfin.Point(8.0, 2.0), dolfin.Point(7.5, 1.0), dolfin.Point(2.5, 1.0), dolfin.Point(2.0, 4.0), dolfin.Point(0.0, 4.0), dolfin.Point(0.0, 0.0)] p = mshr.Polygon(domain_vertices); rect_mesh = mshr.generate_mesh(p, 20) fenics.plot(rect_mesh) V = fenics.FunctionSpace(rect_mesh, 'P', 1) u_D = fenics.Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2) def boundary(x, on_boundary): return on_boundary bc = fenics.DirichletBC(V, u_D, boundary) u = fenics.TrialFunction(V) v = fenics.TestFunction(V) f = fenics.Constant(-6.0) # Or f = Expression(’-6’, degree=0) # Left-hand side a = fenics.dot(fenics.grad(u), fenics.grad(v))*fenics.dx # Right-hand side L = f*v*fenics.dx u = fenics.Function(V) fenics.solve(a == L, u, bc) fenics.plot(u) error_L2 = fenics.errornorm(u_D, u, 'L2') print("Error in L2 norm = {}".format(error_L2)) error_H1 = fenics.errornorm(u_D, u, 'H1') print("Error in H1 norm = {}".format(error_H1))
lectures/Lecture-6.ipynb
oseledets/fastpde2017
mit
FVM demo using FiPy

To install it, run (for Python 2):

conda create --name <MYFIPYENV> --channel guyer --channel conda-forge fipy nomkl
%matplotlib inline
import fipy

cellSize = 0.05
radius = 1.

mesh = fipy.Gmsh2D('''
                   cellSize = %(cellSize)g;
                   radius = %(radius)g;
                   Point(1) = {0, 0, 0, cellSize};
                   Point(2) = {-radius, 0, 0, cellSize};
                   Point(3) = {0, radius, 0, cellSize};
                   Point(4) = {radius, 0, 0, cellSize};
                   Point(5) = {0, -radius, 0, cellSize};
                   Circle(6) = {2, 1, 3};
                   Circle(7) = {3, 1, 4};
                   Circle(8) = {4, 1, 5};
                   Circle(9) = {5, 1, 2};
                   Line Loop(10) = {6, 7, 8, 9};
                   Plane Surface(11) = {10};
                   ''' % locals())

phi = fipy.CellVariable(name="solution variable", mesh=mesh, value=0.)
viewer = fipy.Viewer(vars=phi, datamin=-1., datamax=1.)

D = 1.
eq = fipy.TransientTerm() == fipy.DiffusionTerm(coeff=D)

X, Y = mesh.faceCenters
phi.constrain(X, mesh.exteriorFaces)

timeStepDuration = 10 * 0.9 * cellSize**2 / (2 * D)
steps = 10
for step in range(steps):
    eq.solve(var=phi, dt=timeStepDuration)
    if viewer is not None:
        viewer.plot()
lectures/Lecture-6.ipynb
oseledets/fastpde2017
mit
Train Model

We run 10,000 times through our data, and every 500 epochs of training we output what the model considers a natural continuation of the sentence "the":
# train:
for i in range(10000):
    error = model.update_fun(numerical_lines, numerical_lengths)
    if i % 100 == 0:
        print("epoch %(epoch)d, error=%(error).2f" % ({"epoch": i, "error": error}))
    if i % 500 == 0:
        print(vocab(model.greedy_fun(vocab.word2index["the"])))
Tutorial.ipynb
caiyunapp/theano_lstm
bsd-3-clause
Load default settings
active_ws = core.WorkSpace() active_ws.show_settings() active_ws.first_filter.set_filter('TYPE_AREA', '2') active_ws.show_settings()
.ipynb_checkpoints/lv_notebook_workspace_test-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Select Directories and file paths
def return_input(value):
    return value

start_year = interactive(return_input, value = widgets.Dropdown(
    options=[2009, 2010, 2011, 2012, 2013],
    value=2009,
    description='Select start year:',
    disabled=False)
)
end_year = interactive(return_input, value = widgets.Dropdown(
    options=[2011, 2012, 2013, 2014, 2015, 2016],
    value=2015,
    description='Select end year:',
    disabled=False)
)
from IPython.display import display
display(start_year, end_year)
print(start_year.result, end_year.result)

test_widget = core.jupyter_eventhandlers.MultiCheckboxWidget(['Bottenfauna', 'Växtplankton','Siktdjup','Näringsämnen'])
test_widget # Display the widget

if __name__ == '__main__':
    nr_marks = 60
    print('='*nr_marks)
    print('Running module "lv_test_file.py"')
    print('-'*nr_marks)
    print('')

    #root_directory = os.path.dirname(os.path.abspath(__file__)) # works in scripts
    root_directory = os.getcwd() # works in notebooks
    resources_directory = root_directory + '/resources'
    filter_directory = root_directory + '/workspaces/default/filters'
    data_directory = root_directory + '/workspaces/default/data'

    # est_core.StationList(root_directory + '/test_data/Stations_inside_med_typ_attribute_table_med_delar_av_utsjö.txt')
    core.ParameterList()

    #--------------------------------------------------------------------------
    print('{}\nSet directories and file paths'.format('*'*nr_marks))
    raw_data_file_path = data_directory + '/raw_data/data_BAS_2000-2009.txt'
    first_filter_data_directory = data_directory + '/filtered_data'
    first_data_filter_file_path = filter_directory + '/selection_filters/first_data_filter.txt'
    winter_data_filter_file_path = filter_directory + '/selection_filters/winter_data_filter.txt'
    summer_data_filter_file_path = filter_directory + '/selection_filters/summer_data_filter.txt'
    tolerance_filter_file_path = filter_directory + '/tolerance_filters/tolerance_filter_template.txt'
.ipynb_checkpoints/lv_notebook_workspace_test-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Step 2: Doing a thing to the data

Here we should enumerate the goals of the pre-processing steps that need to be taken initially. Whether this is organization or documentation of the data, or computing some transformation, this step generally takes the fresh, "raw"-ish data you provided and the user is expected to have, and sets it up so that in the third step they can do real processing.

As an example, in the case of ndstore, when creating datasets/projects/channels we need to learn the following features about our data prior to beginning:
- {x, y, z} image size
- time range
- data type
- window range

An external link to documentation which explains things, like this one for the above example, is always helpful for users who wish to have more than the superficial and functional picture you're currently providing.

We, again, should have a code block that does some analysis. The one below gets some items from that list above.
import os
import os.path as op
import numpy as np
import scipy.misc as scm

# `im` and `datadir` are defined in the earlier steps of this tutorial
files = os.listdir(datadir)  # get a list of all files in the dataset

print 'X image size: ', im.shape[1]  # second dimension is X in our png
print 'Y image size: ', im.shape[0]  # first dimension is Y in our png
print 'Z image size: ', len(files)   # we get Z by counting the number of images in our directory
print 'Time range: (0, 0)'           # default value if the data is not time series

dtype = im.dtype
print 'Data type: ', dtype

try:
    im_min = np.iinfo(dtype).max
    im_max = np.iinfo(dtype).min
except:
    im_min = np.finfo(dtype).max
    im_max = np.finfo(dtype).min

for f in files:  # get range by checking each slice min and max
    temp_im = scm.imread(op.join(datadir, f))
    im_min = np.min(temp_im) if np.min(temp_im) < im_min else im_min  # update image stack min
    im_max = np.max(temp_im) if np.max(temp_im) > im_max else im_max  # update image stack max

print 'Window range: (%f, %f)' % (im_min, im_max)
tutorialguide.ipynb
neurostorm/tutorials
apache-2.0
It's also important to summarize what we've done, so that the user can see the outcome at a glance. Summarizing these results and those that require more intimate knowledge of the data, we come up with the following:

| property | value |
|:---------|:------|
| dataset name | kki2009_demo |
| x size | 182 |
| y size | 218 |
| z size | 182 |
| time range | (0, 0) |
| data type | uint8 |
| window range | (0, 255) |

Step 3: Doing the thing

This is usually the real deal. You've set the stage, preprocessed as needed, and now are ready for the task. Here you should provide a detailed description of the next steps, link to documentation, and provide some way to validate that what you are getting the user to do worked as expected.
print "more code here, as always"
tutorialguide.ipynb
neurostorm/tutorials
apache-2.0
Then we can start training the reward model. Note that we need to specify the total timesteps that the agent should be trained and how many fragment comparisons should be made.
pref_comparisons.train( total_timesteps=1000, # Note: set to 40000 to achieve sensible results total_comparisons=120, # Note: set to 4000 to achieve sensible results )
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
After we trained the reward network using the preference comparisons algorithm, we can wrap our environment with that learned reward.
from imitation.rewards.reward_wrapper import RewardVecEnvWrapper learned_reward_venv = RewardVecEnvWrapper(venv, reward_net.predict)
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Now we can train an agent that only sees the learned reward.
from stable_baselines3 import PPO from stable_baselines3.ppo import MlpPolicy learner = PPO( policy=MlpPolicy, env=learned_reward_venv, seed=0, batch_size=64, ent_coef=0.0, learning_rate=0.0003, n_epochs=10, n_steps=64, ) learner.learn(1000) # Note: set to 100000 to train a proficient expert
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Then we can evaluate it using the original reward.
from stable_baselines3.common.evaluation import evaluate_policy

reward, _ = evaluate_policy(learner.policy, venv, 10)
print(reward)
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Declare DataLoader(s)

The next step is to declare the DataLoader class that deals with the input variables.

Define the input variables that shall be used for the MVA training. Note that you may also use variable expressions, which can be parsed by TTree::Draw( "expression" ).

In this case the input data consists of an image of 8x8 pixels. Each single pixel is a branch in a ROOT TTree.
inputFileName = "images_data.root"

inputFile = ROOT.TFile.Open(inputFileName)

# retrieve input trees
signalTree = inputFile.Get("sig_tree")
backgroundTree = inputFile.Get("bkg_tree")

signalTree.Print()

loader = ROOT.TMVA.DataLoader("dataset")

### global event weights per tree (see below for setting event-wise weights)
signalWeight = 1.0
backgroundWeight = 1.0

### You can add an arbitrary number of signal or background trees
loader.AddSignalTree(signalTree, signalWeight)
loader.AddBackgroundTree(backgroundTree, backgroundWeight)

imgSize = 8 * 8
for i in range(0, imgSize):
    varName = "var" + str(i)
    loader.AddVariable(varName, 'F')
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Setup Dataset(s)

Apply optional selection cuts on the signal and background samples, and define how the events are split into training and test sets.
## Apply additional cuts on the signal and background samples (can be different)
mycuts = ROOT.TCut("")  ## for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1";
mycutb = ROOT.TCut("")  ## for example: TCut mycutb = "abs(var1)<0.5";

loader.PrepareTrainingAndTestTree(
    mycuts, mycutb,
    "nTrain_Signal=5000:nTrain_Background=5000:SplitMode=Random:"
    "NormMode=NumEvents:!V"
)
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Booking Methods Here we book the TMVA methods. We book a DNN and a CNN Booking Deep Neural Network Here we book the new DNN of TMVA. If using master version you can use the new DL method
inputLayoutString = "InputLayout=1|1|64"
batchLayoutString = "BatchLayout=1|32|64"
layoutString = "Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR"

training1 = "Optimizer=ADAM,LearningRate=1e-3,Momentum=0.,Regularization=None,WeightDecay=1e-4,"
training1 += "DropConfig=0.+0.+0.+0.,MaxEpochs=30,ConvergenceSteps=10,BatchSize=32,TestRepetitions=1"

trainingStrategyString = "TrainingStrategy=" + training1

dnnOptions = "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER:Architecture=CPU"
dnnOptions += ":" + inputLayoutString
dnnOptions += ":" + batchLayoutString
dnnOptions += ":" + layoutString
dnnOptions += ":" + trainingStrategyString

# we can now book the method
factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_DENSE", dnnOptions)
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Book Convolutional Neural Network in TMVA
# input layout
inputLayoutString = "InputLayout=1|8|8"

## Batch Layout
batchLayoutString = "BatchLayout=128|1|64"

layoutString = ("Layout=CONV|10|3|3|1|1|1|1|RELU,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,"
                "RESHAPE|FLAT,DENSE|64|TANH,DENSE|1|LINEAR")

## Training strategies.
training1 = ("LearningRate=1e-3,Momentum=0.9,Repetitions=1,"
             "ConvergenceSteps=10,BatchSize=128,TestRepetitions=1,"
             "MaxEpochs=20,WeightDecay=1e-4,Regularization=None,"
             "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0")

trainingStrategyString = "TrainingStrategy=" + training1

## General Options.
cnnOptions = ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
              "WeightInitialization=XAVIERUNIFORM")
cnnOptions += ":" + inputLayoutString
cnnOptions += ":" + batchLayoutString
cnnOptions += ":" + layoutString
cnnOptions += ":" + trainingStrategyString
cnnOptions += ":Architecture=CPU"

## book CNN
factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_CNN", cnnOptions)
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Book Convolutional Neural Network in Keras using a generated model
## to use tensorflow backend
import os
##os.environ["KERAS_BACKEND"] = "tensorflow"

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Reshape, BatchNormalization

model = Sequential()
model.add(Reshape((8, 8, 1), input_shape=(64,)))
model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer='TruncatedNormal', activation='relu', padding='same'))
model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer='TruncatedNormal', activation='relu', padding='same'))
# stride for maxpool is equal to pool size
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='tanh'))
# model.add(Dropout(0.2))
model.add(Dense(2, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])
model.save('model_cnn.h5')
model.summary()

factory.BookMethod(loader, ROOT.TMVA.Types.kPyKeras, "PyKeras",
                   "H:!V:VarTransform=None:FilenameModel=model_cnn.h5:"
                   "FilenameTrainedModel=trained_model_cnn.h5:NumEpochs=20:BatchSize=128")
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Test and Evaluate Methods
factory.TestAllMethods()
factory.EvaluateAllMethods()
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Plot ROC Curve We enable JavaScript visualisation for the plots
%jsroot on

c1 = factory.GetROCCurve(loader)
c1.Draw()

## close the output file to save the results
outputFile.Close()
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Operations on Tensors

Variables and Constants

Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable). Constant values cannot be changed, while variable values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values, while tensors constructed with tf.constant don't have these methods, and therefore their values cannot be changed. When you want to change the value of a tf.Variable x, use one of the following methods:

x.assign(new_value)
x.assign_add(value_to_be_added)
x.assign_sub(value_to_be_subtracted)
x = tf.constant([2, 3, 4])
x

x = tf.Variable(2.0, dtype=tf.float32, name="my_variable")

x.assign(45.8)  # TODO 1
x

x.assign_add(4)  # TODO 2
x

x.assign_sub(3)  # TODO 3
x
notebooks/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Training Loop Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
STEPS = 1000
LEARNING_RATE = 0.02
MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n"

w0 = tf.Variable(0.0)
w1 = tf.Variable(0.0)

for step in range(0, STEPS + 1):
    dw0, dw1 = compute_gradients(X, Y, w0, w1)
    w0.assign_sub(dw0 * LEARNING_RATE)
    w1.assign_sub(dw1 * LEARNING_RATE)

    if step % 100 == 0:
        loss = loss_mse(X, Y, w0, w1)
        print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))
notebooks/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
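The training loop relies on `loss_mse` and `compute_gradients` helpers defined earlier in the notebook. As a sketch of what they could look like (assuming the simple linear model `y = w0 * x + w1` that this loop fits; the notebook's exact definitions may differ), using `tf.GradientTape`:

```python
import tensorflow as tf

def loss_mse(X, Y, w0, w1):
    # mean squared error of the linear model y_hat = w0 * X + w1
    Y_hat = w0 * X + w1
    return tf.reduce_mean((Y_hat - Y) ** 2)

def compute_gradients(X, Y, w0, w1):
    # gradients of the MSE loss with respect to both weights
    with tf.GradientTape() as tape:
        loss = loss_mse(X, Y, w0, w1)
    return tape.gradient(loss, [w0, w1])

# quick check on data following Y = 2X + 10
X = tf.constant(range(10), dtype=tf.float32)
Y = 2 * X + 10
w0, w1 = tf.Variable(0.0), tf.Variable(0.0)
dw0, dw1 = compute_gradients(X, Y, w0, w1)
```

With both weights at zero, both gradients come out negative, so `assign_sub` moves the weights upward toward the true values 2 and 10.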
Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information. CNN Fox News Time Upload the image for the visualization to this directory and display the image inline in this notebook.
Image(filename='badgraphic.png') # Second picture should be attached now
assignments/assignment04/TheoryAndPracticeEx02.ipynb
brettavedisian/phys202-2015-work
mit
Certain classifiers in scikit-learn can also return the probability of a predicted class label via the predict_proba method. Using the predicted class probabilities instead of the class labels for majority voting can be useful if the classifiers in our ensemble are well calibrated. Let's assume that our classifiers c1, c2, c3 return the following class membership probabilities for a particular sample x:

c1(x) -> [0.9, 0.1]
c2(x) -> [0.8, 0.2]
c3(x) -> [0.4, 0.6]

To implement the weighted majority vote based on class probabilities, we can again make use of NumPy, using numpy.average and np.argmax:
ex = np.array([[0.9, 0.1],
               [0.8, 0.2],
               [0.4, 0.6]])

p = np.average(ex, axis=0, weights=[0.2, 0.2, 0.6])
p

np.argmax(p)
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Putting everything together, let's now implement a MajorityVoteClassifier in Python:
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator


class MajorityVoteClassifier(BaseEstimator, ClassifierMixin):
    """ A majority vote ensemble classifier

    Parameters
    ----------
    classifiers : array-like, shape = [n_classifiers]
        Different classifiers for the ensemble

    vote : str, {'classlabel', 'probability'} (default='classlabel')
        If 'classlabel' the prediction is based on the argmax of
        class labels. Else if 'probability', the argmax of the sum of
        probabilities is used to predict the class label
        (recommended for calibrated classifiers).

    weights : array-like, shape = [n_classifiers], optional (default=None)
        If a list of `int` or `float` values are provided, the classifiers
        are weighted by importance; Uses uniform weights if `weights=None`.

    """
    def __init__(self, classifiers, vote='classlabel', weights=None):
        self.classifiers = classifiers
        self.named_classifiers = {key: value for key, value
                                  in _name_estimators(classifiers)}
        self.vote = vote
        self.weights = weights

    def fit(self, X, y):
        """ Fit classifiers.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape = [n_samples, n_features]
            Matrix of training samples.

        y : array-like, shape = [n_samples]
            Vector of target class labels.

        Returns
        -------
        self : object

        """
        if self.vote not in ('probability', 'classlabel'):
            raise ValueError("vote must be 'probability' or 'classlabel'"
                             "; got (vote=%r)" % self.vote)

        if self.weights and len(self.weights) != len(self.classifiers):
            raise ValueError('Number of classifiers and weights must be equal'
                             '; got %d weights, %d classifiers'
                             % (len(self.weights), len(self.classifiers)))

        # Use LabelEncoder to ensure class labels start with 0, which
        # is important for np.argmax call in self.predict
        self.lablenc_ = LabelEncoder()
        self.lablenc_.fit(y)
        self.classes_ = self.lablenc_.classes_
        self.classifiers_ = []
        for clf in self.classifiers:
            fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
            self.classifiers_.append(fitted_clf)
        return self

    def predict(self, X):
        """ Predict class labels for X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape = [n_samples, n_features]
            Matrix of training samples.

        Returns
        ----------
        maj_vote : array-like, shape = [n_samples]
            Predicted class labels.

        """
        if self.vote == 'probability':
            maj_vote = np.argmax(self.predict_proba(X), axis=1)
        else:  # 'classlabel' vote
            # Collect results from clf.predict calls
            predictions = np.asarray([clf.predict(X)
                                      for clf in self.classifiers_]).T
            maj_vote = np.apply_along_axis(
                lambda x: np.argmax(np.bincount(x, weights=self.weights)),
                axis=1, arr=predictions)
        maj_vote = self.lablenc_.inverse_transform(maj_vote)
        return maj_vote

    def predict_proba(self, X):
        """ Predict class probabilities for X.

        Parameters
        ----------
        X : {array-like, sparse matrix}, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number of samples and
            n_features is the number of features.

        Returns
        ----------
        avg_proba : array-like, shape = [n_samples, n_classes]
            Weighted average probability for each class per sample.

        """
        probas = np.asarray([clf.predict_proba(X)
                             for clf in self.classifiers_])
        avg_proba = np.average(probas, axis=0, weights=self.weights)
        return avg_proba

    def get_params(self, deep=True):
        """ Get classifier parameter names for GridSearch"""
        if not deep:
            return super(MajorityVoteClassifier, self).get_params(deep=False)
        else:
            out = self.named_classifiers.copy()
            for name, step in six.iteritems(self.named_classifiers):
                for key, value in six.iteritems(step.get_params(deep=True)):
                    out['%s__%s' % (name, key)] = value
            return out
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
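The 'classlabel' branch of the predict method combines votes with a weighted np.bincount followed by np.argmax. A small standalone illustration of that mechanism (with made-up votes and the same weights used earlier):

```python
import numpy as np

# predictions of three classifiers for one sample: two vote for class 0, one for class 1
votes = np.array([0, 0, 1])

# with weights [0.2, 0.2, 0.6], class 1 collects more total weight (0.6 vs 0.4)
counts = np.bincount(votes, weights=[0.2, 0.2, 0.6])
winner = np.argmax(counts)
print(counts, winner)
```

Even though class 0 receives more raw votes, the weighted tally makes class 1 the majority decision, mirroring the probability-based example above.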
A lot of comments are added to the code to better understand the individual parts. However, before we implement the remaining methods, let's take a quick break and discuss some of the code that may look confusing at first. We used the parent classes BaseEstimator and ClassifierMixin to get some base functionality for free, including the methods get_params and set_params to set and return the classifier's parameters, as well as the score method to calculate the prediction accuracy. Also note that we imported six to make the MajorityVoteClassifier compatible with Python 2.7.

Next we add the predict method to predict the class label via majority vote based on the class labels if we initialize a new MajorityVoteClassifier object with vote='classlabel'. Alternatively, we can initialize the ensemble classifier with vote='probability' to predict the class label based on the class membership probabilities. Furthermore, we also add a predict_proba method to return the average probabilities, which is useful for computing the area under the Receiver Operating Characteristic curve (ROC AUC).

Also, note that we defined our own modified version of the get_params method, which uses the _name_estimators function in order to access the parameters of the individual classifiers in the ensemble. This may look a little complicated at first, but it will make perfect sense when we use grid search for hyperparameter tuning in later sections.

Combining different algorithms for classification with majority vote

Now it is about time to put the MajorityVoteClassifier that we implemented in the previous section into action. But first, let's prepare a dataset that we can test it on.

Heart Disease Data Set

We will be using the Heart Disease Data Set. The dataset actually contains 76 attributes, but for simplicity we will deal with only the 14 most important ones.
Using these 14 attributes, our job is to predict whether the patient has a heart disease or not.

Attribute Information:

The 14 attributes used: [age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal num]

Complete attribute documentation:
1. age: age in years
2. sex: sex (1 = male; 0 = female)
3. cp: chest pain type
   - Value 1: typical angina
   - Value 2: atypical angina
   - Value 3: non-anginal pain
   - Value 4: asymptomatic
4. trestbps: resting blood pressure (in mm Hg on admission to the hospital)
5. chol: serum cholestoral in mg/dl
6. fbs: (fasting blood sugar > 120 mg/dl) (1 = true; 0 = false)
7. restecg: resting electrocardiographic results
   - Value 0: normal
   - Value 1: having ST-T wave abnormality (T wave inversions and/or ST elevation or depression of > 0.05 mV)
   - Value 2: showing probable or definite left ventricular hypertrophy by Estes' criteria
8. thalach: maximum heart rate achieved
9. exang: exercise induced angina (1 = yes; 0 = no)
10. oldpeak: ST depression induced by exercise relative to rest
11. slope: the slope of the peak exercise ST segment
    - Value 1: upsloping
    - Value 2: flat
    - Value 3: downsloping
12. ca: number of major vessels (0-3) colored by flourosopy
13. thal: 3 = normal; 6 = fixed defect; 7 = reversable defect
14. num: diagnosis of heart disease (angiographic disease status)

We will be using the preprocessed dataset. It can be downloaded from here: http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data

Once downloaded, we can move on to the code:
import os
from time import time

import numpy as np
import pandas as pd

from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import (roc_auc_score, classification_report,
                             precision_score, recall_score, accuracy_score)

cols = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg',
        'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal', 'num']

# read .csv from the provided dataset
csv_filename = "processed.cleveland.data"

# the file is comma-separated and has no header row
df = pd.read_csv(csv_filename, sep=',', names=cols)

df.head()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Thus we have our dataset. But we want our task to be a binary classification task, i.e. we would like to classify whether the patient has a heart disease or not. However, our target variable 'num' contains 5 values: 0, 1, 2, 3, 4. We will simply attempt to distinguish presence (values 1, 2, 3, 4) from absence (value 0). We can clean our target variable values accordingly:
count0 = 0
for z in df['num']:
    if z == 0:
        count0 = count0 + 1
print(count0)

for v in df['num']:
    if v != 0:
        df['num'].replace(v, 1, inplace=True)

count0 = 0
for z in df['num']:
    if z == 0:
        count0 = count0 + 1
print(count0)

count0 = 0
for z in df['num']:
    if z != 0:
        count0 = count0 + 1
print(count0)

df.head()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
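The loop-and-replace approach above works, but the same binarization can be expressed in one vectorized line. A sketch, using a tiny stand-in frame since the real CSV isn't bundled here:

```python
import pandas as pd

# stand-in for the real 'num' column of the heart-disease frame
df = pd.DataFrame({"num": [0, 1, 2, 0, 3, 4, 0]})

# map any non-zero diagnosis value to 1 (disease present), keep 0 (absent)
df["num"] = (df["num"] != 0).astype(int)

print(df["num"].tolist())  # → [0, 1, 1, 0, 1, 1, 0]
```

Besides being shorter, this avoids repeatedly calling replace while iterating over the column.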
Our data contains 6 rows with missing values. These values are represented by "?". So first we replace these "?" with NaN and then drop all rows which contain NaNs. We can simply achieve this by doing the following:
df.replace("?", np.NaN, inplace=True)
df.dropna(axis=0, inplace=True, how='any')
df = df.reset_index(drop=True)
df.head()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Now we can move on to classification of our data.
features = df.columns[:-1]
features

X = df[features]
y = df['num']

y.unique()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
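From here the notebook fits the ensemble on a train/test split. As a self-contained sketch of that workflow — using scikit-learn's built-in VotingClassifier, which implements the same soft-voting idea as the MajorityVoteClassifier above, and a synthetic 13-feature dataset standing in for the heart-disease matrix:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

# synthetic stand-in for the 13-feature heart-disease matrix
X, y = make_classification(n_samples=300, n_features=13, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=3, random_state=1)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",  # average predicted probabilities, like vote='probability'
)
ensemble.fit(X_train, y_train)
score = ensemble.score(X_test, y_test)
print("test accuracy: %.3f" % score)
```

The same fit/score calls apply unchanged to the MajorityVoteClassifier defined earlier, with X and y taken from the cleaned heart-disease frame.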