First, we find yearly mean temperatures and then calculate overall mean temperature.
yearly_tmean = dd1.T2MMEAN.resample(time="1AS").mean('time')[:,0,0]
tmean_mean = yearly_tmean.mean(axis=0)
print('Overall mean for tmean is ' + str("%.2f" % tmean_mean.values))
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Now it is time to plot the mean annual temperature in the Arctic. We also mark the overall average for 1980-2019 with a red dotted line; the green line marks the trend. The other significant thing we can see from the plot is that temperatures have been rising over the years. The small anomalies are normal, while af...
make_plot(yearly_tmean.loc[yearly_tmean['time.year'] < 2019], dataset,
          'Mean annual temperature in ' + ' '.join(area_name.split('_')),
          ylabel='Temperature [C]', compare_line=tmean_mean.values, trend=True)
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Here we show the number of days in a year when the temperature exceeds 10 °C. We can see that the year 2004 stands out clearly.
daily_data = dd1.T2MMEAN[:,0,0]
make_plot(daily_data[np.where(daily_data.values > 10)].groupby('time.year').count(), dataset,
          'Number of days in year when mean temperature exceeds 10 $^o$C in ' + ' '.join(area_name.split('_')),
          ylabel='Days of year')
print('Yearly average days when temperature exceeds 10 C is ' + str("...
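The per-year counting idea can be sketched without xarray using only the standard library; the 10 °C threshold follows the cell above, and the (year, temperature) records below are invented for illustration:

```python
from collections import Counter

# hypothetical (year, daily mean temperature) records
records = [(2003, 8.0), (2003, 12.5), (2004, 11.0),
           (2004, 13.2), (2004, 9.9), (2005, 7.0)]

# count days above 10 C per year
counts = Counter(year for year, t in records if t > 10)
print(dict(counts))  # {2003: 1, 2004: 2}
```

A `Counter` returns 0 for years with no hot days, so no special-casing is needed when reading off results.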
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
Last, we will compare observational data from the beginning of this year with the overall January and February mean.
compare_observations_analysis_mean(time_synop, temp_synop, 'Air Temperature',
                                   jan_mean_temp.values, '10 year average temperature',
                                   'Beginning of 2019 temperature in ' + ' '.join(area_name.split('_')),
                                   'january_feb_' + area_name + '.png')
api-examples/arctic_temperature.ipynb
planet-os/notebooks
mit
For real problems (like below) this generally gives a bit of a speed boost. After doing that we can import quimb:
import quimb as qu
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
We are not going to construct the Hamiltonian directly; instead we leave it as a Lazy object, so that each MPI process can construct its own rows and avoid redundant communication and memory use. To do that we need to know the size of the matrix first:
# total hilbert space for 18 spin-1/2s
n = 18
d = 2**n
shape = (d, d)

# And make the lazy representation
H_opts = {'n': n, 'dh': 3.0, 'sparse': True, 'seed': 9}
H = qu.Lazy(qu.ham_mbl, **H_opts, shape=shape)
H
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
This Hamiltonian also conserves z-spin, which we can use to make the effective problem significantly smaller. This is done by supplying a projector onto the subspace we are targeting. We also need to know its size first if we want to leave it 'unconstructed':
# total Sz=0 subspace size (n choose n / 2)
from scipy.special import comb
ds = comb(n, n // 2, exact=True)
shape = (d, ds)

# And make the lazy representation
P = qu.Lazy(qu.zspin_projector, n=n, shape=shape)
P
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
Now we can solve the Hamiltonian for 5 eigenpairs centered around energy 0:
%%time
lk, vk = qu.eigh(H, P=P, k=5, sigma=0.0, backend='slepc')
print('energies:', lk)
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
eigh takes care of projecting H into the subspace ($\tilde{H} = P^{\dagger} H P$), and mapping the eigenvectors back to the computational basis once found. Here we specified the 'slepc' backend. In an interactive session, this will spawn the MPI workers for you (using mpi4py). Other options would be to run this in a script ...
# get an MPI executor pool
pool = qu.linalg.mpi_launcher.get_mpi_pool()

# 'submit' the function with args to the pool
e_k_b_ij = [[pool.submit(qu.ent_cross_matrix, vk[:, [k]], sz_blc=b)
             for b in [1, 2, 3, 4]]
            for k in range(5)]
e_k_b_ij
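The subspace projection that eigh performs internally can be sketched in plain numpy. The 4×4 Hermitian matrix and 2-column orthonormal projector below are toys standing in for quimb's H and P, but the algebra ($\tilde{H} = P^{\dagger} H P$, then mapping back with $P$) is the same:

```python
import numpy as np

# toy Hermitian matrix standing in for H (block structure chosen for clarity)
H = np.array([[1., 2., 0., 0.],
              [2., 1., 0., 0.],
              [0., 0., 3., 0.],
              [0., 0., 0., 4.]])

# orthonormal projector onto the first two basis states
P = np.array([[1., 0.],
              [0., 1.],
              [0., 0.],
              [0., 0.]])

# effective problem in the subspace: H_tilde = P^dagger H P
H_tilde = P.conj().T @ H @ P

# solve the smaller problem, then map eigenvectors back to the full basis
w, v = np.linalg.eigh(H_tilde)
v_full = P @ v
```

Here `H_tilde` is 2×2 with eigenvalues -1 and 3, and `P @ v` lifts the subspace eigenvectors back into full eigenvectors of `H`.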
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
Once we have submitted all this work to the pool (which works in any of the modes described above), we can retrieve the results:
# convert each 'Future' into its result
e_k_b_ij = [[f.result() for f in e_b_ij] for e_b_ij in e_k_b_ij]

%matplotlib inline
from matplotlib import pyplot as plt

fig, axes = plt.subplots(4, 5, figsize=(15, 10), squeeze=True)
for k in range(5):
    for b in [1, 2, 3, 4]:
        e_ij = e_k_b_...
docs/examples/ex_distributed_shift_invert.ipynb
jcmgray/quijy
mit
MyriaL Basics First you scan in whatever tables you want to use in that cell. These are the tables visible in the Myria-Web datasets tab. R1 = scan(cosmo8_000970); This puts all of the data in the cosmo8_000970 table into the relation R1, which can now be queried with MyriaL: R2 = select * from R1 limit 5; Once we ...
%%query
-- comments in MyriaL look like this
-- notice that the notebook highlighting still thinks we are writing python: in, from, for, range, return
R1 = scan(cosmo8_000970);
R2 = select * from R1 limit 5;
R3 = select iOrder from R1 limit 5;
store(R2, garbage);

%%query
-- there are some built in functions that are...
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
User Defined Functions In MyriaL we can define our own functions that will then be applied to the results of a query. These can either be written in Python and registered with Myria, or written directly within a MyriaL cell (but not in Python). When registering a Python function as a UDF, we need to specify ...
from raco.types import DOUBLE_TYPE
from myria.udf import MyriaPythonFunction

# each row is passed in as a tuple within a list
def sillyUDF(tuplList):
    row = tuplList[0]
    x = row[0]
    y = row[1]
    z = row[2]
    if (x > y):
        return x + y + z
    else:
        return z

# A python function needs...
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
There is also special syntax for user defined aggregate functions, which use all of the rows to produce a single output, like a Reduce or Fold function pattern:

uda func-name(args) {
    initialization-expr(s);
    update-expr(s);
    result-expr(s);
};

where each of the inner lines is a bracketed statement with an entry for ea...
%%query
-- UDA example using MyriaL functions inside the UDA update line
def pickBasedOnValue2(val1, arg1, val2, arg2):
    case when val1 >= val2 then arg1 else arg2 end;
def maxValue2(val1, val2):
    case when val1 >= val2 then val1 else val2 end;

uda arg...
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
Working with multiple snapshots On the Myria demo cluster we only provide cosmo8_000970, but on a private cluster we could load in any number of snapshots to look for how things change over time.
%%query
c8_000970 = scan(cosmo8_000970);
c8_000962 = scan(cosmo8_000962);

-- finding all gas particles that were destroyed between step 000962 and 000970
c1Gases = select iOrder from c8_000962 where type = 'gas';
c2Gases = select iOrder from c8_000970 where type = 'gas';
exist = select c1.iOrder from c1Gases as c1, c...
ipnb examples/Cosmo8+Demo+Notebook.ipynb
uwescience/myria-python
bsd-3-clause
percentage
percentage_1 = len(np.where((samples1 >= -1) & (samples1 <= 1))[0]) / N
percentage_2 = len(np.where((samples1 >= -2) & (samples1 <= 2))[0]) / N
percentage_3 = len(np.where((samples1 >= -3) & (samples1 <= 3))[0]) / N
print("between -1 and 1: %{}".format(percentage_1))
print("between -2 and 2: %{}".format(percentage_2))
pri...
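The empirical fractions above can be compared against the theoretical values for a standard normal, which follow from the error function via $\Phi(k) - \Phi(-k) = \operatorname{erf}(k/\sqrt{2})$; this needs only the stdlib:

```python
import math

def within_k_sigma(k):
    """P(-k <= X <= k) for X ~ N(0, 1)."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"between -{k} and {k}: {within_k_sigma(k):.4f}")
# between -1 and 1: 0.6827
# between -2 and 2: 0.9545
# between -3 and 3: 0.9973
```

These are the familiar 68-95-99.7 values that the sampled percentages should approach for large N.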
Trueskill.ipynb
muatik/dm
mit
creating the second sample set
N = 10000
mean = 1
std_dev = 1
samples2 = np.random.normal(mean, std_dev, N)
Trueskill.ipynb
muatik/dm
mit
pairing the two sample sets: the X coordinate of each point is a sample from the first set and the Y coordinate is the corresponding sample from the second set
plt.scatter(samples1, samples2)
plt.plot(np.linspace(-3, 5, 10000), np.linspace(-3, 5, 10000), c="r")
Trueskill.ipynb
muatik/dm
mit
the fraction of samples which lie above the diagonal line where X=Y
len(np.where((samples2 > samples1))[0]) / N
Trueskill.ipynb
muatik/dm
mit
the posterior distribution numerically over Ywins
def calc_ywins():
    from scipy.stats import norm
    meanD = 1 - 0
    varD = 1 + 1
    x = np.linspace(-10, 11, 1000)
    dProbs = norm.pdf(x, meanD, scale=varD**(1/2))  # scale is standard deviation
    plt.plot(x, dProbs)
    plt.plot([0, 0], [0, 0.25], 'r-')
    plt.fill_between(x, 0, dProbs, color='cyan', where=x > 0)
    ...
Trueskill.ipynb
muatik/dm
mit
Simple graph analytics for the Twitter stream For this first step we want:
- top 10 retweeted users
- top 10 PageRanked users
- basic matplotlib viz
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
%matplotlib inline
import networkx as nx
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Building the directed graph We build the retweet graph, where an edge goes from the original tweeter to the retweeter. We add node weights corresponding to how many times each node was retweeted.
graph = nx.DiGraph()
for tweet in data:
    if tweet.get('retweet') == 'Y':
        name = tweet.get('name')
        original_name = tweet.get('original_name')
        followers = tweet.get('followers')
        if name not in graph:
            graph.add_node(name, retweets=0)
        if original_name not in graph:
            ...
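The node-weight bookkeeping can be sketched without networkx using a plain dict; the tweet fields `name`, `original_name`, and `retweet` follow the cell above, while the sample stream itself is invented:

```python
# invented sample stream of tweet dicts
data = [
    {'retweet': 'Y', 'name': 'bob', 'original_name': 'alice'},
    {'retweet': 'Y', 'name': 'carol', 'original_name': 'alice'},
    {'retweet': 'N', 'name': 'dave'},
]

retweets = {}  # node -> number of times retweeted
edges = []    # (original tweeter, retweeter) pairs

for tweet in data:
    if tweet.get('retweet') == 'Y':
        name = tweet.get('name')
        original_name = tweet.get('original_name')
        retweets.setdefault(name, 0)                               # retweeter node, weight 0 if new
        retweets[original_name] = retweets.get(original_name, 0) + 1  # credit the original tweeter
        edges.append((original_name, name))

print(retweets)  # {'bob': 0, 'alice': 2, 'carol': 0}
print(edges)     # [('alice', 'bob'), ('alice', 'carol')]
```

The networkx version stores the same counts as node attributes and the same pairs as directed edges.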
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Most retweeted users
top10_retweets = sorted([(node, graph.node[node]['retweets']) for node in graph.nodes()],
                        key=lambda x: -x[1])[0:10]
top10_retweets
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Top 10 Pageranked users Note - these are the 'archetypal retweeters' of the graph (well, not exactly. see https://en.wikipedia.org/wiki/PageRank)
pr = nx.pagerank(graph)
colors = [pr[node] for node in graph.nodes()]
# sort by pagerank descending so the slice really is the top 10
top10_pr = sorted([(k, v) for k, v in pr.items()], key=lambda x: -x[1])[0:10]
label_dict = dict([(k[0], k[0]) for k in top10_pr])
top10_pr
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Basic network viz
- size of nodes is number of retweets
- color of nodes is pagerank
- we only label the top 10 pageranked users
plt.figure(figsize=(11, 11))
plt.axis('off')
weights = [10*(graph.node[node]['retweets'] + 1) for node in graph.nodes()]
nx.draw_networkx(graph, node_size=weights, width=.1, linewidths=.1,
                 with_labels=True, node_color=colors, cmap='RdYlBu', labels=label_dict)
consumer.clo...
exploratory_notebooks/eventador_simplegraph.ipynb
jss367/assemble
mit
Setting up the inputs Target inputs (optical constant database, sphere coordinates, composition and size)
# building the optical constant database
eps_db_out = py_gmm.mat.generate_eps_db('../epsilon/', ext='*.edb')
eps_files, eps_names, eps_db = eps_db_out['eps_files'], eps_db_out['eps_names'], eps_db_out['eps_db']

# sphere radius (in nm)
v_r = np.array([40., 40.])

# sphere position (in nm)
m_xyz = np.array([[-42.5, 0., ...
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Plane wave incident field
# Euler angles: (alpha,beta,gamma)=(0,0,0) means a z-directed, x-polarized plane wave
alpha = 0.0  # azimuth
beta = 0.0   # polar
gamma = 0.0  # polarization

# Wavelengths for the spectrum computation
wl_min = 300
wl_max = 800
n_wl = 250
v_wl = np.linspace(wl_min, wl_max, n_wl)

# Wavelength for the local field computati...
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Additional inputs for the simulation
n_stop = 10     # maximum multipolar expansion order
f_int = 0.0     # interaction cutoff (normally set to zero to have full interactions)
lf_ratio = 300  # plot sphere local field contribution up to a distance equal to d=lf_ratio*r_sphere
qs_flag = 'no'  # retarded simulation
n_E = 400       # local field plotting grid resolution
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Target plot
# target plot
fig = plt.figure(num=1, figsize=(10, 10))       # setting the figure size
ax = fig.add_subplot(1, 1, 1, aspect='equal')  # creating the plotting axis

# plot bounds and eliminating x and y ticks
plt.xlim(-1.1*(-m_xyz[0,0]+v_r[0]), 1.1*(m_xyz[1,0]+v_r[ns-1]))
plt.ylim(-1.1*(v_r[0]), 1.1*(v_r[0]))
plt.xticks([])
plt....
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Local Field
# local field for the first resonance
v_field = []
for wl_lf in v_wl_lf:
    # optical constants
    e_list = py_gmm.mat.db_to_eps(wl_lf, eps_db, target_comp)
    m_eps = np.column_stack((np.real(e_list), np.imag(e_list)))

    # gmm coefficients computation
    out = py_gmm.gmm_py.gmm_f2py_module.expansion_coefficients(m_xy...
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Plotting the results Extinction cross section
# cross section plot
f_size = 25
f_size_ticks = 20
plt.figure(1, figsize=(15, 10))
plt.plot(v_wl, np.sum(v_cext, axis=1), 'k', linewidth=3.0)
plt.plot(v_wl, v_cext[:,0], '0.6', v_wl, v_cext[:,1], 'y', linewidth=2.0)

# plot title
plt.title('AuAg dimer', fontsize=f_size)

# axes labels
plt.xlabel(r'wavelength (nm)', fontsi...
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Local field enhancement
# local field plot
f_size = 25
fig = plt.figure(2, figsize=(14, 10))
v_title = ['High energy resonance', 'Low energy resonance']
for i_E, m_E in enumerate(v_field):
    ax = fig.add_subplot(2, 1, i_E+1, aspect='equal')  # creating the plotting axis
    plt.imshow(m_E.T, origin='lower', cmap='gnuplot2', aspect=(y_max-y_min)/(...
examples/AuAg_dimer.ipynb
gevero/py_gmm
gpl-3.0
Programming Idioms To Make Life Easier Iterating Lists - Enumerate
my_list = ['hi', 'this', 'is', 'my', 'list']
for i, e in enumerate(my_list):
    print(i, e)
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Iterating Lists - Zip
x = range(10)
y = range(10, 20)
for xi, yi in zip(x, y):
    print(xi, yi)
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
List Comprehensions This allows you to avoid a for loop when you don't want to type one out
powers = [xi ** 2 for xi in range(4)]
print(powers)

# make y = x^2 quickly
x = [1, 3, 10]
y = [xi ** 2 for xi in x]
for xi, yi in zip(x, y):
    print(xi, yi)

# by convention, use _ to indicate we don't care about a variable
zeros = [0 for _ in range(4)]
print(zeros)

x = [4, 3, 6, 1, 4]
mean_x = sum(x) / len(x)
delta...
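The cell above is cut off mid-computation; it presumably continues from the mean toward deviations and a standard deviation. A complete stdlib sketch of that same comprehension idiom (the continuation is an assumption, not the original code):

```python
x = [4, 3, 6, 1, 4]
mean_x = sum(x) / len(x)  # 18 / 5 = 3.6

# deviations from the mean via a comprehension
deltas = [xi - mean_x for xi in x]

# population variance and standard deviation from the deviations
var_x = sum(d ** 2 for d in deltas) / len(x)
std_x = var_x ** 0.5
print(mean_x)  # 3.6
```

The same pattern generalizes to any per-element transform followed by a reduction.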
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Dict Comprehensions
x = [4, 10, 11]
y = ['number of people', 'the number 10', 'another number']

# key: value
my_dict = {yi: xi for xi, yi in zip(x, y)}
print(my_dict)
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
F Strings You can simplify string formatting with f strings:
answer = 4
mean = -3
print(f'The answer is {answer} and the mean is {mean:.2f}')
unit_9/lectures/lecture_1.ipynb
whitead/numerical_stats
gpl-3.0
Getting started
- Easy to install: pip install plotly
- How to save and view files? Plotly can work offline and save plots as .html files to open in a web browser or a Jupyter notebook, or upload them to an online account for easy sharing: the import statement automatically signs you in

How It Works
- Graph objects: same structure as native Python dictionari...
# (*) Tools to communicate with Plotly's server
import plotly.plotly as py

# (*) Useful Python/Plotly tools
import plotly.tools as tls

# (*) Graph objects to piece together your Plotly plots
import plotly.graph_objs as go
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
The following code will make a simple line and scatter plot:
# Create random data with numpy
import numpy as np

N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N) + 5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N) - 5

# (1.1) Make a 1st Scatter object
trace0 = go.Scatter(
    x = random_x,
    y = random_y0,
    mode = 'markers',
    name = '$\m...
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Figure objects store data like a Python dictionary.
# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='scatter-mode')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Can save a static image as well:
py.image.save_as(fig, filename='scatter-mode.png')
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Histograms
# (1) Generate some random numbers
x0 = np.random.randn(500)
x1 = np.random.randn(500) + 1

# (2.1) Create the first Histogram object
trace1 = go.Histogram(
    x=x0,
    histnorm='count',
    name='control',
    autobinx=False,
    xbins=dict(
        start=-3.2,
        end=2.8,
        size=0.2
    ),
    marker=dict(...
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Distplots Similar to seaborn.distplot. Plot a histogram, kernel density or normal curve, and a rug plot all together.
from plotly.tools import FigureFactory as FF

# Add histogram data
x1 = np.random.randn(200) - 2
x2 = np.random.randn(200)
x3 = np.random.randn(200) + 2
x4 = np.random.randn(200) + 4

# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']

# Create distplot wit...
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
2D Contour Plot
x = np.random.randn(1000)
y = np.random.randn(1000)
py.iplot([go.Histogram2dContour(x=x, y=y,
                                  contours=go.Contours(coloring='fill')),
          go.Scatter(x=x, y=y, mode='markers',
                     marker=go.Marker(color='white', size=3, opacity=0.3))])
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
3D Surface Plot Plot the function: $f(x,y) = A \cos^2(\pi x y) e^{-(x^2+y^2)/2}$
# Define the function to be plotted
def fxy(x, y):
    A = 1  # choose a maximum amplitude
    return A*(np.cos(np.pi*x*y))**2 * np.exp(-(x**2+y**2)/2.)

# Choose length of square domain, make row and column vectors
L = 4
x = y = np.arange(-L/2., L/2., 0.1)  # use a mesh spacing of 0.1
yt = y[:, np.newaxis]  # (!) mak...
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Matplotlib Conversion
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab

n = 50
x, y, z, s, ew = np.random.rand(5, n)
c, ec = np.random.rand(2, n, 4)
area_scale, width_scale = 500, 5

fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=c, s=np.square(s)*area_scale, edgecolor=ec, line...
lecture9/Plotly_Presentation.ipynb
jstac/quantecon_nyu_2016
bsd-3-clause
Foundations Ways of defining curves Implicit form The implicit form specifies the points that make up the curve in the form of a test, with which one can decide whether a given point lies on the curve. In the two-dimensional case the implicit form can be written as $$ f(x, y) = 0, $$ an equation which the points making up the cur...
addScript("js/c0-parametric-continuity", "c0-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
$C^1$ mathematical continuity In this case the first derivative of the curve segments (the tangent vector of the curve) agrees at the join point. For the curves described by the previous functions $f(t)$ and $g(u)$, we thus have $$ f^\prime(t_2) = g^\prime(u_1). $$ If $C^1$ continuity does not hold, then at the join point ...
addScript("js/c1-parametric-continuity", "c1-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
$C^2$ mathematical continuity In the case of $C^2$ mathematical continuity, the second derivatives of the curves agree at the join point. That is, $$ f^{\prime\prime}(t_2) = g^{\prime\prime}(u_1). $$ In the absence of $C^2$ continuity there will be no break at the join point, but the shape of the curve may change abruptly.
addScript("js/c2-parametric-continuity", "c2-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Geometric continuity $G^0$ geometric continuity Means the same as $C^0$ mathematical continuity: the curve segments join each other. $G^1$ geometric continuity $G^1$ geometric continuity means that the tangent vectors of the two joining curve segments at the join point are of different magnitude, but ...
addScript("js/g1-geometric-continuity", "g1-parametric-continuity")
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Relationship between mathematical and geometric continuity Mathematical continuity is stricter than geometric continuity, since $n$-th order mathematical continuity requires equality of the $n$-th derivatives. Therefore, if two curves join with $C^n$ mathematical continuity, then this join also ...
def styling():
    styles = open("../../styles/custom.html", "r").read()
    return HTML(styles)

styling()
notebooks/01-alapozas/01-alapozas.ipynb
kompgraf/course-material
mit
Models support dimension inference from data. You can defer some or all of the dimensions.
model = Linear(init_W=zero_init)
print(f"Initialized model with no input/output dimensions.")

X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimensi...
examples/01_intro_model_definition_methods.ipynb
spacy-io/thinc
mit
Simulate a random tree with 20 tips and crown age of 10M generations
tree = toytree.rtree.bdtree(ntips=20, seed=555)
tree = tree.mod.node_scale_root_height(10e6)
tree.draw(scalebar=True);
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Simulate SNPs with missing data and write to database (.seqs.hdf5)
# init simulator with one diploid sample from each tip
model = ipcoal.Model(tree, Ne=1e6, nsamples=2, recomb=0)

# simulate sequence data on 10K loci
model.sim_loci(10000, 50)

# add missing data (50%)
model.apply_missing_mask(0.5)

# write results to database file
model.write_snps_to_hdf5(name="test-tet-miss50", outdi...
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Infer tetrad tree
SNPS = "/tmp/test-tet-miss50.snps.hdf5"
tet = ipa.tetrad(
    data=SNPS,
    name="test-tet-miss50",
    workdir="/tmp",
    nboots=10,
    nquartets=1e6,
)
tet.run(auto=True, force=True)
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Draw the inferred tetrad tree
tre = toytree.tree(tet.trees.cons)
rtre = tre.root(["r19", "r18", "r17"])
rtre.draw(ts='d', use_edge_lengths=False, node_labels="support");
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Does this tree match the true species tree?
rfdist = rtre.treenode.robinson_foulds(tree.treenode)[0]
rfdist == 0
testdocs/analysis/cookbook-tetrad-ipcoal.ipynb
dereneaton/ipyrad
gpl-3.0
Config variables
- api_key: the key used to authenticate with the DarkSky API; get your own at darksky.net/dev
- Longitude/Latitude: Geopy-derived geographical information for your specified location
- base_url: the base URL used for GET requests to the DarkSky API, containing your api_key and location information
config = ConfigParser.RawConfigParser()
config.read('synchronization.cfg')
api_key = config.get('Darksky', 'api_key')

geolocator = Nominatim()
location = geolocator.geocode('Muntstraat 10 Leuven')
latitude = location.latitude
longitude = location.longitude

base_url = config.get('Darksky', 'base_url') + api_key \
    ...
Basic_setup.ipynb
mdeloge/DarkSky
mit
URL builder This function builds a list that can later be iterated over to fetch a large amount of weather data in a fixed date range.
def url_builder(start_date, end_date=dt.datetime.now()):
    url_list = []
    delta = end_date - start_date
    for counter in range(delta.days):
        timestamp = str(time.mktime((start_date + dt.timedelta(days=counter)).timetuple()))[:-2]
        if os.path.isfile('local_data/full_data_' + timestamp + '.json'):
            ...
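The day-by-day iteration at the heart of url_builder can be sketched with the stdlib alone; the helper below is a simplified stand-in (no URL assembly or cache check), producing one Unix timestamp per day in the range:

```python
import datetime as dt

def daily_timestamps(start_date, end_date):
    """Unix timestamps (as ints) for each day in [start_date, end_date)."""
    delta = end_date - start_date
    stamps = []
    for counter in range(delta.days):
        day = start_date + dt.timedelta(days=counter)
        # timestamp() interprets a naive datetime in local time, like time.mktime
        stamps.append(int(day.timestamp()))
    return stamps

stamps = daily_timestamps(dt.datetime(2019, 1, 1), dt.datetime(2019, 1, 4))
print(len(stamps))  # 3
```

Each timestamp would then be interpolated into the DarkSky time-machine URL and skipped if its cached JSON file already exists.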
Basic_setup.ipynb
mdeloge/DarkSky
mit
Fahrenheit to Celsius Because everyone prefers metric units
def f_t_c(fahrenheit):
    return ((fahrenheit - 32) * 5.0) / 9.0
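A quick sanity check of the conversion against well-known reference points (the function is reproduced so the snippet is self-contained):

```python
def f_t_c(fahrenheit):
    return ((fahrenheit - 32) * 5.0) / 9.0

print(f_t_c(32))   # 0.0   (freezing point of water)
print(f_t_c(212))  # 100.0 (boiling point of water)
print(f_t_c(-40))  # -40.0 (the two scales cross here)
```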
Basic_setup.ipynb
mdeloge/DarkSky
mit
Fetch JSON data from URL JSON data is stored locally so that all future code can use the locally stored files and doesn't require any remote API calls.
def fetch_and_store_json(url):
    try:
        request = requests.get(url=url, timeout=10)
    except ReadTimeout as t:
        print "Read timeout"
        request = None
    if request is None:
        while(request is None):
            request = requests.get(url=url, timeout=1)
    content = json.loads(request.con...
Basic_setup.ipynb
mdeloge/DarkSky
mit
We want a bigger batch size as our data is not balanced.
AUTOTUNE = tf.data.AUTOTUNE
GCS_PATH = "gs://kds-b38ce1b823c3ae623f5691483dbaa0f0363f04b0d6a90b63cf69946e"
BATCH_SIZE = 64
IMAGE_SIZE = [1024, 1024]
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Load the data
FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/train*.tfrec")
split_ind = int(0.9 * len(FILENAMES))
TRAINING_FILENAMES, VALID_FILENAMES = FILENAMES[:split_ind], FILENAMES[split_ind:]

TEST_FILENAMES = tf.io.gfile.glob(GCS_PATH + "/tfrecords/test*.tfrec")
print("Train TFRecord Files:", len(TRAINING_FILENAMES))
prin...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Decoding the data The images have to be converted to tensors so that they are valid input for our model. As images use the RGB scale, we specify 3 channels. We also reshape our data so that all of the images have the same shape.
def decode_image(image):
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.cast(image, tf.float32)
    image = tf.reshape(image, [*IMAGE_SIZE, 3])
    return image
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
As we load in our data, we need both our X and our Y. The X is our image; the model will find features and patterns in our image dataset. We want to predict Y, the probability that the lesion in the image is malignant. We will go through our TFRecords and parse out the image and the target values.
def read_tfrecord(example, labeled):
    tfrecord_format = (
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "target": tf.io.FixedLenFeature([], tf.int64),
        }
        if labeled
        else {"image": tf.io.FixedLenFeature([], tf.string),}
    )
    example = tf.io.parse_single_...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Define loading methods Our dataset is not ordered in any meaningful way, so the order can be ignored when loading it. By ignoring the order and reading files as soon as they come in, loading the data takes less time.
def load_dataset(filenames, labeled=True):
    ignore_order = tf.data.Options()
    ignore_order.experimental_deterministic = False  # disable order, increase speed
    dataset = tf.data.TFRecordDataset(
        filenames
    )  # automatically interleaves reads from multiple files
    dataset = dataset.with_options(
        ...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
We define the following function to get our different datasets.
def get_dataset(filenames, labeled=True):
    dataset = load_dataset(filenames, labeled=labeled)
    dataset = dataset.shuffle(2048)
    dataset = dataset.prefetch(buffer_size=AUTOTUNE)
    dataset = dataset.batch(BATCH_SIZE)
    return dataset
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Visualize input images
train_dataset = get_dataset(TRAINING_FILENAMES)
valid_dataset = get_dataset(VALID_FILENAMES)
test_dataset = get_dataset(TEST_FILENAMES, labeled=False)

image_batch, label_batch = next(iter(train_dataset))

def show_batch(image_batch, label_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = p...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Building our model Define callbacks The following function allows for the model to change the learning rate as it runs each epoch. We can use callbacks to stop training when there are no improvements in the model. At the end of the training process, the model will restore the weights of its best iteration.
initial_learning_rate = 0.01
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=20, decay_rate=0.96, staircase=True
)

checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    "melanoma_model.h5", save_best_only=True
)
early_stopping_cb = tf.keras.callbacks.EarlyStoppin...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Build our base model Transfer learning is a great way to reap the benefits of a well-trained model without having to train the model ourselves. For this notebook, we want to import the Xception model. A more in-depth analysis of transfer learning can be found here. We do not want our metric to be accuracy because our ...
def make_model():
    base_model = tf.keras.applications.Xception(
        input_shape=(*IMAGE_SIZE, 3), include_top=False, weights="imagenet"
    )
    base_model.trainable = False

    inputs = tf.keras.layers.Input([*IMAGE_SIZE, 3])
    x = tf.keras.applications.xception.preprocess_input(inputs)
    x = base_model...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Train the model
with strategy.scope():
    model = make_model()

history = model.fit(
    train_dataset,
    epochs=2,
    validation_data=valid_dataset,
    callbacks=[checkpoint_cb, early_stopping_cb],
)
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Predict results We'll use our model to predict results for our test dataset images. Values closer to 0 are more likely to be benign and values closer to 1 are more likely to be malignant.
def show_batch_predictions(image_batch):
    plt.figure(figsize=(10, 10))
    for n in range(25):
        ax = plt.subplot(5, 5, n + 1)
        plt.imshow(image_batch[n] / 255.0)
        img_array = tf.expand_dims(image_batch[n], axis=0)
        plt.title(model.predict(img_array)[0])
        plt.axis("off")

image_ba...
examples/keras_recipes/ipynb/tfrecord.ipynb
keras-team/keras-io
apache-2.0
Show finite length capacity estimates for some codes of different lengths $n$
esno_dB_range = np.linspace(-4, 3, 100)
esno_lin_range = [10**(esno_db/10) for esno_db in esno_dB_range]

# compute sigma_n
sigman_range = [np.sqrt(1/2/esno_lin) for esno_lin in esno_lin_range]

capacity_BIAWGN = [C_BIAWGN(sigman) for sigman in sigman_range]
Pe_BIAWGN_r12_n100 = [get_Pe_finite_length(100, 0.5, sigman) f...
ccgbc/ch2_Codes_Basic_Concepts/BiAWGN_Capacity_Finitelength.ipynb
kit-cel/wt
gpl-2.0
A different representation: for a given channel (here we pick $E_s/N_0 = -2.83$ dB), show the rate the code should at most have to allow decoding with an error rate $P_e$ (we specify several values of $P_e$) when a certain length $n$ is available.
# specify esno
esno = -2.83
n_range = np.linspace(10, 2000, 100)

sigman = np.sqrt(0.5*10**(-esno/10))
C = C_BIAWGN(sigman)
V = V_BIAWGN_GH(C, sigman)

r_Pe_1em3 = [C - np.sqrt(V/n)*norm.isf(1e-3) + 0.5*np.log2(n)/n for n in n_range]
r_Pe_1em6 = [C - np.sqrt(V/n)*norm.isf(1e-6) + 0.5*np.log2(n)/n for n in n_range]
r_...
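The normal-approximation formula used in that cell, $r \approx C - \sqrt{V/n}\,Q^{-1}(P_e) + \tfrac{\log_2 n}{2n}$, can be evaluated with the stdlib alone if we plug in illustrative values for the capacity $C$ and dispersion $V$ (the notebook computes the real ones with C_BIAWGN and V_BIAWGN_GH) and the constant $Q^{-1}(10^{-3}) \approx 3.0902$:

```python
import math

# illustrative channel parameters (NOT the notebook's computed values)
C = 0.5  # capacity in bits per channel use
V = 0.3  # channel dispersion

Q_inv_1em3 = 3.0902  # inverse Q-function at Pe = 1e-3 (norm.isf(1e-3))

def achievable_rate(n):
    """Normal approximation to the best achievable rate at blocklength n."""
    return C - math.sqrt(V / n) * Q_inv_1em3 + 0.5 * math.log2(n) / n

for n in (100, 500, 2000):
    print(n, round(achievable_rate(n), 4))
```

The back-off term $\sqrt{V/n}\,Q^{-1}(P_e)$ shrinks as $n$ grows, so the achievable rate climbs toward the capacity $C$ from below, which is exactly the behavior the plot displays.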
ccgbc/ch2_Codes_Basic_Concepts/BiAWGN_Capacity_Finitelength.ipynb
kit-cel/wt
gpl-2.0
Moral For sparse data representations:
- the computational time is smaller
- but the computational efficiency is also smaller!

Possible solutions
- Use blocking
- Use block matrix-by-vector products (multiply at once)

What are FastPDE methods about? They are typically methods for large sparse linear systems. These systems ...
%matplotlib inline
from __future__ import print_function
import fenics
import matplotlib.pyplot as plt
import dolfin
import mshr
import math

domain_vertices = [dolfin.Point(0.0, 0.0),
                   dolfin.Point(10.0, 0.0),
                   dolfin.Point(10.0, 2.0),
                   dolfin.Point(8.0, 2.0),
                   ...
lectures/Lecture-6.ipynb
oseledets/fastpde2017
mit
FVM demo using FiPy To install it for Python 2, run: conda create --name <MYFIPYENV> --channel guyer --channel conda-forge fipy nomkl
%matplotlib inline
import fipy

cellSize = 0.05
radius = 1.

mesh = fipy.Gmsh2D('''
    cellSize = %(cellSize)g;
    radius = %(radius)g;
    Point(1) = {0, 0, 0, cellSize};
    Point(2) = {-radius, 0, 0, cellSize};
    Point(3) = {0, radius, 0, cellSize};
    ...
lectures/Lecture-6.ipynb
oseledets/fastpde2017
mit
Train Model We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the":
# train:
for i in range(10000):
    error = model.update_fun(numerical_lines, numerical_lengths)
    if i % 100 == 0:
        print("epoch %(epoch)d, error=%(error).2f" % ({"epoch": i, "error": error}))
    if i % 500 == 0:
        print(vocab(model.greedy_fun(vocab.word2index["the"])))
Tutorial.ipynb
caiyunapp/theano_lstm
bsd-3-clause
Load default settings
active_ws = core.WorkSpace() active_ws.show_settings() active_ws.first_filter.set_filter('TYPE_AREA', '2') active_ws.show_settings()
.ipynb_checkpoints/lv_notebook_workspace_test-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Select Directories and file paths
def return_input(value): return value start_year = interactive(return_input, value = widgets.Dropdown( options=[2009, 2010, 2011, 2012, 2013], value=2009, description='Select start year:', disabled=False) ) end_year = interactive(return_input, value = widge...
.ipynb_checkpoints/lv_notebook_workspace_test-checkpoint.ipynb
ekostat/ekostat_calculator
mit
Step 2: Doing a thing to the data Here we should enumerate the goals of the pre-processing steps that need to be taken initially. Whether this is organization or documentation of the data, or computing some transformation, this step is generally taking the fresh, "raw"-ish data you provided and the user is expected to h...
import os import numpy as np files = os.listdir(datadir) # get a list of all files in the dataset print('X image size: ', im.shape[1]) # second dimension is X in our png print('Y image size: ', im.shape[0]) # first dimension is Y in our png print('Z image size: ', len(files)) # we get Z by counting the number of ima...
tutorialguide.ipynb
neurostorm/tutorials
apache-2.0
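Once the per-file X/Y sizes and the slice count Z are known, the individual 2-D images are typically stacked into a single 3-D volume. A sketch with random arrays standing in for the loaded PNGs (assumption: all slices share the same shape):

```python
import numpy as np

# Stack a list of 2-D slices (one per file) into one 3-D volume.
slices = [np.random.rand(4, 5) for _ in range(3)]  # 3 files of size Y=4, X=5
volume = np.stack(slices, axis=0)                  # resulting shape is (Z, Y, X)
print(volume.shape)  # (3, 4, 5)
```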
It's also important to summarize what we've done so that the user can follow along. Summarizing these results and those that require more intimate knowledge of the data, we come up with the following: | property | value | |:--------- |:------ | | dataset name | kki2009_demo | | x si...
print("more code here, as always")
tutorialguide.ipynb
neurostorm/tutorials
apache-2.0
Then we can start training the reward model. Note that we need to specify the total number of timesteps for which the agent should be trained and how many fragment comparisons should be made.
pref_comparisons.train( total_timesteps=1000, # Note: set to 40000 to achieve sensible results total_comparisons=120, # Note: set to 4000 to achieve sensible results )
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
After we trained the reward network using the preference comparisons algorithm, we can wrap our environment with that learned reward.
from imitation.rewards.reward_wrapper import RewardVecEnvWrapper learned_reward_venv = RewardVecEnvWrapper(venv, reward_net.predict)
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Now we can train an agent that only sees the learned reward.
from stable_baselines3 import PPO from stable_baselines3.ppo import MlpPolicy learner = PPO( policy=MlpPolicy, env=learned_reward_venv, seed=0, batch_size=64, ent_coef=0.0, learning_rate=0.0003, n_epochs=10, n_steps=64, ) learner.learn(1000) # Note: set to 100000 to train a proficient ...
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Then we can evaluate it using the original reward.
from stable_baselines3.common.evaluation import evaluate_policy reward, _ = evaluate_policy(learner.policy, venv, 10) print(reward)
examples/5_train_preference_comparisons.ipynb
HumanCompatibleAI/imitation
mit
Declare DataLoader(s) The next step is to declare the DataLoader class that deals with input variables. Define the input variables that shall be used for the MVA training; note that you may also use variable expressions, which can be parsed by TTree::Draw("expression"). In this case the input data consists of an image...
inputFileName = "images_data.root" inputFile = ROOT.TFile.Open( inputFileName ) # retrieve input trees signalTree = inputFile.Get("sig_tree") backgroundTree = inputFile.Get("bkg_tree") signalTree.Print() loader = ROOT.TMVA.DataLoader("dataset") ### global event weights per tree (see below for setting event-wi...
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Setup Dataset(s) Define input data file and signal and background trees
## Apply additional cuts on the signal and background samples (can be different) mycuts = ROOT.TCut("") ## for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1"; mycutb = ROOT.TCut("") ## for example: TCut mycutb = "abs(var1)<0.5"; loader.PrepareTrainingAndTestTree( mycuts, mycutb, ...
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Booking Methods Here we book the TMVA methods: a DNN and a CNN. Booking Deep Neural Network Here we book the new DNN of TMVA. If using the master version, you can use the new DL method
inputLayoutString = "InputLayout=1|1|64"; batchLayoutString= "BatchLayout=1|32|64"; layoutString = ("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR") training1 = "Optimizer=ADAM,LearningRate=1e-3,Momentum=0.,Regularization=None,WeightDecay=1e-4," training1 += "DropConfig=0.+0.+0.+0.,Ma...
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Book Convolutional Neural Network in TMVA
#input layout inputLayoutString = "InputLayout=1|8|8" ## Batch Layout batchLayoutString =...
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Book Convolutional Neural Network in Keras using a generated model
## to use tensorflow backend import os ##os.environ["KERAS_BACKEND"] = "tensorflow" import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import Adam, SGD from tensorflow.keras.initializers import TruncatedNormal from tensorflow.keras.layers import Input, Dense, Dropou...
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Test and Evaluate Methods
factory.TestAllMethods(); factory.EvaluateAllMethods();
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Plot ROC Curve We enable JavaScript visualisation for the plots
# %jsroot on c1 = factory.GetROCCurve(loader) c1.Draw() ## close the output file to save the results outputFile.Close()
NCPSchool2021/Examples/TMVA_CNN_Classification_py.ipynb
root-mirror/training
gpl-2.0
Operations on Tensors Variables and Constants Tensors in TensorFlow are either constant (tf.constant) or variables (tf.Variable). Constant values cannot be changed, while variable values can be. The main difference is that instances of tf.Variable have methods allowing us to change their values while tensors construc...
x = tf.constant([2, 3, 4]) x x = tf.Variable(2.0, dtype=tf.float32, name="my_variable") x.assign(45.8) # TODO 1 x x.assign_add(4) # TODO 2 x x.assign_sub(3) # TODO 3 x
notebooks/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Training Loop Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
STEPS = 1000 LEARNING_RATE = 0.02 MSG = "STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\n" w0 = tf.Variable(0.0) w1 = tf.Variable(0.0) for step in range(0, STEPS + 1): dw0, dw1 = compute_gradients(X, Y, w0, w1) w0.assign_sub(dw0 * LEARNING_RATE) w1.assign_sub(dw1 * LEARNING_RATE) if step % 100 == ...
notebooks/introduction_to_tensorflow/solutions/1_core_tensorflow.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
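The `compute_gradients` helper is not shown in this excerpt; for a linear model $\hat y = w_0 + w_1 x$ with mean-squared-error loss it could look like the plain-NumPy sketch below (an assumption about the notebook's setup, not its actual code):

```python
import numpy as np

def compute_gradients(X, Y, w0, w1):
    """Gradients of mean squared error for y_hat = w0 + w1 * x."""
    error = (w0 + w1 * X) - Y
    dw0 = 2 * np.mean(error)       # d(MSE)/d(w0)
    dw1 = 2 * np.mean(error * X)   # d(MSE)/d(w1)
    return dw0, dw1

# Sanity check: data generated from w0=1, w1=2 gives zero gradients at the optimum.
X = np.array([0.0, 1.0, 2.0, 3.0])
Y = 1.0 + 2.0 * X
print(compute_gradients(X, Y, 1.0, 2.0))  # (0.0, 0.0)
```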
Violations of graphical excellence and integrity Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information. CNN Fox News Time Upload the image for the visualization to this directory and displa...
Image(filename='badgraphic.png') # Second picture should be attached now
assignments/assignment04/TheoryAndPracticeEx02.ipynb
brettavedisian/phys202-2015-work
mit
Certain classifiers in scikit-learn can also return the probability of a predicted class label via the predict_proba method. Using the predicted class probabilities instead of the class labels for majority voting can be useful if the classifiers in our ensemble are well calibrated. Let's assume that our classifiers c1,...
ex = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6]]) p = np.average(ex, axis=0, weights=[0.2, 0.2, 0.6]) p np.argmax(p)
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Putting everything together, let's now implement a MajorityVoteClassifier in Python:
from sklearn.base import BaseEstimator from sklearn.base import ClassifierMixin from sklearn.preprocessing import LabelEncoder from sklearn.externals import six from sklearn.base import clone from sklearn.pipeline import _name_estimators import numpy as np import operator class MajorityVoteClassifier(BaseEstimator, ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
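The heart of such a classifier is the vote itself. Isolated from the scikit-learn machinery, the hard-voting step can be sketched as the hypothetical helper below (`hard_vote` is my name, not part of the class above), doing a weighted plurality vote over per-classifier label predictions:

```python
import numpy as np

def hard_vote(predictions, weights=None):
    """Weighted plurality vote.
    predictions: array of shape (n_classifiers, n_samples) of integer labels."""
    predictions = np.asarray(predictions)
    return np.apply_along_axis(
        lambda col: np.argmax(np.bincount(col, weights=weights)),
        axis=0, arr=predictions)

preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 0]])  # three classifiers, three samples
print(hard_vote(preds))  # [0 1 0]
```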
A lot of comments are added to the code to better understand the individual parts. However, before we implement the remaining methods, let's take a quick break and discuss some of the code that may look confusing at first. We used the parent classes BaseEstimator and ClassifierMixin to get some base functionality for f...
import os from sklearn.tree import DecisionTreeClassifier, export_graphviz import pandas as pd import numpy as np from sklearn.cross_validation import train_test_split from sklearn import cross_validation, metrics from sklearn.ensemble import RandomForestClassifier from sklearn.naive_bayes import BernoulliNB from sklea...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Thus we have our dataset. But we want our task to be a binary classification task, i.e., we would like to classify whether the patient has a heart disease or not. However, our target variable 'num' contains 5 values: 0, 1, 2, 3, 4. We would simply attempt to distinguish presence (values 1, 2, 3, 4) from absence (value 0). We can m...
count0 = 0 for z in df['num']: if z == 0: count0 = count0 + 1 print (count0) for v in df['num']: if v != 0 : df['num'].replace(v,1,inplace=True) count0 = 0 for z in df['num']: if z == 0: count0 = count0 + 1 print (count0) count0 = 0 for z in df['num']: if z != 0: count...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
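The counting loop above works, but the replacement can be done in one vectorized step; with pandas, `df['num'] = (df['num'] != 0).astype(int)` would do the same. A sketch of the idea on a toy NumPy array:

```python
import numpy as np

num = np.array([0, 2, 1, 0, 4, 3, 0])  # toy stand-in for df['num']
binary = (num != 0).astype(int)        # presence (1-4) -> 1, absence (0) -> 0
print(binary)  # [0 1 1 0 1 1 0]
```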
Our data contains 6 rows with missing values. These values are represented by "?". So first we replace these "?" with NaN and then drop all rows which contain NaNs. We can simply achieve this by doing the following:
df.replace("?",np.NaN,inplace=True) df.dropna(axis=0, inplace=True, how='any') df = df.reset_index(drop=True) df.head()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
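The same replace-then-drop logic, sketched without pandas on a toy string array (the values here are made up for illustration):

```python
import numpy as np

raw = np.array([["63", "1"],
                ["?",  "0"],
                ["45", "?"],
                ["52", "1"]])
# keep only rows without a "?" marker, then convert the survivors to numbers
clean = raw[~np.any(raw == "?", axis=1)].astype(float)
print(clean.shape)  # (2, 2)
```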
Now we can move on to classification of our data.
features = df.columns[:-1] features X = df[features] y = df['num'] y.unique()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit