6.1 Sky Models<a id='deconv:sec:skymodels'></a> Before we dive into deconvolution methods we need to introduce the concept of a sky model. Since we make an incomplete sampling of the visibilities at limited resolution, we do not recover the 'true' sky from an observation. The dirty image is the 'true' sky convol...
fig = plt.figure(figsize=(16, 7)) gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits', \ figure=fig, subplot=[0.0,0.1,0.35,0.8]) gc1.show_colorscale(vmin=-0.1, vmax=1.0, cmap='viridis') gc1.hide_axis_labels() gc1.hide_tick_labels() plt.title('Sky...
6_Deconvolution/6_1_sky_models.ipynb
landmanbester/fundamentals_of_interferometry
gpl-2.0
Left: a point-source sky model of a field of sources with various intensities. Right: PSF response of KAT-7 for a 6 hour observation at a declination of $-30^{\circ}$. By convolving the ideal sky with the array PSF we are effectively recreating the dirty image. The figure on the left below shows the sky model convolved...
fig = plt.figure(figsize=(16, 5)) fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-model.fits') skyModel = fh[0].data fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-psf.fits') psf = fh[0].data fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10M...
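The recreation of the dirty image above relies on a 2D convolution; a minimal FFT-based sketch of that step (the function name `convolve_fft` and the random PSF are illustrative assumptions, not part of the notebook):

```python
import numpy as np

def convolve_fft(sky, psf):
    """Circular 2D convolution of a sky model with a PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(psf)))

# A single unit point source at the origin reproduces the PSF exactly.
sky = np.zeros((64, 64))
sky[0, 0] = 1.0
psf = np.random.rand(64, 64)
dirty = convolve_fft(sky, psf)
```

Because the FFT implements circular convolution, real imaging code pads the images before transforming; this sketch skips that detail.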
Left: the point-source sky model convolved with the KAT-7 PSF with the residual image added. Centre: the original dirty image. Right: the difference between the PSF-convolved sky model and the dirty image. Now that we have seen that we can recreate the dirty image from a sky model and the array PSF, we just need to learn how to do...
fig = plt.figure(figsize=(16, 7)) gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \ figure=fig, subplot=[0.1,0.1,0.35,0.8]) gc1.show_colorscale(vmin=-0.8, vmax=3., cmap='viridis') gc1.hide_axis_labels() gc1.hide_tick_labels() plt.title('R...
Left: residual image after running a CLEAN deconvolution. Right: restored image constructed by convolving the point-source sky model with an 'ideal' PSF and adding the residual image. Deconvolution, as can be seen in the figure on the left, builds a sky model by subtracting sources from the dirty image and adding them ...
def gauss2d(sigma):
    """Return a normalized 2D Gaussian function; sigma: size in pixels."""
    return lambda x, y: (1. / (2. * np.pi * sigma**2.)) * np.exp(-(x**2. + y**2.) / (2. * sigma**2.))

imgSize = 512
xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float)
xpos -= imgSize / 2.
ypos -= imgSize / 2.
sigma...
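The subtract-and-rebuild loop described above can be sketched as a toy Högbom CLEAN; this is a minimal illustration under my own assumptions (square unit-peak PSF, point sources only), not the notebook's implementation:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=200, threshold=1e-3):
    """Toy Hogbom CLEAN: repeatedly subtract a scaled, shifted PSF at the residual peak."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = psf.shape[0] // 2  # assumes a square PSF peaking at its centre
    for _ in range(niter):
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        if np.abs(residual[py, px]) < threshold:
            break
        flux = gain * residual[py, px]
        model[py, px] += flux  # grow the point-source sky model
        # subtract the PSF centred on the peak, clipped to the image bounds
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                y, x = py - half + dy, px - half + dx
                if 0 <= y < residual.shape[0] and 0 <= x < residual.shape[1]:
                    residual[y, x] -= flux * psf[dy, dx]
    return model, residual

# One point source blurred by a Gaussian PSF: CLEAN should shrink the residual.
yy, xx = np.mgrid[-15:16, -15:16].astype(float)
psf = np.exp(-(xx**2 + yy**2) / (2. * 3.**2))  # peak of 1 at the centre
dirty = np.zeros((64, 64))
dirty[20:51, 20:51] = psf  # PSF response of a source at (35, 35)
model, residual = hogbom_clean(dirty, psf)
```

The loop gain of 0.1 means each iteration removes only a fraction of the peak, which is what makes CLEAN robust to PSF sidelobes overlapping between sources.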
All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a str...
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[["Lottery", "Literacy", "Wealth", "Region"]].dropna()
df.head()
v0.13.1/examples/notebooks/generated/formulas.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Fit the model:
mod = ols(formula="Lottery ~ Literacy + Wealth + Region", data=df)
res = mod.fit()
print(res.summary())
Categorical variables Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories. If Region had been an integer var...
res = ols(formula="Lottery ~ Literacy + Wealth + C(Region)", data=df).fit()
print(res.params)
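For intuition, treatment coding with one level dropped can be mimicked with `pandas.get_dummies`; this is only an illustration of the idea, not what patsy does internally (the Series below is made up):

```python
import pandas as pd

region = pd.Series(["E", "N", "C", "E", "N"], name="Region")
# With an intercept in the model, one level (here "C") is absorbed into it,
# so only the remaining levels get their own dummy columns.
dummies = pd.get_dummies(region, prefix="Region", drop_first=True)
print(dummies.columns.tolist())  # ['Region_E', 'Region_N']
```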
Patsy's more advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables. Operators We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing variables The "-" ...
res = ols(formula="Lottery ~ Literacy + Wealth + C(Region) - 1", data=df).fit()
print(res.params)
Multiplicative interactions ":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
res1 = ols(formula="Lottery ~ Literacy : Wealth - 1", data=df).fit()
res2 = ols(formula="Lottery ~ Literacy * Wealth - 1", data=df).fit()
print(res1.params, "\n")
print(res2.params)
Many other things are possible with operators. Please consult the patsy docs to learn more. Functions You can apply vectorized functions to the variables in your model:
res = smf.ols(formula="Lottery ~ np.log(Literacy)", data=df).fit()
print(res.params)
Define a custom function:
def log_plus_1(x):
    return np.log(x) + 1.0

res = smf.ols(formula="Lottery ~ log_plus_1(Literacy)", data=df).fit()
print(res.params)
Any function that is in the calling namespace is available to the formula. Using formulas with models that do not (yet) support them Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices can then be fed to the fitting funct...
import patsy

f = "Lottery ~ Literacy * Wealth"
y, X = patsy.dmatrices(f, df, return_type="matrix")
print(y[:5])
print(X[:5])
To generate pandas data frames:
f = "Lottery ~ Literacy * Wealth"
y, X = patsy.dmatrices(f, df, return_type="dataframe")
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary())
Get endpoint, host headers, and load the image from a file or from the MNIST dataset.
print('************************************************************') print('************************************************************') print('************************************************************') print("starting query") if len(sys.argv) < 3: raise Exception("No endpoint specified. ") endpoint = sys....
docs/samples/explanation/aix/mnist/query_explain.ipynb
kubeflow/kfserving-lts
apache-2.0
Display the input image to be used.
fig0 = (inputs[:, :, 0] + 0.5) * 255
f, axarr = plt.subplots(1, 1, figsize=(10, 10))
axarr.set_title("Original Image")
axarr.imshow(fig0, cmap="gray")
plt.show()
Send the image to the inferenceservice.
print("Sending Explain Query")
x = time.time()
res = requests.post(endpoint, json=input_image, headers=headers)
print("TIME TAKEN: ", time.time() - x)
Unwrap the response from the inferenceservice and display the explanations.
print(res) if not res.ok: res.raise_for_status() res_json = res.json() temp = np.array(res_json["explanations"]["temp"]) masks = np.array(res_json["explanations"]["masks"]) top_labels = np.array(res_json["explanations"]["top_labels"]) fig, m_axs = plt.subplots(2,5, figsize = (12,6)) for i, c_ax in enumerate(m_axs....
1 Introduction Scenario There are 8 schools in Neighborhood Y of City X and a total of 100 microscopes for the biology classes at the 8 schools, though the microscopes are not evenly distributed across the locations. Since last academic year there has been a significant enrollment shift in the neighborhood, and at 4 of...
supply_schools = [1, 6, 7, 8]
demand_schools = [2, 3, 4, 5]
notebooks/transportation-problem.ipynb
pysal/spaghetti
bsd-3-clause
Amount of supply and demand at each location (indexed by supply_schools and demand_schools)
amount_supply = [20, 30, 15, 35]
amount_demand = [5, 45, 10, 40]
Solution class
class TransportationProblem: def __init__( self, supply_nodes, demand_nodes, cij, si, dj, xij_tag="x_%s,%s", supply_constr_tag="supply(%s)", demand_constr_tag="demand(%s)", solver="cbc", display=True, ): """Instantia...
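Independently of the class above, the same balanced transportation LP can be sketched with `scipy.optimize.linprog` (the cost matrix below is made up for illustration; the supplies and demands match the scenario):

```python
import numpy as np
from scipy.optimize import linprog

si = [20, 30, 15, 35]  # supply at schools 1, 6, 7, 8
dj = [5, 45, 10, 40]   # demand at schools 2, 3, 4, 5
cij = np.array([[4., 5., 6., 8.],   # hypothetical shipping costs c_ij
                [6., 4., 3., 5.],
                [5., 8., 4., 7.],
                [7., 6., 5., 3.]])
m, n = cij.shape

A_eq, b_eq = [], []
for i in range(m):                     # ship exactly s_i from supplier i
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(si[i])
for j in range(n):                     # receive exactly d_j at school j
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(dj[j])

res = linprog(cij.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(m, n)  # x[i, j] = microscopes shipped from i to j
```

Because total supply equals total demand (100 microscopes), the equality constraints are feasible; an unbalanced problem would need inequality constraints instead.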
Plotting helper functions and constants. Note: colors mark originating shipments.
shipping_colors = ["maroon", "cyan", "magenta", "orange"] def obs_labels(o, b, s, col="id", **kwargs): """Label each point pattern observation.""" def _lab_loc(_x): """Helper for labeling observations.""" return _x.geometry.coords[0] if o.index.name != "schools": X = o.index.name[...
Streets
streets = geopandas.read_file(examples.get_path("streets.shp"))
streets.crs = "esri:102649"
streets = streets.to_crs("epsg:2762")
Schools
schools = geopandas.read_file(examples.get_path("schools.shp"))
schools.index.name = "schools"
schools.crs = "esri:102649"
schools = schools.to_crs("epsg:2762")
Schools - supply nodes
schools_supply = schools[schools["POLYID"].isin(supply_schools)]
schools_supply.index.name = "supply"
schools_supply
Schools - demand nodes
schools_demand = schools[schools["POLYID"].isin(demand_schools)]
schools_demand.index.name = "demand"
schools_demand
Instantiate a network object
ntw = spaghetti.Network(in_data=streets)
vertices, arcs = spaghetti.element_as_gdf(ntw, vertices=True, arcs=True)
Plot
# plot network base = arcs.plot(linewidth=3, alpha=0.25, color="k", zorder=0, figsize=(10, 10)) vertices.plot(ax=base, markersize=2, color="red", zorder=1) # plot observations schools.plot(ax=base, markersize=5, color="k", zorder=2) schools_supply.plot(ax=base, markersize=100, alpha=0.25, color="b", zorder=2) schools_d...
Associate both the supply and demand schools with the network and plot
ntw.snapobservations(schools_supply, "supply") supply = spaghetti.element_as_gdf(ntw, pp_name="supply") supply.index.name = "supply" supply_snapped = spaghetti.element_as_gdf(ntw, pp_name="supply", snapped=True) supply_snapped.index.name = "supply snapped" supply_snapped ntw.snapobservations(schools_demand, "demand") ...
Calculate distance matrix while generating shortest path trees
s2d, tree = ntw.allneighbordistances("supply", "demand", gen_tree=True)
s2d[:3, :3]
list(tree.items())[:4], list(tree.items())[-4:]
3. The Transportation Problem Create decision variables for the supply locations and amount to be supplied
supply["dv"] = supply["id"].apply(lambda _id: "s_%s" % _id)
supply["s_i"] = amount_supply
supply
Create decision variables for the demand locations and amount to be received
demand["dv"] = demand["id"].apply(lambda _id: "d_%s" % _id)
demand["d_j"] = amount_demand
demand
Solve the Transportation Problem Note: shipping costs are in meters per microscope
s, d, s_i, d_j = supply["dv"], demand["dv"], supply["s_i"], demand["d_j"]
trans_prob = TransportationProblem(s, d, s2d, s_i, d_j)
Linear program (compare to its formulation in the Introduction)
trans_prob.print_lp()
Extract all network shortest paths
paths = ntw.shortest_paths(tree, "supply", "demand")
paths_gdf = spaghetti.element_as_gdf(ntw, routes=paths)
paths_gdf.head()
Extract the shipping paths
shipments = trans_prob.extract_shipments(paths_gdf, "id")
shipments
Plot optimal shipping schedule
# plot network base = arcs.plot(alpha=0.2, linewidth=1, color="k", figsize=(10, 10), zorder=0) vertices.plot(ax=base, markersize=1, color="r", zorder=2) # plot observations schools.plot(ax=base, markersize=5, color="k", zorder=2) supply.plot(ax=base, markersize=100, alpha=0.25, color="b", zorder=3) supply_snapped.plot(...
1. Create a vector of zeros of size 10 (np.zeros). 2. How much memory does the array occupy? 3. Create a vector of 10 zeros, except for the 5th element, which equals 4. 4. Create a vector of the consecutive numbers from 111 to 144 (np.arange). 5. Reverse the order of the elements of the vector. 6. Create a 4x4 matrix with the values from 0 to 15 (reshape). 7. Find the ind...
import numpy as np x = np.linspace(0,10,23) f = np.sin(x) %matplotlib inline import matplotlib.pyplot as plt plt.plot(x,f,'o-') plt.plot(4,0,'ro') # f1 = f[1:-1] * f[:] print(np.shape(f[:-1])) print(np.shape(f[1:])) ff = f[:-1] * f[1:] print(ff.shape) x_zero = x[np.where(ff < 0)] x_zero2 = x[np.where(ff < 0)[0] +...
ML_SS2017/Numpy_cwiczenia.ipynb
marcinofulus/teaching
gpl-3.0
9. Create a 3x3 matrix: the identity matrix (np.eye); a random one with the values 0, 1, 2. 10. Find the minimum value of the matrix and its index. 11. Find the mean deviation from the mean value for a vector.
Z = np.random.random(30)
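Exercise 11 (the mean deviation from the mean) follows directly from the vector above:

```python
import numpy as np

Z = np.random.random(30)
# mean absolute deviation from the mean of the vector
mean_dev = np.mean(np.abs(Z - Z.mean()))
```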
12. 2D grid. Create index arrays of the x and y coordinate values for the region $(-2,1)\times(-1,3)$. * Compute on them the values of the function $\sin(x^2+y^2)$ * Plot the result with imshow and contour
x = np.linspace(-2, 1, 64)
y = np.linspace(-1, 3, 64)
X, Y = np.meshgrid(x, y)
Z = np.sin(X**2 + Y**2)
plt.contourf(X, Y, Z)
Quantum Clustering with Schrödinger's equation Background This method starts off by creating a Parzen-window density estimation of the input data by associating a Gaussian with each point, such that $$ \psi (\mathbf{x}) = \sum ^N _{i=1} e^{- \frac{\left \| \mathbf{x}-\mathbf{x}_i \right \| ^2}{2 \sigma ^2}} $$ where $N...
def fineCluster2(xyData, pV, minD):
    n = xyData.shape[0]
    clust = np.zeros(n)
    # index of points sorted by potential
    sortedUnclust = pV.argsort()
    # index of unclustered point with lowest potential
    i = sortedUnclust[0]
    # first cluster index is 1
    clustInd = 1
    while np.min(clust) == 0:
        x = xyData[i]
        # euclidean di...
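The Parzen-window wave function $\psi$ defined above can be sketched directly; a minimal version independent of HornAlg (the data and $\sigma$ values below are illustrative):

```python
import numpy as np

def psi(x, data, sigma):
    """Parzen-window wave function: a sum of Gaussians centred on the datapoints."""
    d2 = np.sum((data - x) ** 2, axis=1)
    return np.sum(np.exp(-d2 / (2.0 * sigma ** 2)))

# Three well-separated 2D points: at any datapoint psi is at least 1
# (its own Gaussian), plus small contributions from the others.
data = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
```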
notebooks/Horn accuracy.ipynb
Chiroptera/QCThesis
mit
Iris The iris dataset (available at the UCI ML repository) has 3 classes with 50 datapoints each. There are 4 features. The data is preprocessed using PCA.
# load data #dataset='/home/chiroptera/workspace/datasets/iris/iris.csv' dataset='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' irisPCA=True normalize=False irisData=np.genfromtxt(dataset,delimiter=',') irisData_o=irisData[:,:-1] # remove classification column iN,iDims=irisData_o.shape # ...
I choose $\sigma=\frac{1}{4}$ to reproduce the experiments in [3]. We use only the first two PC here. For more complete results the algorithm is also executed using all PC.
#%%timeit
sigma = 0.25
steps = 80
irisD1, iV1, iE = HornAlg.graddesc(irisData_c[:, 0:2], sigma=sigma, steps=steps)

#%%timeit
sigma = 0.9
steps = 80
irisD2, iV2, iE = HornAlg.graddesc(irisData_c, sigma=sigma, steps=steps)
Comments The results shown above distinguish cluster assignment by colour. However, the colours might not be consistent throughout all figures. They serve only as a visual way to see how similar clusters are. This is due to the cluster assignment algorithm being used. Two methods may be used and differ only in the orde...
dist=1.8 irisClustering=HornAlg.fineCluster(irisD1,dist)#,potential=iV) print 'Number of clusters:',max(irisClustering) iFig2=plt.figure(figsize=(16,12)) iAx1=iFig2.add_subplot(2,2,1) iAx2=iFig2.add_subplot(2,2,2) iAx3=iFig2.add_subplot(2,2,4) iAx1.set_title('Final quantum system') iAx1.set_xlabel('PC1') iAx1.set_yl...
Turning to the results, in the first case (clustering on the first 2 PCs), the results show the clustering algorithm was able to cluster well one of the clusters (the one that is linearly separable from the other two) but struggled with outliers present in the space of the other 2 clusters. Furthermore, the separation b...
dist=4.5 irisClustering=HornAlg.fineCluster(irisD2,dist,potential=iV2) print 'Number of clusters:',max(irisClustering) iFig2=plt.figure(figsize=(16,6)) iAx1=iFig2.add_subplot(1,2,1) iAx2=iFig2.add_subplot(1,2,2) #iAx3=iFig2.add_subplot(2,2,4) iAx1.set_title('Final quantum system') iAx1.set_xlabel('PC1') iAx1.set_yla...
In this case, we use all PCs. In the final quantum system, the number of minima is the same. However, some of the minima are very close to others and have fewer datapoints assigned, which suggests that they might be local minima and should probably be annexed to the bigger minima close by. Once again the outliers were not ...
crabsPCA=True crabsNormalize=False crabs=np.genfromtxt('/home/chiroptera/workspace/datasets/crabs/crabs.dat') crabsData=crabs[1:,3:] # PCA if crabsPCA: ncrabsData1, cComps,cEigs=HornAlg.pcaFun(crabsData,whiten=True,center=False, method='eig',type='cov',normalize=crabsN...
We're visualizing the data projected on the second and third principal components to replicate the results presented in [3]. They use PCA with the correlation matrix. Below we can see the data in different representations. The closest representation of the data is using the covariance matrix with uncentered data (uncon...
cFig1=plt.figure(figsize=(16,12)) cF1Ax1=cFig1.add_subplot(2,2,1) cF1Ax2=cFig1.add_subplot(2,2,2) cF1Ax3=cFig1.add_subplot(2,2,3) cF1Ax4=cFig1.add_subplot(2,2,4) cF1Ax1.set_title('Original crab data') for i in range(len(crabsAssign)): cF1Ax1.plot(crabsData[i,2],crabsData[i,1],marker='.',c=tableau20[int(crabsAssign...
Cluster We're clustering according to the second and third PC to try to replicate [3], along with the same $\sigma$.
#%%timeit
sigma = 1.0 / sqrt(2)
steps = 80
crab2cluster = ncrabsData1
crabD, V, E = HornAlg.graddesc(crab2cluster[:, 1:3], sigma=sigma, steps=steps)
dist = 1
crabClustering = HornAlg.fineCluster(crabD, dist, potential=V)
print 'Number of clusters:', max(crabClustering)
print 'Unclustered points:', np.count_nonzero(crabClustering == 0)
cF...
The 'Final quantum system' shows how the points evolved over 80 steps. We can see that they all converged to 4 minima of the potential for $\sigma=\frac{1}{\sqrt{2}}$, making it easy to identify the number of clusters to choose. However, this is only clear when observing the results. The distance used to actually assign the p...
sigma = 1.0 / sqrt(2)
steps = 80
crab2cluster = ncrabsData3
crabD, V, E = HornAlg.graddesc(crab2cluster[:, 1:3], sigma=sigma, steps=steps)
#%%debug
dist = 1
crabClustering = HornAlg.fineCluster(crabD, dist, potential=V)
print 'Number of clusters:', max(crabClustering)
print 'Unclustered points:', np.count_nonzero(crabClustering == 0)
cFig2...
Using conventional PCA, clustering results are better. Other preprocessing Let's now consider clustering on data projected on all principal components (with centered data) and on original data.
#1.0/np.sqrt(2) sigma_allpc=0.5 steps_allpc=200 crabD_allpc,V_allpc,E=HornAlg.graddesc(ncrabsData1[:,:3],sigma=sigma_allpc,steps=steps_allpc) sigma_origin=1.0/sqrt(2) steps_origin=80 crabD_origin,V_origin,E=HornAlg.graddesc(crabsData,sigma=sigma_origin,steps=steps_origin) dist_allpc=12 dist_origin=15 crabClustering_...
The last experiments show considerably worse results. The final quantum system suggests a great number of minima and a bigger variance in the final convergence of the points. Furthermore, the distribution of the minima doesn't suggest any natural clustering to the user, contrary to what happened before. Th...
n_samples=400 n_features=5 centers=4 x_Gauss,x_assign=sklearn.datasets.make_blobs(n_samples=n_samples,n_features=n_features,centers=centers) #nX=sklearn.preprocessing.normalize(x_Gauss,axis=0) x_2cluster=x_Gauss gMix_fig=plt.figure() plt.title('Gaussian Mix, '+str(n_features)+' features') for i in range(x_Gauss.shape...
PCA Mix
pcaX,gaussComps,gaussEigs=HornAlg.pcaFun(x_Gauss,whiten=True,center=True, method='eig',type='cov',normalize=False) gPCAf=plt.figure() plt.title('PCA') for i in range(x_Gauss.shape[0]): plt.plot(pcaX[i,0],pcaX[i,1],marker='.',c=tableau20[int(x_assign[i])*2]) sigma=2. ste...
Testing params The following params are introduced to test the new param.imag parametrization by going back to three channels for the existing modelzoo models
def arbitrary_channels_to_rgb(*args, channels=None, **kwargs): channels = channels or 10 full_im = param.image(*args, channels=channels, **kwargs) r = tf.reduce_mean(full_im[...,:channels//3]**2, axis=-1) g = tf.reduce_mean(full_im[...,channels//3:2*channels//3]**2, axis=-1) b = tf.reduce_mean(full_...
notebooks/feature-visualization/any_number_channels.ipynb
tensorflow/lucid
apache-2.0
Arbitrary channels parametrization param.arbitrary_channels calls param.image and then reduces the arbitrary number of channels to 3 for visualizing with modelzoo models.
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:arbitrary_channels_to_rgb(128, channels=10))
Grayscale parametrization param.grayscale_image creates param.image with a single channel and then tiles them 3 times for visualizing with modelzoo models.
_ = render.render_vis(model, "mixed4a_pre_relu:476", param_f=lambda:grayscale_image_to_rgb(128))
Testing different objectives Different objectives applied to both parametrizations.
_ = render.render_vis(model, objectives.deepdream("mixed4a_pre_relu"), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10)) _ = render.render_vis(model, objectives.channel("mixed4a_pre_relu", 360), param_f=lambda:arbitrary_channels_to_rgb(128, channels=10)) _ = render.render_vis(model, objectives.neuron("mixed4a...
variable definitions figure directory
figure_directory = ""
figures/Figure 3 - noise removal.ipynb
jacobdein/alpine-soundscapes
mit
example recording 1
site1 = Site.objects.get(name='Höttinger Rain')
sound_db1 = Sound.objects.get(id=147)
example recording 2
site2 = Site.objects.get(name='Pfaffensteig')
sound_db2 = Sound.objects.get(id=158)
formatting
style.set_font()
remove noise: remove noise from example recordings 1 and 2 using the adaptive level equalization algorithm
# example recording 1 wave1 = Wave(sound_db1.get_filepath()) wave1.read() wave1.normalize() samples1 = wave1.samples[(100 * wave1.rate):(160 * wave1.rate)] duration = 60 f, t, a_pass = psd(samples1, rate=wave1.rate, window_length=512) ale_pass = remove_background_noise(a_pass, N=0.18, iterations=3) b_pass = remove_anth...
plot
# create figure figure3 = pyplot.figure() #figure3.subplots_adjust(left=0.04, bottom=0.12, right=0.96, top=0.97, wspace=0, hspace=0) figure3.subplots_adjust(left=0.04, bottom=0.04, right=0.96, top=0.99, wspace=0, hspace=0) figure3.set_figwidth(6.85) figure3.set_figheight(9.21) # specify frequency bins (width of 1 kilo...
save figure
#figure3.savefig(path.join(figure_directory, "figure3.png"), dpi=300)
Line plot
ts = np.linspace(0, 16 * np.pi, 1000)
xs = np.sin(ts)
ys = np.cos(ts)
zs = ts
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot(xs, ys, zs, zdir='z')
3D_Plots_01.ipynb
AdrianoValdesGomez/Master-Thesis
cc0-1.0
This is the "canonical" way to generate a plot that will contain a curve in $\mathbb{R}^3$. The xs, ys, zs are the coordinates of the curve; in this case they are given by numpy arrays. zdir refers to the direction that will be treated as the z direction in case a 2D plot is embedded in the same figure. Sca...
ts = np.linspace(0, 8 * np.pi, 1000)
xs = np.sin(ts)
ys = np.cos(ts)
zs = ts
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs, ys, zs, zdir='z', alpha=0.3)
Wireframe Plot In this case we need two-dimensional arrays for the xs and the ys; for that we use the meshgrid function, as follows
x = np.linspace(-1.5, 1.5, 100)
y = np.linspace(-1.5, 1.5, 100)
Xs, Ys = np.meshgrid(x, y)
Zs = np.sin(2 * Xs) * np.sin(2 * Ys)
fig = plt.figure(figsize=(5.9, 5.9))
ax = fig.add_subplot(111, projection='3d')
ax.plot_wireframe(Xs, Ys, Zs, rstride=3, cstride=3, alpha=0.4)
Quiver Plot
pts_x_ini = np.array([0]) pts_y_ini = np.array([0]) pts_z_ini = np.array([0]) pts_x_fin = np.array([0]) pts_y_fin = np.array([0]) pts_z_fin = np.array([1]) fig = plt.figure() ax = fig.add_subplot(111, projection = '3d') ax.quiver(0,0,0,0,0,10,length=1.0, arrow_length_ratio = .1) ax.set_xlim(-1,1) ax.set_ylim(-1,1) ax....
Vector Field
xc, yc, zc = np.meshgrid(np.arange(-0.8, 1, 0.2), np.arange(-0.8, 1, 0.2), np.arange(-0.8, 1, 0.8)) u = np.sin(np.pi * xc) * np.cos(np.pi * yc) * np.cos(np.pi * zc) v = -np.cos(np.pi * xc) * np.sin(np.pi * yc) * np.cos(np.pi * zc) w = (np.sqrt(2.0 / 3.0) * np.cos(np.pi * xc)...
Electrostatic vector field
xr,yr,zr = np.meshgrid(np.arange(-1,1,.1),np.arange(-1,1,.1),np.arange(-1,1,.1)) theta = np.linspace(0,np.pi,100) phi = np.linspace(0,2*np.pi,100) r = 1/np.sqrt(xr**2+yr**2+zr**2) fig = plt.figure() U,V,W = np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta) ax = fig.add_subplot(111,projection = '3d') a...
2D plots inside 3D plots
fig = plt.figure()
ax = fig.gca(projection='3d')
Ex = np.linspace(0, 2*np.pi, 100)
Ey = np.sin(Ex * 2 * np.pi) / 2 + 0.5
ax.plot(Ex, Ey, zs=0, zdir='z', label='zs=0, zdir=z')
Bx = np.linspace(0, 2*np.pi, 100)
By = np.sin(Bx * 2 * np.pi) / 2 + 0.5
ax.plot(Bx, By, zs=0, zdir='y', label='zs=0, zdir=y')
#colors = ('r...
Fill_Between in 3D plots
import math as mt
import matplotlib.pyplot as pl
import numpy as np
import random as rd
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# Parameter (reference height)
h = 0.0
# Code to generate the data
n = 200
alpha = 0.75 * mt.pi
theta = [alpha + 2.0 * mt.pi * (floa...
Putting text inside the plots
fig = plt.figure()
ax = fig.gca(projection='3d')
plt.rc('text', usetex=True)
zdirs = (None, 'x', 'y', 'z', (1, 1, 0), (1, 1, 1))
xs = (1, 4, 4, 9, 4, 1)
ys = (2, 5, 8, 10, 1, 2)
zs = (10, 3, 8, 9, 1, 8)
for zdir, x, y, z in zip(zdirs, xs, ys, zs):
    label = '(%d, %d, %d), dir=%s' % (x, y, z, zdir)
    ax.text(x, y, ...
Then we load the modules, some issues remain with matplotlib on Jupyter, but we'll fix them later.
# NN import keras # Descriptor (unused) import dscribe # Custom Libs import cpmd import filexyz # Maths import numpy as np from scipy.spatial.distance import cdist # Plots import matplotlib matplotlib.use('nbAgg') import matplotlib.pyplot as plt # Scalers from sklearn.preprocessing import StandardScaler, MinMaxScaler f...
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
Then we write some functions that are not yet on LibAtomicSim, but should be soon(ish)
def getDistance1Dsq(position1, position2, length):
    dist = position1 - position2
    half_length = length * 0.5
    if dist > half_length:
        dist -= length
    elif dist < -half_length:
        dist += length
    return dist * dist

def getDistanceOrtho(positions, index1, index2, cell_lengths):
    dist = 0
    for...
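The same minimum-image convention can also be written vectorized; this is my own sketch for an orthorhombic cell, equivalent in spirit to the loops above:

```python
import numpy as np

def minimum_image_distances(pos_a, pos_b, cell_lengths):
    """All pairwise distances under the minimum-image convention (orthorhombic cell)."""
    delta = pos_a[:, None, :] - pos_b[None, :, :]
    # wrap each component into [-L/2, L/2) before taking the norm
    delta -= cell_lengths * np.round(delta / cell_lengths)
    return np.sqrt((delta ** 2).sum(axis=-1))

# Two atoms 2 apart across the periodic boundary of a 10x10x10 cell.
a = np.array([[1.0, 0.0, 0.0]])
b = np.array([[9.0, 0.0, 0.0]])
d = minimum_image_distances(a, b, np.array([10.0, 10.0, 10.0]))
```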
Data Parameters
volume = 8.82
temperature = 3000
nb_type = 2
nbC = 32
nbO = 64
run_nb = 1
nb_atoms = nbC + nbO
path_sim = str("/Users/mathieumoog/Documents/CO2/" + str(volume) + "/" + str(temperature) + "K/" + str(run_nb) + "-run/")
Loading Trajectory Here we load the trajectory, including forces and velocities, and convert the positions back into angstroms, while the forces are still in a.u (although we could do everything in a.u.).
cell_lengths = np.ones(3) * volume
ftraj_path = str(path_sim + "FTRAJECTORY")
positions, velocities, forces = cpmd.readFtraj(ftraj_path, True)
nb_step = positions.shape[0]
bohr2ang = 0.529177  # Bohr-to-angstrom conversion factor
positions = positions * bohr2ang
for i in range(3):
    positions[:, :, i] = positions[:, :, i] % cell_lengths[i]
Data parametrization Setting up the parameters for the data construction.
sigma_C = 0.9
sigma_O = 0.9
size_data = nb_step * nbC
dx = 0.1
positions_offset = np.zeros((6, 3), dtype=float)
size_off = 6
n_features = int(2 * (size_off + 1))
for i, ival in enumerate(np.arange(0, size_off, 2)):
    positions_offset[ival, i] += dx
    positions_offset[ival + 1, i] -= dx
Building the complete data set, with the small caveat that, due to time constraints (for now at least), we do not load all of the positions.
max_step = 1000
start_step = 1000
stride = 10
size_data = max_step * nbC
data = np.zeros((max_step * nbC, size_off + 1, nb_type), dtype=float)
for step in np.arange(start_step, stride * max_step + start_step, stride):
    # Distance from all atoms (saves time?)
    matrix = computeDistanceMatrix(positions[step, :, :], cell_lengt...
Creating test and train set Here we focus on the carbon atoms, and we create the input and output shape of the data. The input is created by reshaping the positions array, while the output is simply the forces reshaped. Once this is done, we choose the train and test sets, making sure that there is no overlap between th...
nb_data_train = 30000
nb_data_test = 1000
size_data = max_step*nbC
if nb_data_train + nb_data_test > data.shape[0]:
    print("Datasets larger than amount of available data")
data = data.reshape( size_data, int(2*(size_off+1)) )
choice = np.random.choice( size_data, nb_data_train+nb_data_test, replace=False )
choice_tr...
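Because `np.random.choice` with `replace=False` draws distinct indices, splitting the single `choice` array into two pieces is guaranteed to give disjoint train/test index sets. A quick check with toy sizes:

```python
import numpy as np

# Draw 30 distinct indices out of 100, then split them 20/10
choice = np.random.choice(100, 30, replace=False)
choice_train = choice[:20]
choice_test = choice[20:]

# No index appears in both sets
print(len(set(choice_train) & set(choice_test)))  # 0
```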
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
Here we reshape the data and choose the points for the train and test sets, making sure that they do not overlap.
input_train = data[ choice_train ]
input_test = data[ choice_test ]
output_total = forces[ start_step:start_step+max_step*stride:stride, 0:nbC, 0 ].reshape( size_data, 1 )
output_train = output_total[ choice_train ]
output_test = output_total[ choice_test ]
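A small check, with toy shapes standing in for the real trajectory, that the strided slice above picks out exactly max_step frames (every stride-th step, starting at start_step) and one force component per carbon atom:

```python
import numpy as np

# Toy stand-ins for the real trajectory dimensions
max_step, stride, start_step, nbC = 4, 10, 5, 3
forces = np.arange(100 * nbC * 3).reshape(100, nbC, 3)

# Same slicing pattern as output_total above: frames 5, 15, 25, 35,
# first nbC atoms, x-component only
sliced = forces[start_step:start_step + max_step * stride:stride, 0:nbC, 0]
print(sliced.shape)  # (4, 3): max_step frames, nbC atoms
```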
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
Scaling input and output for the Neural Net
# Creating Scalers
scaler = []
scaler.append( StandardScaler() )
scaler.append( StandardScaler() )
# Fitting Scalers
scaler[0].fit( input_train )
scaler[1].fit( output_train )
# Scaling input and output
input_train_scale = scaler[0].transform( input_train )
input_test_scale = scaler[0].transform( input_test )...
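Since the targets are standardized, any prediction from the net comes out in scaled units and has to be mapped back before it can be compared to forces in a.u. A minimal sketch of that round trip with `StandardScaler` (toy numbers stand in for the real `output_train`):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy targets standing in for output_train (real forces are in a.u.)
output_train = np.array([[1.0], [2.0], [3.0], [4.0]])
scaler_out = StandardScaler().fit(output_train)

pred_scaled = scaler_out.transform(np.array([[2.5]]))  # what the net sees/predicts
pred = scaler_out.inverse_transform(pred_scaled)       # back to physical units
print(pred)  # [[2.5]]
```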
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
Neural Net Structure Here we set the NN parameters
# Iteration parameters
loss_fct = 'mean_squared_error'  # Loss function of the NN
optimizer = 'Adam'              # Optimizer for training the NN weights
learning_rate = 0.001
n_epochs = 5000                 # Number of epochs for optimization
patience = 100                  # Patience for converge...
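The patience parameter stops training once the validation loss has failed to improve for that many consecutive epochs. A minimal pure-Python sketch of the logic (the actual run would use a Keras `EarlyStopping` callback rather than this hand-rolled loop):

```python
def train_with_patience(losses, patience):
    """Return the epoch at which training stops, given per-epoch val losses."""
    best = float('inf')
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch  # stop: no improvement for `patience` epochs
    return len(losses) - 1

# Loss improves, then plateaus; with patience=2 we stop two epochs after the best.
print(train_with_patience([1.0, 0.8, 0.7, 0.75, 0.72, 0.71], patience=2))  # 4
```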
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
Here we create the neural net structure and compile it
# Individual net structure
force_net = keras.Sequential( name='force_net' )
#force_net.add( keras.layers.Dropout( dropout_rate_init ) )
for node in nodes:
    force_net.add( keras.layers.Dense( node, activation=activation_fct, kernel_constraint=keras.constraints.maxnorm(3) ) )
    #force_net.add( keras.layers.Dropout( drop...
Projects/Moog_2016-2019/CO2/CO2_NN/forces.ipynb
CondensedOtters/PHYSIX_Utils
gpl-3.0
The layout property can be shared between multiple widgets and assigned directly.
Button(description='Another button with the same layout', layout=b.layout)
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Description You may have noticed that the widget's length is shorter in the presence of a description. This is because the description is added inside of the widget's total length. You cannot change the width of the internal description field. If you need more flexibility to lay out widgets and captions, you should use a combi...
from ipywidgets import HBox, Label, IntSlider
HBox([Label('A too long description'), IntSlider()])
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Natural sizes, and arrangements using HBox and VBox Most of the core-widgets have:
- a natural width that is a multiple of 148 pixels
- a natural height of 32 pixels or a multiple of that number
- a default margin of 2 pixels

which will be the ones used when it is not specified in the layout attribute. This allows sim...
from ipywidgets import Button, HBox, VBox

words = ['correct', 'horse', 'battery', 'staple']
items = [Button(description=w) for w in words]
HBox([VBox([items[0], items[1]]), VBox([items[2], items[3]])])
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
LaTeX Widgets such as sliders and text inputs have a description attribute that can render LaTeX equations. The Label widget also renders LaTeX equations.
from ipywidgets import IntSlider, Label
IntSlider(description=r'$\int_0^t f$')
Label(value='$e=mc^2$')
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Number formatting Sliders have a readout field which can be formatted using Python's Format Specification Mini-Language. If the space available for the readout is too narrow for the string representation of the slider value, a different styling is applied to show that not all digits are visible. The Flexbox layout In f...
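The readout string of a slider (its readout_format attribute) accepts the same specs as Python's built-in format(); a few pure-Python examples of that mini-language:

```python
# The same format specs a slider's readout_format would accept
print(format(3.14159, '.2f'))  # '3.14'
print(format(1234567, ','))    # '1,234,567'
print(format(0.25, '.0%'))     # '25%'

# Usage on a slider (requires ipywidgets, so left as a comment here):
# from ipywidgets import FloatSlider
# FloatSlider(value=3.14159, readout_format='.2f')
```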
from ipywidgets import Layout, Button, Box

# override the default width of the button to 'auto' to let the button grow
items_layout = Layout(flex='1 1 auto', width='auto')

box_layout = Layout(display='flex',
                    flex_flow='column',
                    align_items='stretch',
                    ...
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Three buttons in an HBox. Items flex proportionally to their weight.
from ipywidgets import Layout, Button, Box

items = [
    Button(description='weight=1'),
    Button(description='weight=2', layout=Layout(flex='2 1 auto', width='auto')),
    Button(description='weight=1'),
]

box_layout = Layout(display='flex',
                    flex_flow='row',
                    align_items='s...
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
A more advanced example: a reactive form. The form is a VBox of width '50%'. Each row in the VBox is an HBox that justifies the content with space between.
from ipywidgets import Layout, Button, Box, FloatText, Textarea, Dropdown, Label, IntSlider

form_item_layout = Layout(
    display='flex',
    flex_flow='row',
    justify_content='space-between'
)

form_items = [
    Box([Label(value='Age of the captain'), IntSlider(min=40, max=60)], layout=form_item_layout),
    Box...
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
A more advanced example: a carousel.
from ipywidgets import Layout, Button, Box

item_layout = Layout(height='100px', min_width='40px')
items = [Button(layout=item_layout, description=str(i), button_style='warning') for i in range(40)]
box_layout = Layout(overflow_x='scroll',
                    border='3px solid black',
                    width='500px',
                    ...
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Predefined styles If you wish the styling of widgets to make use of colors and styles defined by the environment (to be consistent with e.g. a notebook theme), many widgets let you choose from a list of pre-defined styles. For example, the Button widget has a button_style attribute that may take 5 different values: 'pr...
from ipywidgets import Button
Button(description='Danger Button', button_style='danger')
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
The style attribute While the layout attribute only exposes layout-related CSS properties for the top-level DOM element of widgets, the style attribute is used to expose non-layout related styling attributes of widgets. However, the properties of the style attribute are specific to each widget type.
b1 = Button(description='Custom color')
b1.style.button_color = 'lightgreen'
b1
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Just like the layout attribute, widget styles can be assigned to other widgets.
b2 = Button()
b2.style = b1.style
b2
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
Widget styling attributes are specific to each widget type.
s1 = IntSlider(description='Blue handle')
s1.style.handle_color = 'lightblue'
s1
docs/source/examples/Widget Styling.ipynb
cornhundred/ipywidgets
bsd-3-clause
1) Fit the Model Using Data Augmentation Here is some code to set up some ImageDataGenerators. Run it, and then answer the questions below about it.
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

image_size = 224

# Specify the values for all arguments to data_generator_with_aug.
data_generator_with_aug = ImageDataGenerator(preprocessing_function=preprocess_input,
                                             ...
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
Kaggle/learntools
apache-2.0
Why do we need both a generator with augmentation and a generator without augmentation? After thinking about it, check out the solution below.
# Check your answer (Run this code cell to receive credit!)
q_1.solution()
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
Kaggle/learntools
apache-2.0
2) Choosing Augmentation Types ImageDataGenerator offers many types of data augmentation. For example, one argument is rotation_range. This rotates each image by a random amount that can be up to whatever value you specify. Would it be sensible to use automatic rotation for this problem? Why or why not?
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
Kaggle/learntools
apache-2.0
3) Code Fill in the missing pieces in the following code. We've supplied some boilerplate. You need to think about which ImageDataGenerator is used for each data source.
# Specify which type of ImageDataGenerator above is to load in training data
train_generator = data_generator_with_aug.flow_from_directory(
        directory = '../input/dogs-gone-sideways/images/train',
        target_size=(image_size, image_size),
        batch_size=12,
        class_mode='categorical')

# Specify wh...
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
Kaggle/learntools
apache-2.0
4) Did Data Augmentation Help? How could you test whether data augmentation improved your model accuracy?
# Check your answer (Run this code cell to receive credit!)
q_4.solution()
notebooks/deep_learning/raw/ex5_data_augmentation.ipynb
Kaggle/learntools
apache-2.0
Magnetic isochrones were computed earlier. Details can be found in this notebook entry on a small magnetic stellar grid. I'll focus on those computed with the Grevesse, Asplund, & Sauval (2007; henceforth GAS07) solar abundance distribution. Three ages will be examined: 5 Myr, 12 Myr, and 30 Myr, in line with the previ...
std_iso_05 = np.genfromtxt('files/dmestar_00005.0myr_z+0.00_a+0.00_gas07_t010.iso')
std_iso_12 = np.genfromtxt('files/dmestar_00012.0myr_z+0.00_a+0.00_gas07_t010.iso')
std_iso_30 = np.genfromtxt('files/dmestar_00030.0myr_z+0.00_a+0.00_gas07_t010.iso')
mag_iso_05 = np.genfromtxt('files/dmestar_00005.0myr_gas07_z+0.00_a...
Daily/20150729_young_magnetic_models.ipynb
gfeiden/Notebook
mit