jeiranj/gensim
docs/notebooks/deepir.ipynb
gpl-3.0
import re
contractions = re.compile(r"'|-|\"")
# all non alphanumeric
symbols = re.compile(r'(\W+)', re.U)
# single character removal
singles = re.compile(r'(\s\S\s)', re.I|re.U)
# separators (any whitespace)
seps = re.compile(r'\s+')

# cleaner (order matters)
def clean(text):
    text = text.lower()
    text = contractions.sub('', text)
    text = symbols.sub(r' \1 ', text)
    text = singles.sub(' ', text)
    text = seps.sub(' ', text)
    return text

# sentence splitter
alteos = re.compile(r'([!\?])')
def sentences(l):
    l = alteos.sub(r' \1 .', l).rstrip("(\.)*\n")
    return l.split(".")
"""
Explanation: Deep Inverse Regression with Yelp reviews
In this note we'll use gensim to turn the Word2Vec machinery into a document classifier, as in Document Classification by Inversion of Distributed Language Representations from ACL 2015.
Data and prep
First, download the data from the Yelp recruiting contest on Kaggle into the same directory as this note:
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_training_set.zip
* https://www.kaggle.com/c/yelp-recruiting/download/yelp_test_set.zip
You'll need to sign up for Kaggle. You can then unpack the data and grab the information we need.
We'll use an incredibly simple parser.
End of explanation
"""
from zipfile import ZipFile
import json

def YelpReviews(label):
    with ZipFile("yelp_%s_set.zip" % label, 'r') as zf:
        with zf.open("yelp_%s_set/yelp_%s_set_review.json" % (label, label)) as f:
            for line in f:
                rev = json.loads(line)
                yield {'y': rev['stars'],
                       'x': [clean(s).split() for s in sentences(rev['text'])]}
"""
Explanation: And put everything together in a review generator that provides tokenized sentences and the number of stars for every review.
End of explanation
"""
next(YelpReviews("test"))
"""
Explanation: For example:
End of explanation
"""
revtrain = list(YelpReviews("training"))
print(len(revtrain), "training reviews")

## and shuffle just in case they are ordered
import numpy as np
np.random.shuffle(revtrain)
"""
Explanation: Now, since the files are small, we'll just read everything into in-memory lists. It takes a minute ...
End of explanation
"""
def StarSentences(reviews, stars=[1, 2, 3, 4, 5]):
    for r in reviews:
        if r['y'] in stars:
            for s in r['x']:
                yield s
"""
Explanation: Finally, write a function to generate sentences -- ordered lists of words -- from reviews that have certain star ratings.
End of explanation
"""
from gensim.models import Word2Vec
import multiprocessing

## create a w2v learner
basemodel = Word2Vec(
    workers=multiprocessing.cpu_count(),  # use your cores
    iter=3)  # sweeps of SGD through the data; more is better
print(basemodel)
"""
Explanation: Word2Vec modeling
We fit an out-of-the-box Word2Vec model.
End of explanation
"""
basemodel.build_vocab(StarSentences(revtrain))
"""
Explanation: Build the vocabulary from all sentences (you could also pre-train the base model from a neutral or un-labeled vocabulary).
End of explanation
"""
from copy import deepcopy
starmodels = [deepcopy(basemodel) for i in range(5)]
for i in range(5):
    slist = list(StarSentences(revtrain, [i+1]))
    print(i+1, "stars (", len(slist), ")")
    starmodels[i].train(slist, total_examples=len(slist))
"""
Explanation: Now, we will deep copy each base model and do star-specific training. This is where the big computations happen...
End of explanation
"""
"""
docprob takes two lists
* docs: a list of documents, each of which is a list of sentences
* models: the candidate word2vec models (each potential class)
it returns the array of class probabilities. Everything is done in-memory.
""" import pandas as pd # for quick summing within doc def docprob(docs, mods): # score() takes a list [s] of sentences here; could also be a sentence generator sentlist = [s for d in docs for s in d] # the log likelihood of each sentence in this review under each w2v representation llhd = np.array( [ m.score(sentlist, len(sentlist)) for m in mods ] ) # now exponentiate to get likelihoods, lhd = np.exp(llhd - llhd.max(axis=0)) # subtract row max to avoid numeric overload # normalize across models (stars) to get sentence-star probabilities prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() ) # and finally average the sentence probabilities to get the review probability prob["doc"] = [i for i,d in enumerate(docs) for s in d] prob = prob.groupby("doc").mean() return prob """ Explanation: Inversion of the distributed representations At this point, we have 5 different word2vec language representations. Each 'model' has been trained conditional (i.e., limited to) text from a specific star rating. We will apply Bayes rule to go from p(text|stars) to p(stars|text). Fo any new sentence we can obtain its likelihood (lhd; actually, the composite likelihood approximation; see the paper) using the score function in the word2vec class. We get the likelihood for each sentence in the first test review, then convert to a probability over star ratings. This is all in the following handy wrapper. End of explanation """ # read in the test set revtest = list(YelpReviews("test")) # get the probs (note we give docprob a list of lists of words, plus the models) probs = docprob( [r['x'] for r in revtest], starmodels ) %matplotlib inline probpos = pd.DataFrame({"out-of-sample prob positive":probs[[3,4]].sum(axis=1), "true stars":[r['y'] for r in revtest]}) probpos.boxplot("out-of-sample prob positive",by="true stars", figsize=(12,5)) """ Explanation: Test set example As an example, we apply the inversion on the full test set. End of explanation """
weikang9009/pysal
notebooks/model/spvcm/using_the_sampler.ipynb
bsd-3-clause
from pysal.model import spvcm  # package API
spvcm.both_levels.Generic  # abstract customizable class, ignores rho/lambda, equivalent to MVCM
spvcm.both_levels.MVCM  # no spatial effect
spvcm.both_levels.SESE  # both levels spatial error (SE)
spvcm.both_levels.SESMA  # response-level SE, region-level spatial moving average (SMA)
spvcm.both_levels.SMASE  # response-level SMA, region-level SE
spvcm.both_levels.SMASMA  # both levels SMA
spvcm.upper_level.Upper_SE  # response-level uncorrelated, region-level SE
spvcm.upper_level.Upper_SMA  # response-level uncorrelated, region-level SMA
spvcm.lower_level.Lower_SE  # response-level SE, region-level uncorrelated
spvcm.lower_level.Lower_SMA  # response-level SMA, region-level uncorrelated
"""
Explanation: Using the sampler
spvcm is a generic Gibbs sampling framework for spatially-correlated variance components models. The current supported models are:
spvcm.both contains specifications with correlated errors in both levels, with the first se/sma naming the lower level and the second se/sma naming the upper level. In addition, MVCM, the multilevel variance components model with no spatial correlation, is in the both namespace.
spvcm.lower contains two specifications, se/sma, that can be used for a variance components model with correlated lower-level errors.
spvcm.upper contains two specifications, se/sma, that can be used for a variance components model with correlated upper-level errors.
Specification
These derive from a variance components specification:
$$ Y \sim \mathcal{N}(X\beta,\ \Psi_1(\rho, \sigma^2) + \Delta\Psi_2(\lambda, \tau^2)\Delta') $$
Where:
1. $\beta$, called Betas in code, is the marginal effect parameter. In this implementation, any region-level covariates $Z$ get appended to the end of $X$.
So, if $X$ is $n \times p$ ($n$ observations of $p$ covariates) and $Z$ is $J \times p'$ ($p'$ covariates observed for $J$ regions), then the model's $X$ matrix is $n \times (p + p')$ and $\beta$ is $(p + p') \times 1$.
2. $\Psi_1$ is the covariance function for the response-level model. In the software, a separable covariance is assumed, so that $\Psi_1(\rho, \sigma^2) = \Psi_1(\rho) \cdot \sigma^2$. Thus, $\rho$ is the spatial autoregressive parameter and $\sigma^2$ is the variance parameter. With $I$ denoting the $n \times n$ identity matrix, $\Psi_1$ takes any of the following forms:
- Spatial Error (SE): $\Psi_1(\rho) = [(I - \rho \mathbf{W})'(I - \rho \mathbf{W})]^{-1}$
- Spatial Moving Average (SMA): $\Psi_1(\rho) = (I + \rho \mathbf{W})(I + \rho \mathbf{W})'$
- Identity: $\Psi_1(\rho) = I$
3. $\Psi_2$ is the region-level covariance function, with region-level autoregressive parameter $\lambda$ and region-level variance $\tau^2$. It has the same potential forms as $\Psi_1$.
4. $\alpha$, called Alphas in code, is the region-level random effect. In a variance components model, this is interpreted as a random effect for the upper level. For a varying-intercept format, this random component should be added to a region-level fixed effect to provide the varying intercept. This may also make it more difficult to identify the spatial parameter.
Software implementation
All of the possible combinations of Spatial Moving Average and Spatial Error processes are contained in the following classes. I will walk through estimating one below, and talk about the various features of the package.
First, the API of the package is defined by the spvcm.api submodule.
To load it, use from pysal.model import spvcm:
End of explanation
"""
# seaborn is required for the traceplots
import pysal as ps
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import geopandas as gpd
%matplotlib inline
"""
Explanation: Depending on the structure of the model, you need at least:
- X, data at the response (lower) level
- Y, system response in the lower level
- membership or Delta, the membership vector relating each observation to its group, or the "dummy variable" matrix encoding the same information.
Then, if spatial correlation is desired, M is the "upper-level" weights matrix and W the lower-level weights matrix. Any upper-level data should be passed in $Z$, and have $J$ rows. To fit a varying-intercept model, include an identity matrix in $Z$. You can include state-level and response-level intercept terms simultaneously.
Finally, there are many configuration and tuning options that can be passed in at the start, or assigned after the model is initialized.
First, though, let's set up some data for a model on southern counties predicting HR90, the homicide rate in the US South in 1990, using the percent of the labor force that is unemployed (UE90), a principal component expressing the population structure (PS90), and a principal component expressing resource deprivation (RD90). We will also use the state-level average percentage of families below the poverty line and the average Gini coefficient at the state level as $Z$ variables.
End of explanation
"""
data = ps.pdio.read_files(ps.examples.get_path('south.shp'))
gdf = gpd.read_file(ps.examples.get_path('south.shp'))
data = data[data.STATE_NAME != 'District of Columbia']
X = data[['UE90', 'PS90', 'RD90']].values
N = X.shape[0]
Z = data.groupby('STATE_NAME')[['FP89', 'GI89']].mean().values
J = Z.shape[0]

Y = data.HR90.values.reshape(-1, 1)
"""
Explanation: Reading in the data, we'll extract the values we need from the dataframe.
End of explanation
"""
W2 = ps.queen_from_shapefile(ps.examples.get_path('us48.shp'), idVariable='STATE_NAME')
W2 = ps.w_subset(W2, ids=data.STATE_NAME.unique().tolist())  # only keep what's in the data
W1 = ps.queen_from_shapefile(ps.examples.get_path('south.shp'), idVariable='FIPS')
W1 = ps.w_subset(W1, ids=data.FIPS.tolist())  # again, only keep what's in the data

W1.transform = 'r'
W2.transform = 'r'
"""
Explanation: Then, we'll construct some queen contiguity weights from the files to show how to run a model.
End of explanation
"""
membership = data.STATE_NAME.apply(lambda x: W2.id_order.index(x)).values
"""
Explanation: With the data, upper-level weights, and lower-level weights, we can construct a membership vector or a dummy data matrix. For now, I'll create the membership vector.
End of explanation
"""
Delta_frame = pd.get_dummies(data.STATE_NAME)
Delta = Delta_frame.values
"""
Explanation: But, we could also build the dummy variable matrix using pandas, if we have a suitable categorical variable:
End of explanation
"""
vcsma = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z,
                                    membership=membership,
                                    n_samples=5000,
                                    configs=dict(tuning=1000, adapt_step=1.01))
"""
Explanation: Every call to the sampler is of the following form:
sampler(Y, X, W, M, Z, membership, Delta, n_samples, **configuration)
Where W, M are passed if appropriate, Z is passed if used, and only one of membership or Delta is required. In the end, Z is appended to X, so the effects pertaining to the upper level will be at the tail end of the $\beta$ effects vector. If both Delta and membership are supplied, they're verified against each other to ensure that they agree before they are used in the model.
For all models, the membership vector or an equivalent dummy variable matrix is required. For models with correlation in the upper level, only the upper-level weights matrix $\mathbf{M}$ is needed. For lower level models, the lower-level weights matrix $\mathbf{W}$ is required.
For models with correlation in both levels, both $\mathbf{W}$ and $\mathbf{M}$ are required. Every sampler uses, either in whole or in part, spvcm.both.generic, which implements the full generic sampler discussed in the working paper. For efficiency, the upper-level samplers modify this runtime to avoid processing the full lower-level covariance matrix. Like many of the R packages dedicated to bayesian models, configuration occurs by passing the correct dictionary to the model call. In addition, you can "setup" the model, configure it, and then run samples in separate steps. The most common way to call the sampler is something like: End of explanation """ vcsma.trace.varnames """ Explanation: This model, spvcm.upper_level.Upper_SMA, is a variance components/varying intercept model with a state-level SMA-correlated error. Thus, there are only five parameters in this model, since $\rho$, the lower-level autoregressive parameter, is constrained to zero: End of explanation """ vcsma.trace.varnames """ Explanation: The results and state of the sampler are stored within the vcsma object. I'll step through the most important parts of this object. trace The quickest way to get information out of the model is via the trace object. This is where the results of the tracked parameters are stored each iteration. Any variable in the sampler state can be added to the tracked params. Trace objects are essentially dictionaries with the keys being the name of the tracked parameter and the values being a list of each iteration's sampler output. End of explanation """ trace_dataframe = vcsma.trace.to_df() """ Explanation: In this case, Lambda is the upper-level moving average parameter, Alphas is the vector of correlated group-level random effects, Tau2 is the upper-level variance, Betas are the marginal effects, and Sigma2 is the lower-level error variance. I've written two helper functions for working with traces. 
First is to just dump all the output into a pandas dataframe, which makes it super easy to do work on the samples, or write them out to csv and assess convergence in R's coda package. End of explanation """ trace_dataframe.head() """ Explanation: the dataframe will have columns containing the elements of the parameters and each row is a single iteration of the sampler: End of explanation """ trace_dataframe.mean() """ Explanation: You can write this out to a csv or analyze it in memory like a typical pandas dataframes: End of explanation """ fig, ax = vcsma.trace.plot() plt.show() """ Explanation: The second is a method to plot the traces: End of explanation """ vcsma.trace['Lambda',-4:] #last 4 draws of lambda vcsma.trace[['Tau2', 'Sigma2'], 0:2] #the first 2 variance parameters """ Explanation: The trace object can be sliced by (chain, parameter, index) tuples, or any subset thereof. End of explanation """ vcsma_p = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, membership=membership, #run 3 chains n_samples=5000, n_jobs=3, configs=dict(tuning=500, adapt_step=1.01)) vcsma_p.trace[0, 'Betas', -1] #the last draw of Beta on the first chain. vcsma_p.trace[1, 'Betas', -1] #the last draw of Beta on the second chain """ Explanation: We only ran a single chain, so the first index is assumed to be zero. You can run more than one chain in parallel, using the builtin python multiprocessing library: End of explanation """ vcsma_p.trace.plot(burn=1000, thin=10) plt.suptitle('SMA of Homicide Rate in Southern US Counties', y=0, fontsize=20) #plt.savefig('trace.png') #saves to a file called "trace.png" plt.show() vcsma_p.trace.plot(burn=-100, varnames='Lambda') #A negative burn-in works like negative indexing in Python & R plt.suptitle('First 100 iterations of $\lambda$', fontsize=20, y=.02) plt.show() #so this plots Lambda in the first 100 iterations. """ Explanation: and the chain plotting works also for the multi-chain traces. 
In addition, there are quite a few traceplot options, and all the plots are returned by the methods as matplotlib objects, so they can also be saved using plt.savefig().
End of explanation
"""
vcsma_p.trace.plot(burn=1000, thin=10)
plt.suptitle('SMA of Homicide Rate in Southern US Counties', y=0, fontsize=20)
#plt.savefig('trace.png')  # saves to a file called "trace.png"
plt.show()

vcsma_p.trace.plot(burn=-100, varnames='Lambda')  # a negative burn-in works like negative indexing in Python & R
plt.suptitle('First 100 iterations of $\lambda$', fontsize=20, y=.02)
plt.show()  # so this plots Lambda in the first 100 iterations.
"""
Explanation: To get stuff like posterior quantiles, you can use the attendant pandas dataframe functionality, like describe.
End of explanation
"""
df = vcsma.trace.to_df()
df.describe()
"""
Explanation: There is also a trace.summarize function that will compute various things contained in spvcm.diagnostics on the chain. It takes a while for large chains, because the statsmodels.tsa.AR estimator is much slower than the ar estimator in R. If you have rpy2 installed and CODA installed in your R environment, I attempt to use R directly.
End of explanation
"""
vcsma.trace.summarize()
"""
Explanation: So, 5000 iterations, but many parameters have an effective sample size that's much less than this. There's debate about whether it's necessary to thin these samples in accordance with the effective size, and I think you should thin your sample to the effective size and see if it affects your HPD/Standard Errors. The existing python packages for MCMC diagnostics were incorrect. So, I've implemented many of the diagnostics from CODA, and have verified that the diagnostics comport with CODA diagnostics. One can also use numpy & statsmodels functions. I'll show some types of analysis.
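Before that, a quick illustration of what "effective sample size" means. The estimators behind summarize fit an AR model (or call CODA); the rough version below, with the illustrative name crude_ess, simply deflates the chain length by its summed leading positive autocorrelations, which is enough to see why a sticky chain carries less information than an independent one:

```python
import numpy as np

def crude_ess(x, max_lag=200):
    """Rough effective sample size: n / (1 + 2 * sum of leading positive ACF).
    A simplification of the CODA/spvcm estimators, for intuition only."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = xc.dot(xc)
    rho_sum = 0.0
    for lag in range(1, min(max_lag, n)):
        rho = xc[:-lag].dot(xc[lag:]) / denom
        if rho < 0:  # truncate at the first negative autocorrelation
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)

rng = np.random.RandomState(42)
white = rng.randn(2000)   # independent draws: ESS near the chain length
ar1 = np.zeros(2000)      # a sticky AR(1) chain, like slow-mixing MCMC output
for t in range(1, 2000):
    ar1[t] = 0.9 * ar1[t-1] + rng.randn()
```

For the AR(1) chain, the theoretical deflation factor is (1-0.9)/(1+0.9), so roughly one effective draw per twenty iterations.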
End of explanation """ plt.plot(tsa.pacf(vcsma.trace['Lambda', -2500:])) """ Explanation: For example, a plot of the partial autocorrelation in $\lambda$, the upper-level spatial moving average parameter, over the last half of the chain is: End of explanation """ tsa.pacf(df.Lambda)[0:3] """ Explanation: So, the chain is close-to-first order: End of explanation """ betas = [c for c in df.columns if c.startswith('Beta')] f,ax = plt.subplots(len(betas), 2, figsize=(10,8)) for i, col in enumerate(betas): ax[i,0].plot(tsa.acf(df[col].values)) ax[i,1].plot(tsa.pacf(df[col].values)) #the pacf plots take a while ax[i,0].set_title(col +' (ACF)') ax[i,1].set_title('(PACF)') f.tight_layout() plt.show() """ Explanation: We could do this for many parameters, too. An Autocorrelation/Partial Autocorrelation plot can be made of the marginal effects by: End of explanation """ gstats = spvcm.diagnostics.geweke(vcsma, varnames='Tau2') #takes a while print(gstats) """ Explanation: As far as the builtin diagnostics for convergence and simulation quality, the diagnostics module exposes a few things: Geweke statistics for differences in means between chain components: End of explanation """ plt.plot(gstats[0]['Tau2'][:-1]) """ Explanation: Typically, this means the chain is converged at the given "bin" count if the line stays within $\pm2$. The geweke statistic is a test of differences in means between the given chunk of the chain and the remaining chain. If it's outside of +/- 2 in the early part of the chain, you should discard observations early in the chain. If you get extreme values of these statistics throughout, you need to keep running the chain. 
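The mechanics of the Geweke statistic can be sketched in a few lines. The naive version below uses plain sample variances where the real implementation substitutes spectral density estimates at frequency zero, so treat it as a sketch with an illustrative name:

```python
import numpy as np

def naive_geweke(chain, first=0.1, last=0.5):
    """z-score of the mean difference between the first 10% and last 50%
    of a chain; a simplified stand-in for the spectral-variance version."""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    a, b = chain[:int(first * n)], chain[-int(last * n):]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

stationary = np.sin(np.arange(2000.0))  # mean-stable "chain": |z| stays small
drifting = np.arange(2000.0)            # mean still moving: |z| blows up
```

A chain whose early mean matches its late mean scores inside the usual plus or minus 2 band; a drifting chain does not.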
End of explanation
"""
plt.plot(gstats[0]['Tau2'][:-1])
"""
Explanation: We can also compute Monte Carlo Standard Errors like in the mcse R package, which represent the intrinsic error contained in the estimate:
End of explanation
"""
spvcm.diagnostics.mcse(vcsma, varnames=['Tau2', 'Sigma2'])
"""
Explanation: Another handy statistic is the Potential Scale Reduction Factor, which measures how likely it is that a set of chains run in parallel have converged to the same stationary distribution. It compares the variance between chains to the variance within chains. If these are significantly larger than one (say, 1.5), the chain probably has not converged. Being marginally below $1$ is fine, too.
End of explanation
"""
spvcm.diagnostics.psrf(vcsma_p, varnames=['Tau2', 'Sigma2'])
"""
Explanation: Highest posterior density intervals provide a kind of interval estimate for parameters in Bayesian models:
End of explanation
"""
spvcm.diagnostics.hpd_interval(vcsma, varnames=['Betas', 'Lambda', 'Sigma2'])
"""
Explanation: Sometimes, you want to apply arbitrary functions to each parameter trace. To do this, I've written a map function that works like the python builtin map. For example, if you wanted to get arbitrary percentiles from the chain:
End of explanation
"""
vcsma.trace.map(np.percentile,
                varnames=['Lambda', 'Tau2', 'Sigma2'],
                # arguments to pass to the function go last
                q=[25, 50, 75])
"""
Explanation: In addition, you can pop the trace results pretty simply to a .csv file and analyze it elsewhere, like if you want to use the coda Bayesian Diagnostics package in R. To write out a model to a csv, you can use:
End of explanation
"""
vcsma.trace.to_csv('./model_run.csv')
"""
Explanation: And, you can even load traces from csvs:
End of explanation
"""
tr = spvcm.abstracts.Trace.from_csv('./model_run.csv')
print(tr.varnames)
tr.plot(varnames=['Tau2'])
"""
Explanation: Working with models: draw and sample
These two functions are used to call the underlying Gibbs sampler.
They take no arguments, and operate on the sampler in place. draw provides a single new sample: End of explanation """ vcsma.sample(10) """ Explanation: And sample steps forward an arbitrary number of times: End of explanation """ vcsma.cycles """ Explanation: At this point, we did 5000 initial samples and 11 extra samples. Thus: End of explanation """ vcsma_p.sample(10) vcsma_p.cycles """ Explanation: Parallel models can suspend/resume sampling too: End of explanation """ print(vcsma.state.keys()) """ Explanation: Under the hood, it's the draw method that actually ends up calling one run of model._iteration, which is where the actual statistical code lives. Then, it updates all model.traced_params by adding their current value in model.state to model.trace. In addition, model._finalize is called the first time sampling is run, which computes some of the constants & derived quantities that save computing time. Working with models: state This is the collection of current values in the sampler. To be efficient, Gibbs sampling must keep around some of the computations used in the simulation, since sometimes the same terms show up in different conditional posteriors. So, the current values of the sampler are stored in state. All of the following are tracked in the state: End of explanation """ example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, membership=membership, n_samples=250, extra_traced_params = ['DeltaAlphas'], configs=dict(tuning=500, adapt_step=1.01)) example.trace.varnames """ Explanation: If you want to track how something (maybe a hyperparameter) changes over sampling, you can pass extra_traced_params to the model declaration: End of explanation """ vcsma.configs """ Explanation: configs this is where configuration options for the various MCMC steps are stored. For multilevel variance components models, these are called $\rho$ for the lower-level error parameter and $\lambda$ for the upper-level parameter. 
Two exact sampling methods are implemented, Metropolis sampling & Slice sampling. Each MCMC step has its own config: End of explanation """ vcsma.configs.Lambda.accepted """ Explanation: Since vcsma is an upper-level-only model, the Rho config is skipped. But, we can look at the Lambda config. The number of accepted lambda draws is contained in : End of explanation """ vcsma.configs.Lambda.accepted / float(vcsma.cycles) """ Explanation: so, the acceptance rate is End of explanation """ example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z, membership=membership, n_samples=500, configs=dict(tuning=250, adapt_step=1.01, debug=True)) """ Explanation: Also, if you want to get verbose output from the metropolis sampler, there is a "debug" flag: End of explanation """ example.configs.Lambda._cache[-1] #let's only look at the last one """ Explanation: Which stores the information about each iteration in a list, accessible from model.configs.&lt;parameter&gt;._cache: End of explanation """ from pysal.model.spvcm.steps import Metropolis, Slice """ Explanation: Configuration of the MCMC steps is done using the config options dictionary, like done in spBayes in R. 
The actual configuration classes exist in spvcm.steps:
End of explanation
"""
from pysal.model.spvcm.steps import Metropolis, Slice
"""
Explanation: Most of the common options are:
Metropolis
- jump: the starting standard deviation of the proposal distribution
- tuning: the number of iterations to tune the scale of the proposal
- ar_low: the lower bound of the target acceptance rate range
- ar_hi: the upper bound of the target acceptance rate range
- adapt_step: a number (bigger than 1) that will be used to modify the jump in order to keep the acceptance rate between ar_low and ar_hi. Values much larger than 1 result in much more dramatic tuning.
Slice
- width: starting width of the level set
- adapt: number of previous slices used in the weighted average for the next slice. If 0, the width is not dynamically tuned.
End of explanation
"""
example = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z,
                                      membership=membership,
                                      n_samples=500,
                                      configs=dict(tuning=250, adapt_step=1.01,
                                                   debug=True, ar_low=.1, ar_hi=.4))
example.configs.Lambda.ar_hi, example.configs.Lambda.ar_low

example_slicer = spvcm.upper_level.Upper_SMA(Y, X, M=W2, Z=Z,
                                             membership=membership,
                                             n_samples=500,
                                             configs=dict(Lambda_method='slice'))
example_slicer.trace.plot(varnames='Lambda')
plt.show()

example_slicer.configs.Lambda.adapt, example_slicer.configs.Lambda.width
"""
Explanation: Working with models: customization
If you're doing heavy customization, it makes the most sense to first initialize the class without sampling. We did this before when showing how the "extra_traced_params" option worked. To show, let's initialize a double-level SAR-Error variance components model, but not actually draw anything. To do this, you pass the option n_samples=0.
End of explanation
"""
vcsese = spvcm.both_levels.SESE(Y, X, W=W1, M=W2, Z=Z,
                                membership=membership,
                                n_samples=0)
"""
Explanation: This sets up a two-level spatial error model with the default uninformative configuration.
This means the prior precisions are all I * .001, prior means are all 0, spatial parameters are set to -1/(n-1), and prior scale factors are set arbitrarily.
Configs
Options are set by assigning to the relevant property in model.configs. The model configuration object is another dictionary with a few special methods.
Configuration options are stored for each parameter separately:
End of explanation
"""
vcsese.configs
"""
Explanation: So, for example, if we wanted to turn off adaptation in the upper-level parameter, and fix the Metropolis jump variance to .25:
End of explanation
"""
vcsese.configs.Lambda.max_tuning = 0
vcsese.configs.Lambda.jump = .25
"""
Explanation: Priors
Another thing that might be interesting (though not "bayesian") would be to fix the prior mean of $\beta$ to the OLS estimates. One way this could be done would be to pull the Delta matrix out from the state, and estimate:
$$ Y = X\beta + \Delta Z + \epsilon $$
using PySAL:
End of explanation
"""
Delta = vcsese.state.Delta
DeltaZ = Delta.dot(Z)
vcsese.state.Betas_mean0 = ps.spreg.OLS(Y, np.hstack((X, DeltaZ))).betas
"""
Explanation: Starting Values
If you wanted to start the sampler at a given starting value, you can do so by assigning that value to the Lambda value in state.
End of explanation
"""
vcsese.state.Lambda = -.25
"""
Explanation: Sometimes, it's suggested that you start the beta vector randomly, rather than at zero. For the parallel sampling, the model starting values are adjusted to induce overdispersion in the start values.
You could do this manually, too: End of explanation """ from scipy import stats def Lambda_prior(val): if (val < 0) or (val > 1): return -np.inf return np.log(stats.beta.pdf(val, 2,1)) def Rho_prior(val): if (val > .5) or (val < -.5): return -np.inf return np.log(stats.truncnorm.pdf(val, -.5, .5, loc=0, scale=.5)) """ Explanation: Spatial Priors Changing the spatial parameter priors is also done by changing their prior in state. This prior must be a function that takes a value of the parameter and return the log of the prior probability for that value. For example, we could assign P(\lambda) = Beta(2,1) and zero if outside $(0,1)$, and asign $\rho$ a truncated $\mathcal{N}(0,.5)$ prior by first defining their functional form: End of explanation """ vcsese.state.LogLambda0 = Lambda_prior vcsese.state.LogRho0 = Rho_prior """ Explanation: And then assigning to their symbols, LogLambda0 and LogRho0 in the state: End of explanation """ %timeit vcsese.draw() """ Explanation: Performance The efficiency of the sampler is contingent on the lower-level size. If we were to estimate the draw in a dual-level SAR-Error Variance Components iteration: End of explanation """ %time vcsese.sample(100) vcsese.sample(10) """ Explanation: To make it easy to work with the model, you can interrupt and resume sampling using keyboard interrupts (ctrl-c or the stop button in the notebook). End of explanation """ vcsese.state.Psi_1 #lower-level covariance vcsese.state.Psi_2 #upper-level covariance vcsma.state.Psi_2 #upper-level covariance vcsma.state.Psi_2i vcsma.state.Psi_1 """ Explanation: Under the Hood Package Structure Most of the tools in the package are stored in relevant python files in the top level or a dedicated subfolder. Explaining a few: abstracts.py - the abstract class machinery to iterate over a sampling loop. This is where the classes are defined, like Trace, Sampler_Mixin, or Hashmap. 
- plotting.py - tools for plotting output
- steps.py - the step method definitions
- verify.py - like user checks in pysal.spreg, this contains a few sanity checks.
- utils.py - contains statistical or numerical utilities to make the computation easier, like cholesky multivariate normal sampling, more sparse utility functions, etc.
- diagnostics.py - all the diagnostics
- priors.py - definitions of alternative prior forms. Right now, this is pretty simple.
- sqlite.py - functions to use a sqlite database instead of an in-memory chain are defined here.
The implementation of a Model
The package is implemented so that every "model type" first sends off to the spvcm.both.Base_Generic, which sets up the state, trace, and priors.
Models are added by writing a model.py file and possibly a sample.py file. The model.py file defines a Base/User class pair (like spreg) that sets up the state and trace. It must define hyperparameters, and can precompute objects used in the sampling loop. The base class should inherit from Sampler_Mixin, which defines all of the machinery of sampling.
The loop through the conditional posteriors should be defined in model.py, in the model._iteration function. This should update the model state in place. The model may also define a _finalize function which is run once before sampling.
So, if I write a new model, like a varying-intercept model with endogenously-lagged intercepts, I would write a model.py containing something like:
```python
class Base_VISAR(spvcm.generic.Base_Generic):
    def __init__(self, Y, X, M, membership=None, Delta=None,
                 extra_traced_params=None, #record extra things in state
                 n_samples=1000, n_jobs=1, #sampling config
                 priors = None, # dict with prior values for params
                 configs=None, # dict with configs for MCMC steps
                 starting_values=None, # dict with starting values
                 truncation=None, # options to truncate MCMC step priors
                 center=False, # Whether to center the X,Z matrices
                 scale=False # Whether re-scale the X,Z matrices
                 ):
        super(Base_VISAR, self).__init__(Y, X, M, W=None,
                                         membership=membership,
                                         Delta=Delta,
                                         n_samples=0, n_jobs=n_jobs,
                                         priors=priors, configs=configs,
                                         starting_values=starting_values,
                                         truncation=truncation,
                                         center=center, scale=scale)
        self.sample(n_samples, n_jobs=n_jobs)

    def _finalize(self):
        # the degrees of freedom of the variance parameter is constant
        self.state.Sigma2_an = self.state.N/2 + self.state.Sigma2_a0
        ...

    def _iteration(self):
        # computing the values needed to sample from the conditional posteriors
        mean = spdot(X.T, spdot(self.PsiRhoi, X)) / Sigma2 + self.state.bmean0
        ...
    ...
```
I've organized the directories in this project into `both_levels`, `upper_level`, `lower_level`, and `hierarchical`, which contains some of the spatially-varying coefficient models & other models I'm working on that are unrelated to the multilevel variance components stuff.
Since most of the _iteration loop is the same between models, most of the models share the same sampling code, but customize the structure of the covariance in each level. These covariance variables are stored in the state.Psi_1, for the lower-level covariance, and state.Psi_2 for the upper-level covariance. Likewise, the precision functions are state.Psi_1i and state.Psi_2i. For example: End of explanation """
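As a rough numpy illustration of how the two covariance levels combine, here is a sketch of the marginal covariance a variance-components model implies. The sizes, identity covariances, and variance values are all made up for illustration; this is not the package's internal code.

```python
import numpy as np

# Hypothetical example: 6 lower-level units nested in 2 upper-level groups
n, J = 6, 2
Delta = np.kron(np.eye(J), np.ones((n // J, 1)))  # membership (dummy) matrix

Psi_1 = np.eye(n)   # lower-level covariance (identity = no spatial structure)
Psi_2 = np.eye(J)   # upper-level covariance
sigma2, tau2 = 1.0, 0.5  # lower- and upper-level variance components

# Marginal covariance of the response: units in the same group share tau2
V = sigma2 * Psi_1 + tau2 * Delta @ Psi_2 @ Delta.T
print(V[0, 0], V[0, 1], V[0, 3])  # 1.5 0.5 0.0
```

Units 0 and 1 sit in the same upper-level group, so they share the upper-level variance term; units 0 and 3 are in different groups and are uncorrelated here.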
batfish/pybatfish
docs/source/notebooks/forwarding.ipynb
apache-2.0
bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Packet Forwarding This category of questions allows you to query how different types of traffic is forwarded by the network and if endpoints are able to communicate. You can analyze these aspects in a few different ways. Traceroute Bi-directional Traceroute Reachability Bi-directional Reachability Loop detection Multipath Consistency for host-subnets Multipath Consistency for router loopbacks End of explanation """ result = bf.q.traceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() """ Explanation: Traceroute Traces the path(s) for the specified flow. Performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. Unlike a real traceroute, this traceroute is directional. That is, for it to succeed, the reverse connectivity is not needed. This feature can help debug connectivity issues by decoupling the two directions. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- startLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False | headers | Packet header constraints. | HeaderConstraints | False | maxTraces | Limit the number of traces returned. | int | True | ignoreFilters | If set, filters/ACLs encountered along the path are ignored. 
| bool | True | Invocation End of explanation """ result.Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Flow | The flow | Flow Traces | The traces for this flow | Set of Trace TraceCount | The total number traces for this flow | int Retrieving the flow definition End of explanation """ len(result.Traces) result.Traces[0] """ Explanation: Retrieving the detailed Trace information End of explanation """ result.Traces[0][0] """ Explanation: Evaluating the first Trace End of explanation """ result.Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Trace End of explanation """ result.Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Trace End of explanation """ result.Traces[0][0][-1] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Retrieving the last hop of the first Trace End of explanation """ result = bf.q.bidirectionalTraceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame() """ Explanation: Bi-directional Traceroute Traces the path(s) for the specified flow, along with path(s) for reverse flows. This question performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. If the trace succeeds, a traceroute is performed in the reverse direction. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- startLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False | headers | Packet header constraints. | HeaderConstraints | False | maxTraces | Limit the number of traces returned. | int | True | ignoreFilters | If set, filters/ACLs encountered along the path are ignored. 
| bool | True | Invocation End of explanation """ result.Forward_Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Forward_Flow | The forward flow. | Flow Forward_Traces | The forward traces. | List of Trace New_Sessions | Sessions initialized by the forward trace. | List of str Reverse_Flow | The reverse flow. | Flow Reverse_Traces | The reverse traces. | List of Trace Retrieving the Forward flow definition End of explanation """ len(result.Forward_Traces) result.Forward_Traces[0] """ Explanation: Retrieving the detailed Forward Trace information End of explanation """ result.Forward_Traces[0][0] """ Explanation: Evaluating the first Forward Trace End of explanation """ result.Forward_Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Forward Trace End of explanation """ result.Forward_Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Forward Trace End of explanation """ result.Forward_Traces[0][0][-1] """ Explanation: Retrieving the last hop of the first Forward Trace End of explanation """ result.Reverse_Flow """ Explanation: Retrieving the Return flow definition End of explanation """ len(result.Reverse_Traces) result.Reverse_Traces[0] """ Explanation: Retrieving the detailed Return Trace information End of explanation """ result.Reverse_Traces[0][0] """ Explanation: Evaluating the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0][-1] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Retrieving the last hop of the first Reverse Trace End of explanation """ result = bf.q.reachability(pathConstraints=PathConstraints(startLocation = '/as2/'), 
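When a question returns many traces, it can be handy to tally how the traces ended. A plain-Python sketch, using stand-in objects that mimic the .disposition attribute of the Trace objects shown above (the disposition strings here are illustrative):

```python
from collections import Counter

def tally_dispositions(traces):
    # traces: any iterable of objects exposing a .disposition attribute,
    # like the Trace objects in the answer frames above
    return Counter(t.disposition for t in traces)

# Tiny stand-in objects, purely for illustration
class FakeTrace:
    def __init__(self, disposition):
        self.disposition = disposition

traces = [FakeTrace("ACCEPTED"), FakeTrace("ACCEPTED"), FakeTrace("DENIED_IN")]
print(tally_dispositions(traces))  # Counter({'ACCEPTED': 2, 'DENIED_IN': 1})
```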
headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), actions='SUCCESS').answer().frame() """ Explanation: Reachability Finds flows that match the specified path and header space conditions. Searches across all flows that match the specified conditions and returns examples of such flows. This question can be used to ensure that certain services are globally accessible and parts of the network are perfectly isolated from each other. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- pathConstraints | Constraint the path a flow can take (start/end/transit locations). | PathConstraints | True | headers | Packet header constraints. | HeaderConstraints | True | actions | Only return flows for which the disposition is from this set. | DispositionSpec | True | success maxTraces | Limit the number of traces returned. | int | True | invertSearch | Search for packet headers outside the specified headerspace, rather than inside the space. | bool | True | ignoreFilters | Do not apply filters/ACLs during analysis. 
| bool | True | Invocation End of explanation """ result.Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Flow | The flow | Flow Traces | The traces for this flow | Set of Trace TraceCount | The total number traces for this flow | int Retrieving the flow definition End of explanation """ len(result.Traces) result.Traces[0] """ Explanation: Retrieving the detailed Trace information End of explanation """ result.Traces[0][0] """ Explanation: Evaluating the first Trace End of explanation """ result.Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Trace End of explanation """ result.Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Trace End of explanation """ result.Traces[0][0][-1] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Retrieving the last hop of the first Trace End of explanation """ result = bf.q.bidirectionalReachability(pathConstraints=PathConstraints(startLocation = '/as2dist1/'), headers=HeaderConstraints(dstIps='host1', srcIps='0.0.0.0/0', applications='DNS'), returnFlowType='SUCCESS').answer().frame() """ Explanation: Bi-directional Reachability Searches for successfully delivered flows that can successfully receive a response. Performs two reachability analyses, first originating from specified sources, then returning back to those sources. After the first (forward) pass, sets up sessions in the network and creates returning flows for each successfully delivered forward flow. The second pass searches for return flows that can be successfully delivered in the presence of the setup sessions. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- pathConstraints | Constraint the path a flow can take (start/end/transit locations). | PathConstraints | True | headers | Packet header constraints. | HeaderConstraints | False | returnFlowType | Specifies the type of return flows to search. 
| str | True | SUCCESS Invocation End of explanation """ result.Forward_Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Forward_Flow | The forward flow. | Flow Forward_Traces | The forward traces. | List of Trace New_Sessions | Sessions initialized by the forward trace. | List of str Reverse_Flow | The reverse flow. | Flow Reverse_Traces | The reverse traces. | List of Trace Retrieving the Forward flow definition End of explanation """ len(result.Forward_Traces) result.Forward_Traces[0] """ Explanation: Retrieving the detailed Forward Trace information End of explanation """ result.Forward_Traces[0][0] """ Explanation: Evaluating the first Forward Trace End of explanation """ result.Forward_Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Forward Trace End of explanation """ result.Forward_Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Forward Trace End of explanation """ result.Forward_Traces[0][0][-1] """ Explanation: Retrieving the last hop of the first Forward Trace End of explanation """ result.Reverse_Flow """ Explanation: Retrieving the Return flow definition End of explanation """ len(result.Reverse_Traces) result.Reverse_Traces[0] """ Explanation: Retrieving the detailed Return Trace information End of explanation """ result.Reverse_Traces[0][0] """ Explanation: Evaluating the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Reverse Trace End of explanation """ result.Reverse_Traces[0][0][-1] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Retrieving the last hop of the first Reverse Trace End of explanation """ result = bf.q.detectLoops().answer().frame() """ Explanation: Loop detection Detects forwarding loops. 
Searches across all possible flows in the network and returns example flows that will experience forwarding loops. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- maxTraces | Limit the number of traces returned. | int | True | Invocation End of explanation """ result.head(5) bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Return Value Name | Description | Type --- | --- | --- Flow | The flow | Flow Traces | The traces for this flow | Set of Trace TraceCount | The total number traces for this flow | int Print the first 5 rows of the returned Dataframe End of explanation """ result = bf.q.subnetMultipathConsistency().answer().frame() """ Explanation: Multipath Consistency for host-subnets Validates multipath consistency between all pairs of subnets. Searches across all flows between subnets that are treated differently (i.e., dropped versus forwarded) by different paths in the network and returns example flows. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- maxTraces | Limit the number of traces returned. 
| int | True | Invocation End of explanation """ result.Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Flow | The flow | Flow Traces | The traces for this flow | Set of Trace TraceCount | The total number traces for this flow | int Retrieving the flow definition End of explanation """ len(result.Traces) result.Traces[0] """ Explanation: Retrieving the detailed Trace information End of explanation """ result.Traces[0][0] """ Explanation: Evaluating the first Trace End of explanation """ result.Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Trace End of explanation """ result.Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Trace End of explanation """ result.Traces[0][0][-1] bf.set_network('generate_questions') bf.set_snapshot('generate_questions') """ Explanation: Retrieving the last hop of the first Trace End of explanation """ result = bf.q.loopbackMultipathConsistency().answer().frame() """ Explanation: Multipath Consistency for router loopbacks Validates multipath consistency between all pairs of loopbacks. Finds flows between loopbacks that are treated differently (i.e., dropped versus forwarded) by different paths in the presence of multipath routing. Inputs Name | Description | Type | Optional | Default Value --- | --- | --- | --- | --- maxTraces | Limit the number of traces returned. 
| int | True | Invocation End of explanation """ result.Flow """ Explanation: Return Value Name | Description | Type --- | --- | --- Flow | The flow | Flow Traces | The traces for this flow | Set of Trace TraceCount | The total number traces for this flow | int Retrieving the flow definition End of explanation """ len(result.Traces) result.Traces[0] """ Explanation: Retrieving the detailed Trace information End of explanation """ result.Traces[0][0] """ Explanation: Evaluating the first Trace End of explanation """ result.Traces[0][0].disposition """ Explanation: Retrieving the disposition of the first Trace End of explanation """ result.Traces[0][0][0] """ Explanation: Retrieving the first hop of the first Trace End of explanation """ result.Traces[0][0][-1] """ Explanation: Retrieving the last hop of the first Trace End of explanation """
rishuatgithub/MLPy
nlp/UPDATED_NLP_COURSE/01-NLP-Python-Basics/02-Stemming.ipynb
apache-2.0
# Import the toolkit and the full Porter Stemmer library
import nltk

from nltk.stem.porter import *

p_stemmer = PorterStemmer()

words = ['run','runner','running','ran','runs','easily','fairly']

for word in words:
    print(word+' --> '+p_stemmer.stem(word))
""" Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Stemming
Often when searching text for a certain keyword, it helps if the search returns variations of the word. For instance, searching for "boat" might also return "boats" and "boating". Here, "boat" would be the stem for [boat, boater, boating, boats].
Stemming is a somewhat crude method for cataloging related words; it essentially chops off letters from the end until the stem is reached. This works fairly well in most cases, but unfortunately English has many exceptions where a more sophisticated process is required. In fact, spaCy doesn't include a stemmer, opting instead to rely entirely on lemmatization. For those interested, there's some background on this decision here. We discuss the virtues of lemmatization in the next section.
Instead, we'll use another popular NLP tool called nltk, which stands for Natural Language Toolkit. For more information on nltk visit https://www.nltk.org/
Porter Stemmer
One of the most common - and effective - stemming tools is Porter's Algorithm developed by Martin Porter in 1980. The algorithm employs five phases of word reduction, each with its own set of mapping rules. In the first phase, simple suffix mapping rules are defined, such as:
From a given set of stemming rules only one rule is applied, based on the longest suffix S1. Thus, caresses reduces to caress but not cares.
More sophisticated phases consider the length/complexity of the word before applying a rule. For example:
Here m>0 describes the "measure" of the stem, such that the rule is applied to all but the most basic stems.
End of explanation """
from nltk.stem.snowball import SnowballStemmer

# The Snowball Stemmer requires that you pass a language parameter
s_stemmer = SnowballStemmer(language='english')

words = ['run','runner','running','ran','runs','easily','fairly']
# words = ['generous','generation','generously','generate']

for word in words:
    print(word+' --> '+s_stemmer.stem(word))
""" Explanation: <font color=green>Note how the stemmer recognizes "runner" as a noun, not a verb form or participle. Also, the adverbs "easily" and "fairly" are stemmed to the unusual root "easili" and "fairli"</font>
Snowball Stemmer
This is somewhat of a misnomer, as Snowball is the name of a stemming language developed by Martin Porter. The algorithm used here is more accurately called the "English Stemmer" or "Porter2 Stemmer". It offers a slight improvement over the original Porter stemmer, both in logic and speed. Since nltk uses the name SnowballStemmer, we'll use it here. End of explanation """
words = ['consolingly']

print('Porter Stemmer:')
for word in words:
    print(word+' --> '+p_stemmer.stem(word))

print('Porter2 Stemmer:')
for word in words:
    print(word+' --> '+s_stemmer.stem(word))
""" Explanation: <font color=green>In this case the stemmer performed the same as the Porter Stemmer, with the exception that it handled the stem of "fairly" more appropriately with "fair"</font>
Try it yourself!
Pass in some of your own words and test each stemmer on them. Remember to pass them as strings!
End of explanation """
phrase = 'I am meeting him tomorrow at the meeting'

for word in phrase.split():
    print(word+' --> '+p_stemmer.stem(word))
""" Explanation: Stemming has its drawbacks. If given the token saw, stemming might always return saw, whereas lemmatization would likely return either see or saw depending on whether the use of the token was as a verb or a noun. As an example, consider the following: End of explanation """
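The longest-suffix-wins idea behind those first-phase mapping rules can be sketched in a few lines of plain Python. This is a toy illustration with a made-up rule table, not the real Porter implementation:

```python
# Toy longest-suffix-wins pass (hypothetical rule table, not the real Porter code)
RULES = [("sses", "ss"), ("ies", "i"), ("ss", "ss"), ("s", "")]

def apply_phase(word):
    # Try rules with the longest suffix first; at most one rule fires
    for suffix, replacement in sorted(RULES, key=lambda r: -len(r[0])):
        if word.endswith(suffix):
            return word[:len(word) - len(suffix)] + replacement
    return word

print(apply_phase("caresses"))  # caress
print(apply_phase("ponies"))    # poni
print(apply_phase("cares"))     # care
```

Because only the single longest matching suffix is stripped, "caresses" reduces to "caress" while "cares" only loses its final "s", exactly the behavior described above.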
AkshanshChahal/BTP
Satellite/Try Test Learn.ipynb
mit
import numpy as np import pandas as pd # importing the dataset we prepared and saved using Baseline 1 Notebook ricep = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/BTP/ricep.csv") ricep.head() ricep = ricep.drop(["Unnamed: 0"],axis=1) ricep["phosphorus"] = ricep["phosphorus"]*10 ricep["value"] = ricep["Production"]/ricep["Area"] ricep.head() ricep.index[ ricep['ind_district'] == 'anantapur'].tolist() ricep.index[ ricep['ind_district'] == 'anantapur' & ricep['Crop_Year'] == '2002' ] ricep.index[ (ricep['ind_district'] == 'anantapur') & (ricep['Crop_Year'] == 2002) ].tolist() """ Explanation: Finally able to do Reverse Geocoding without using any paid API -------------------------------------------------------------------------------------- Lets try selecting rows from a DataFrame with indexes End of explanation """ a = np.empty((ricep.shape[0],1))*np.NAN ricex = ricep.assign(test = a) ricex.head() """ Explanation: So we had to just put the parentheses End of explanation """ %time v = ricex.iloc[0,13] v d = v + 5 d if pd.isnull(v): v = 3 v+5 v df = pd.DataFrame(np.arange(1,7).reshape(2,3), columns = list('abc'), index=pd.Series([2,5], name='b')) df x = 3 x += 2 x s = "Akshansh" s[-4] # bx = False # if bx: continue # else: bx = True """ Explanation: Time taken to execute a cell End of explanation """
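The parentheses were needed because of operator precedence: & binds more tightly than ==, so without them Python tries to evaluate 'anantapur' & ricep['Crop_Year'] first. A minimal numpy illustration of the same combined selection, on hypothetical stand-in data rather than the rice dataset:

```python
import numpy as np

# Hypothetical stand-ins for the district / crop-year columns
district = np.array(["anantapur", "kurnool", "anantapur"])
year = np.array([2002, 2002, 2003])

# Each comparison must be wrapped in parentheses before combining with &,
# because & has higher precedence than ==
mask = (district == "anantapur") & (year == 2002)
print(np.flatnonzero(mask).tolist())  # [0]
```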
mdeff/ntds_2016
toolkit/03_ex_hpc.ipynb
mit
def accuracy_python(y_pred, y_true): """Plain Python implementation.""" num_correct = 0 for y_pred_i, y_true_i in zip(y_pred, y_true): if y_pred_i == y_true_i: num_correct += 1 return num_correct / len(y_true) """ Explanation: A Python Tour of Data Science: High Performance Computing Michaël Defferrard, PhD student, EPFL LTS2 Exercise: is Python slow ? That is one of the most heard complain about Python. Because CPython, the Python reference implementation, interprets the language (i.e. it compiles Python code to intermediate bytecode which is then interpreted by a virtual machine), it is inherentably slower than compiled languages, especially for computation heavy tasks such as number crunching. There are three ways around it that we'll explore in this exercise: 1. Specialized libraries. 1. Compile Python to machine code. 1. Implement in a compiled language and call from Python. In this exercise we'll compare many possible implementations of a function. Our goal is to compare the execution time of our implementations and get a sense of the many ways to write efficient Python code. We test seven implementations: Along the exercise we'll use the function $$accuracy(\hat{y}, y) = \frac1n \sum_{i=0}^{i=n-1} 1(\hat{y}_i = y_i),$$ where $1(x)$ is the indicator function and $n$ is the number of samples. This function computes the accuracy, i.e. the percentage of correct predictions, of a classifier. A pure Python implementation is given below. End of explanation """ import numpy as np c = 10 # Number of classes. n = int(1e6) # Number of samples. y_true = np.random.randint(0, c, size=n) y_pred = np.random.randint(0, c, size=n) print('Expected accuracy: {}'.format(1/c)) print('Empirical accuracy: {}'.format(accuracy_python(y_pred, y_true))) %timeit accuracy_python(y_pred, y_true) """ Explanation: Below we test and measure the execution time of the above implementation. 
The %timeit function provided by IPython is a useful helper to measure the execution time of a line of Python code. As we'll see, the above implementation is very inefficient compared to what we can achieve. End of explanation """
import numpy as np

c = 10  # Number of classes.
n = int(1e6)  # Number of samples.

y_true = np.random.randint(0, c, size=n)
y_pred = np.random.randint(0, c, size=n)

print('Expected accuracy: {}'.format(1/c))
print('Empirical accuracy: {}'.format(accuracy_python(y_pred, y_true)))

%timeit accuracy_python(y_pred, y_true)
""" Explanation: 1 Specialized libraries
Specialized libraries, which provide efficient compiled implementations of the heavy computations, are an easy way to solve the performance problem. That is for example NumPy, which uses efficient BLAS and LAPACK implementations as a backend. SciPy and scikit-learn fall in the same category.
Implement below the accuracy function using:
1. Only functions provided by NumPy. The idea here is to vectorize the computation.
2. The implementation of scikit-learn, our machine learning library.
Then test that it provides the correct result and measure its execution time. How much faster are they compared to the pure Python implementation?
End of explanation """
def accuracy_numpy(y_pred, y_true):
    """Numpy implementation."""
    # Your code here.
    return 0

def accuracy_sklearn(y_pred, y_true):
    """Scikit-learn implementation."""
    # Your code here.
    return 0

# assert np.allclose(accuracy_numpy(y_pred, y_true), accuracy_python(y_pred, y_true))
# assert np.allclose(accuracy_sklearn(y_pred, y_true), accuracy_python(y_pred, y_true))

# %timeit accuracy_numpy(y_pred, y_true)
# %timeit accuracy_sklearn(y_pred, y_true)
""" Explanation: 2 Compiled Python
The second option of choice, when the algorithm does not exist in our favorite libraries and we have to implement it, is to implement in Python and compile it to machine code.
Below you'll compile Python with two frameworks. 1. Numba is a just-in-time (JIT) compiler for Python, using the LLVM compiler infrastructure. 1. Cython, which requires type information, transpiles Python to C then compiles the generated C code. While these two approaches offer maximal compatibility with the CPython and NumPy ecosystems, another approach is to use another Python implementation such as PyPy, which features a just-in-time compiler and supports multiple back-ends (C, CLI, JVM). Alternatives are Jython, which runs Python on the Java platform, and IronPython / PythonNet for the .NET platform. End of explanation """ # Your code here. """ Explanation: Evaluate below the performance of those two implementations, while testing their correctness. How do they compare with plain Python and specialized libraries ? End of explanation """ %%file function.c // The content of this cell is written to the "function.c" file in the current directory. double accuracy(long* y_pred, long* y_true, int n) { // Your code here. return 0; } """ Explanation: 3 Using C from Python Here we'll explore our third option to make Python faster: implement in another language ! Below you'll 1. implement the accuracy function in C, 1. compile it, e.g. with the GNU compiler collection (GCC), 1. call it from Python. End of explanation """ %%script sh FILE=function gcc -c -O3 -Wall -std=c11 -pedantic -fPIC -o $FILE.o $FILE.c gcc -o lib$FILE.so -shared $FILE.o file lib$FILE.so """ Explanation: The below cell describe a shell script, which will be executed by IPython as if you typed the commands in your terminal. Those commands are compiling the above C program into a dynamic library with GCC. You can use any other compiler, text editor or IDE to produce the C library. Windows users, you may want to use Microsoft toolchain. 
End of explanation """ import ctypes libfunction = np.ctypeslib.load_library('libfunction', './') libfunction.accuracy.restype = ctypes.c_double libfunction.accuracy.argtypes = [ np.ctypeslib.ndpointer(dtype=np.int), np.ctypeslib.ndpointer(dtype=np.int), ctypes.c_int ] def accuracy_c(y_pred, y_true): n = y_pred.size return libfunction.accuracy(y_pred, y_true, n) """ Explanation: The below cell finally create a wrapper around our C library so that we can easily use it from Python. End of explanation """ # Your code here. """ Explanation: Evaluate below the performance of your C implementation, and test its correctness. How does it compare with the others ? End of explanation """ %%file function.f ! The content of this cell is written to the "function.f" file in the current directory. SUBROUTINE DACCURACY(YPRED, YTRUE, ACC, N) CF2PY INTENT(OUT) :: ACC CF2PY INTENT(HIDE) :: N INTEGER*4 YPRED(N) INTEGER*4 YTRUE(N) DOUBLE PRECISION ACC INTEGER N, NCORRECT ! Your code here. ACC = 0 END """ Explanation: 4 Using Fortran from Python Same idea as before, with Fortran ! Fortran is an imperative programming language developed in the 50s, especially suited to numeric computation and scientific computing. While you probably won't write new code in Fortran, you may have to interface with legacy code (especially in large and old corporations). Here we'll resort to the f2py utility provided by the Numpy project for the (almost automatic) generation of a wrapper. End of explanation """ !f2py -c -m function function.f # >> /dev/null import function def accuracy_fortran(y_pred, y_true): return function.daccuracy(y_pred, y_true) """ Explanation: The below command compile the Fortran code and generate a Python wrapper. End of explanation """ # Your code here. """ Explanation: Evaluate below the performance of your Fortran implementation, and test its correctness. How does it compare with the others ? End of explanation """ # Your code here. 
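Whatever backends you end up with, a small harness like the following can check each implementation against the pure-Python reference. The accuracy_numpy shown here is just one possible vectorized version used as a stand-in, not necessarily the one you wrote for the exercise:

```python
import numpy as np

def accuracy_python(y_pred, y_true):
    """Reference implementation from the top of the exercise."""
    num_correct = 0
    for y_pred_i, y_true_i in zip(y_pred, y_true):
        if y_pred_i == y_true_i:
            num_correct += 1
    return num_correct / len(y_true)

def accuracy_numpy(y_pred, y_true):
    """One possible vectorized version, used as a stand-in here."""
    return np.mean(y_pred == y_true)

y_true = np.random.randint(0, 10, size=1000)
y_pred = np.random.randint(0, 10, size=1000)

# Every implementation should agree with the reference on the same inputs
for impl in (accuracy_numpy,):  # extend with accuracy_numba, accuracy_c, ...
    assert np.allclose(impl(y_pred, y_true), accuracy_python(y_pred, y_true))
```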
""" Explanation: 5 Analysis Plot a graph with n as the x-axis and the execution time of the various methods on the y-axis. End of explanation """
jameslao/Algorithmic-Pearls
0-1-Knapsack.ipynb
mit
def knapsack(v, w, limit, n):
    F = [[0] * (limit + 1) for x in range(n + 1)]
    for i in range(0, n):  # F[-1] is all 0.
        for j in range(limit + 1):
            if j >= w[i]:
                F[i][j] = max(F[i - 1][j], F[i - 1][j - w[i]] + v[i])
            else:
                F[i][j] = F[i - 1][j]
    return F

if __name__ == "__main__":
    with open("0-1-Knapsack/test/knapsack_tiny.txt") as f:
        limit, n = map(int, f.readline().split())
        v, w = zip(*[map(int, ln.split()) for ln in f.readlines()])
    F = knapsack(v, w, limit, n)
    print("Max value:", F[n - 1][limit])

    """ Display selected items"""
    y = limit
    for i in range(n - 1, -1, -1):
        if F[i][y] > F[i - 1][y]:
            print("item: ", i, "value:", v[i], "weight:", w[i])
            y -= w[i]
""" Explanation: The 0-1 Knapsack Problem, Explained
@jameslao / www.jlao.net
What is the knapsack problem?
I wonder if you still remember the story of "The Sun Mountain" that we read as children:
<img scale="0" src="http://www.jlao.net/wp-content/uploads/2015/08/cover.jpg" alt="cover" class="aligncenter size-full wp-image-10219" height="273" width="375">
Once upon a time there were two brothers. The elder was a greedy rich man; the younger was poor.
The younger brother had no land of his own and farmed the elder's fields. He worked in the fields all year round, yet he never had enough to eat or wear, and still could not pay the rent in full.
One day the younger brother went up the mountain to cut firewood. It was nearly dark and he was still sitting on the mountainside, worrying: even with the firewood, the rent still would not be paid in full. Just then a dark wind suddenly blew up and a great bird flew over. The bird landed in front of him and said: "You needn't worry. On the Sun Mountain there is plenty of gold and silver, and precious stones too. I'll carry you up to the Sun Mountain, and you can bring some gold back!"
The younger brother climbed onto the bird's back. The bird told him to close his eyes. The moment he closed them he felt a whoosh, and heard the bird say: "We're here. Look how much gold there is! Go take some, but don't be greedy. This is the Sun Mountain. If we stay here too long and the sun comes back, it will burn you to death, and there will be nothing I can do to save you."
The younger brother agreed as he jumped down from the bird's back. My goodness! Gold, silver, and gems everywhere, glittering so brightly he could hardly keep his eyes open. He thought for a moment, took one small piece of gold, put it in his pocket, and asked the bird to carry him back. The bird asked why he took only one small piece. He said it was enough.
The younger brother climbed back onto the bird's back and closed his eyes. Whoosh, and the bird set him down at his own door. He thanked the bird, and the bird flew away.
With the gold, the younger brother bought land and built a house. In the morning he went out to farm, and in the evening he rested in his newly built house; life was good.
When the elder brother saw that his brother had gold, he grew very envious. He asked where the gold had come from, and the younger brother told him how the great bird had carried him to the Sun Mountain.
The next day, the elder brother copied his brother and went up the mountain to cut firewood. When it was nearly dark he deliberately did not go home, but sat on the mountain pretending to worry. The great bird came flying and agreed to carry him to the Sun Mountain as well.
The bird carried the elder brother to the Sun Mountain and warned him not to be greedy, just as it had warned the younger brother, but he barely grunted in reply. Seeing so much gold, silver, and gems, he hurried to open a big sack and stuff it with the largest pieces of gold.
The elder brother stuffed gold into the big sack for all he was worth, without end. The bird urged him to go back; he said, just a few more pieces. The bird urged him again; he said the same thing. At last the bird said: "Time's up! The sun is coming back this instant!" Only then did he stand up and stagger toward the bird, so weighed down by the big sack that he could hardly walk steadily. After a few steps he bent down again: "Wait a moment. Let me take a few more gems!"
Just at that moment the fiery red sun returned, pouring its blazing sunlight over the Sun Mountain. The bird flew away. The elder brother was burned to death.
If it were you taking the gold, how would you take it? Would you take just one small piece, or keep taking until you could not walk? Is there a way to carry off the most gold possible?
The so-called knapsack problem is this: someone going hiking (or a burglar, if you like) has many things he could bring, and his bag is big enough, but there is an upper bound $W$ on the total weight he can carry (we're all only human). Suppose there are $n$ kinds of items in all, and each item $j$ has a nonnegative weight $w_j$ (no hydrogen balloons allowed) and a nonnegative value $v_j$ (I'll spare you the examples). How should he choose, so that the total value of the items packed in the knapsack is as high as possible?
That is, we want to maximize the objective function
$$ \sum_{j=1}^n v_j x_j. $$
If there is only one of each item, so that each is either taken or left behind, this is the most basic 0-1 knapsack problem. The constraint is
$$ \sum_{j=1}^n w_j x_j \le W, \; x_j \in \{0, 1\}. $$
If there are at most $b_j$ copies of item $j$ available, it is called the multiple knapsack problem or the bounded knapsack problem, with constraint
$$ \sum_{j=1}^n w_j x_j \le W, \; x_j \in \{0, 1, \ldots, b_j\}. $$
And if you can take as many copies as you like, it is called the complete knapsack problem or the unbounded knapsack problem.
Why bother with this problem?
A homebody like me goes backpacking at most a couple of times a year, and I'll never get the chance to pick up gold and gems on the Sun Mountain or to rob a jewelry store. Is this really worth the effort to study?
But suppose you have a sum of money to invest, and there are $n$ possible investment choices; investment $j$ requires $w_j$ of capital and creates $v_j$ of return (let's leave risk aside for now). Then solving this knapsack problem gives you exactly the optimal portfolio with the greatest return. Loading a cargo ship is a similar problem.
Besides these practical applications, the 0-1 knapsack problem also matters because it is the simplest integer programming problem, and it provides an avenue for solving many complex problems.
But we have computers; isn't it quick to just compute it? The most brute-force method checks all possible combinations of the $x_j$ and picks the best one. But there are $2^n$ such combinations; as soon as $n$ grows even slightly, this is already beyond what a computer can solve (you can work out what, say, $2^{64}$ looks like...).
Over the years, people have done a great deal of research on this seemingly simple problem and derived several methods that can solve it fairly efficiently; tens of thousands of items are no longer any trouble. We will introduce them one by one below.
How to crack it: dynamic programming, take one
We now have $n$ items to consider, with total weight at most $W$. Let's think about it backwards: does the optimal knapsack contain item $n$ or not?
If it does not contain item $n$, then the item can simply be dropped, and the optimal solution is the result of packing only the first $n-1$ items into a knapsack of capacity $W$;
If it does contain item $n$, then the item takes up $w_n$ of the knapsack, and the question becomes how to pack the first $n-1$ items into a knapsack of capacity $W - w_n$.
See it? Whichever of these two cases gives the higher total value is the optimal solution we want. Writing the total value as $f_n(W)$, we have
$$ f_n(W) = \max \begin{cases} f_{n-1} (W) \\ f_{n-1} (W - w_n) + v_n \end{cases} $$
This is the state-transition equation we need.
Rather than talk in the abstract, let's take a concrete example. For convenience, the input format is as follows: the two numbers on the first line are the weight limit $W$ and the total number of items $n$; each following line gives an item's value $v_j$ and weight $w_j$. For example, the input file
16 4
30 4
20 5
40 10
10 3
means our knapsack holds a total weight of at most 16, and we have these items:
| Item | Value | Weight |
| --- | --- | --- |
| 1 | 30 | 4 |
| 2 | 20 | 5 |
| 3 | 40 | 10 |
| 4 | 10 | 3 |
Following the reasoning above, the code we write looks like this: End of explanation """
F
""" Explanation: Here we take advantage of the fact that Python allows negative array indices: F[-1] is effectively F[n], which saves some handling of boundary conditions. To display which items were added at the end, we just backtrack over all the items and check at which item F increases; that tells us the item was put into the knapsack. I've also printed F to give you a more concrete feel for it: End of explanation """
def knapsack(v, w, limit, n):
    F = [0] * (limit + 1)
    for i in range(n):
        for j in range(limit, w[i], -1):
            F[j] = max(F[j - w[i]] + v[i], F[j])
    return F
""" Explanation: Here is a slightly bigger example; running it produces:
<pre>Max value: 12248
item: 13 value: 3878 weight: 9656
item: 12 value: 1513 weight: 3926
item: 7 value: 2890 weight: 7280
item: 5 value: 1022 weight: 2744
item: 2 value: 2945 weight: 7390</pre>
Look at those two loops: clearly the algorithm's time complexity is $O(nW)$, and the array F takes $O(nW)$ space as well. Can we do a little better?
From the loop above, the values that really matter are in F[n-1]; we don't much care about the earlier rows. Can we avoid keeping them? What if we refreshed in place, F[i][j] = max(F[i][j], F[i][j - w[i]] + v[i]) ... no good: if a smaller j on the left is refreshed first, a later j - w[i] on the right no longer holds the previous round's value. Then... what if we update from the right instead? No problem at all — and the space complexity drops to $O(W)$! End of explanation """
def knapsack(v, w, limit, n):
    F = [0] * (limit + 1)
    for i in range(n):
        for j in range(limit, w[i] - 1, -1):  # down to w[i] inclusive
            F[j] = max(F[j - w[i]] + v[i], F[j])
    return F
""" Explanation: That, basically, is as far as most books take dynamic programming for the knapsack. But is that really the end? Are we satisfied with $O(nW)$? Think about it — this thing scales with $W$, and $W$ is just a number! Buy a big enough bag some day and the computation takes forever? (Try it if you're curious...)
How to crack it — dynamic programming, take two
I don't know whether any reader tried that big knapsack; I did. On my 2.30GHz i5 5300U, this function took... 1632.07 seconds under cPython — nearly half an hour. Done this way, however big the bag is, that's how many entries we grind through. And the bag's capacity is an arbitrary number — it might even be larger than $2^n$.
Let's chew on this some more. Look at the smallest example again — total weight at most 16, with these items:
| Item | Value | Weight |
| --- | --- | --- |
| 0 | 30 | 4 |
| 1 | 20 | 5 |
| 2 | 40 | 10 |
| 3 | 10 | 3 |
The $F$ we produce looks like this:
[0, 0, 0, 0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]
[0, 0, 0, 0, 30, 30, 30, 30, 30, 50, 50, 50, 50, 50, 50, 50, 50]
[0, 0, 0, 0, 30, 30, 30, 30, 30, 50, 50, 50, 50, 50, 70, 70, 70]
[0, 0, 0, 10, 30, 30, 30, 40, 40, 50, 50, 50, 60, 60, 70, 70, 70]
What strikes you — why on earth so many repeated entries?? Do we really need to keep all these duplicates?
Dynamic programming, at bottom, reduces the case of $n$ objects to subproblems on $n-x$ objects. So how much information does our $F$ actually contain?
Round one: adding item 0, we can achieve value 30 at weight 4.
Round two: adding item 1, we can achieve value 20 at weight 5 — wait. Last round we already achieved value 30 at weight 4; achieving a mere 20 with weight 5 is completely pointless! That's why it never shows up in $F$. But taking items 0 and 1 together achieves value 50 at weight 9 — that is new information.
Round three: adding item 2, by the same reasoning, only the point where weight reaches 14 and value reaches 70 is new information.
Round four: with just weight 3 we achieve value 10, where before weight 3 achieved nothing at all. That, too, is new information.
See the pattern? We only need to keep the informative part — the points where some value is achieved at the least possible weight. If there are two states $(V_1, W_1)$ and $(V_2, W_2)$ with $W_2 > W_1$ but $V_2 \leq V_1$ — state 2 is heavier yet no more valuable — then state 2 is dominated and need not be considered further. If all such dominated states are discarded, the amount of data to process (usually) shrinks drastically.
The principle is clear; how to implement it? We can represent value and weight as tuples and keep them in order. With the example above:
At the start we have nothing — just the single tuple (0, 0).
We try adding item 0, getting (30, 4). Merged with the initial tuple, this becomes (0, 0), (30, 4).
We then try adding item 1 to each case, getting (20, 5), (50, 9). Merging with the previous round, (20, 5) is dominated and dropped. The merged result is (0, 0), (30, 4), (50, 9).
Continuing, adding item 2 gives (40, 10), (70, 14), (90, 19). The last one exceeds the limit and is dropped; merging with the previous round gives (0, 0), (30, 4), (50, 9), (70, 14).
One final round yields the final result (0, 0), (10, 3), (30, 4), (40, 7), (50, 9), (60, 12), (70, 14).
<img src="http://www.jlao.net/wp-content/uploads/2015/08/deque.png" width="600" height="307" />
For convenient merging, my implementation uses Python's double-ended queue, collections.deque. It is not only pleasant to write but, in my tests, even somewhat faster than plain lists with indices (reason unknown): End of explanation """
from collections import deque
import sys

INF = float("inf")

def knapsack(vw, limit, n):
    vw = sorted(vw, key=lambda x : x[1], reverse=True)  # Accelerate
    A = deque([(0, 0)])
    for i in range(0, n):
        B = deque()
        # find all possibilities after adding one new item
        for item in A:
            if item[1] + vw[i][1] > limit:  # A is sorted
                break
            B.append((item[0] + vw[i][0], item[1] + vw[i][1]))
        level, merge = -1, deque()  # the bar keeps going up
        while A or B:  # merging the two queues
            ia, ib = A[0][1] if A else INF, B[0][1] if B else INF
            x = A.popleft() if (ia < ib) else B.popleft()
            if x[0] > level:
                merge.append(x)
                level = x[0]
        A = merge
    return A[-1]

if __name__ == "__main__":
    with open("0-1-Knapsack/test/knapsack_big.txt") as f:
        limit, n = map(int, f.readline().split())
        vw = [tuple(map(int, ln.split())) for ln in f.readlines()]

    A = knapsack(vw, limit, n)
    print("Max value:", A[0], "Total weight:", A[1])
""" Explanation: With this, the big knapsack is a breeze — about half a second on my machine; a genuine instant kill! The time complexity of this algorithm is $O(ns)$, where $s$ is the number of independent states listed above. That count can exceed neither $W$ nor $2^n$; for items with reasonably uniform distributions, the method works quite well.
You may have noticed the curious sort on line 8... the algorithm itself doesn't require any particular input order, but I found that sorting the data by weight in descending order runs about four times faster than random order. I tried other orderings too — by value-to-weight ratio, for instance — but on this particular example none worked as well.
Still... what about an even bigger bag? I ran one: cPython needed 677.3 seconds... PyPy, faster, took about 65 seconds.
The thing is, we don't have to hang ourselves on the one tree of dynamic programming...
How to crack it — branch and bound
Let's set the dynamic-programming idea aside and step back. Each item can be taken or left; drawn as a decision tree, that gives a full binary tree:
<img src="http://www.jlao.net/wp-content/uploads/2015/08/fulltree.png" alt="fulltree" width="564" height="178" class="aligncenter size-full wp-image-10186" />
For $n$ items there are $2^n$ packings, and $2^{n+1} - 2$ nodes to evaluate in total — far too many, of course. Is there a way to visit fewer nodes?
At least two prunings are immediately obvious:
Overweight nodes — once a node exceeds the weight limit, nothing added on top of it is worth looking at.
The crossed-out right-hand nodes on the bottom level — their result is necessarily the same as the level above, so there's no need to compute them again.
Anything else? For instance, some nodes carry a lot of weight but little value; no matter what gets added, they can't compete. Can we identify such "hopeless" nodes as early as possible?
For a fixed set of items, the value a bag can carry away has an upper bound. The bag's size is fixed, so to maximize total value we want to maximize value per unit weight. How? Sort all items by value-to-weight ratio ($v_j / w_j$) in descending order; pack the light, valuable things greedily first, and leave the heavy, worthless stuff for last.
What if the bag isn't full yet but the next item no longer fits? Then pretend that item can be cut open, and top the bag off. The total value obtained this way is an upper bound, because everything with the highest per-unit value has already been stuffed in.
Considering items in this order: suppose a node has settled the first $i$ items, with total value $V_0$ and total weight $W_0$. We pack greedily by unit value until item $k$ no longer fits, at which point the total weight in the bag is
$$ \tilde{W} = W_0 + \sum_{j=i+1}^{k-1} w_j $$
and the upper bound on total value is
$$ V_{\rm bound} = \left(V_0 + \sum_{j=i+1}^{k-1} v_j \right) + \big(W - \tilde{W}\big)\cdot \frac{v_k}{w_k} $$
where the first term is the total value of the items packed, and the second is the value from cutting item $k$ to fill the bag.
Writing $V_{\max}$ for the best value seen so far, any node with $V_{\rm bound} \leq V_{\max}$ can be pruned outright.
Written as a program, it can look like this: End of explanation """
import sys

def knapsack(vw, limit):
    def bound(v, w, j):
        if j >= len(vw) or w > limit:
            return -1
        else:
            while j < len(vw) and w + vw[j][1] <= limit:
                v, w, j = v + vw[j][0], w + vw[j][1], j + 1
            if j < len(vw):
                v += (limit - w) * vw[j][0] / (vw[j][1] * 1.0)
            return v

    def traverse(v, w, j):
        nonlocal maxValue
        if bound(v, w, j) >= maxValue:  # promising
            if w + vw[j][1] <= limit:  # w/ j
                maxValue = max(maxValue, v + vw[j][0])
                traverse(v + vw[j][0], w + vw[j][1], j + 1)
            if j < len(vw) - 1:  # w/o j
                traverse(v, w, j + 1)
        return

    maxValue = 0
    traverse(0, 0, 0)
    return maxValue

if __name__ == "__main__":
    with open("0-1-Knapsack/test/knapsack_small.txt") as f:
        limit, n = map(int, f.readline().split())
        vw = []  # value, weight, value density
        for ln in f.readlines():
            vl, wl = tuple(map(int, ln.split()))
            vw.append([vl, wl, vl / (wl
* 1.0)])
    print(knapsack(sorted(vw, key=lambda x : x[2], reverse=True), limit))
""" Explanation: A quick walkthrough: first sort <code>vw</code> by value per unit weight, then use the <code>bound</code> function to compute the value upper bound. Only when the bound beats the best value seen so far do we evaluate the two cases — taking the current item and skipping it; otherwise the node is skipped.
Take the small example from before again; its execution can be pictured like this:
<img src="0-1-Knapsack/img/knapsack_tiny.png" alt="knapsack_tiny" width="245" height="281" class="aligncenter size-full wp-image-10196" />
This is plainly a depth-first search. Each node in the figure lists its current value, current weight, and value bound; red nodes are overweight, yellow nodes have an insufficient bound, and the bold circle is the optimum. Quite a bit smaller than the full tree above, right?
With a bigger bag the effect is even clearer — for this <a href="0-1-Knapsack/test/knapsack_15.txt">15-item knapsack</a> the search looks like this (click to enlarge):
<img src="0-1-Knapsack/img/knapsack_15.png" alt="knapsack_15" width="641" height="630" class="aligncenter size-full wp-image-10200" />
Look how many red and yellow circles we pruned! Promising — let's try those big knapsacks from earlier... wait, what's this...
<pre class="toolbar:2 striped:false nums:false nums-toggle:false lang:default decode:true " >RuntimeError: maximum recursion depth exceeded while calling a Python object</pre>
Fine — the recursion is too deep... we can raise the recursion depth:
```python
import threading
...
def main():
    ...
maxValue = max(maxValue, v + vw[j][0]) stack.append([v + vw[j][0], w + vw[j][1], j + 1]) return maxValue if __name__ == "__main__": with open("0-1-Knapsack/test/knapsack_extra.txt") as f: limit, n = map(int, f.readline().split()) vw = [] # value, weight, value density for ln in f.readlines(): vl, wl = tuple(map(int, ln.split())) vw.append([vl, wl, vl / (wl * 1.0)]) %time print(knapsack(sorted(vw, key=lambda x : x[2], reverse=True), limit)) """ Explanation: 简单解释一下,先对 <code>vw</code> 按照单位重量的价值排序,然后利用 <code>bound</code> 函数确定价值上限。如果价值上限超过了已经出现的最大价值,再分别计算加上当前物品和不加当前物品的两种情况,否则就跳过。 我们还拿前面的那个小例子来看,它执行过程可以用下面这个图来表示: <img src="0-1-Knapsack/img/knapsack_tiny.png" alt="knapsack_tiny" width="245" height="281" class="aligncenter size-full wp-image-10196" /> 显然这是一个深度优先搜索,图中每个结点分别列出了它的当前价值、当前重量和价值上限,红色结点表示重量超限,黄色结点表示价值上限不足,加粗的圈是最优解。这个树看起来比上面那个小多了吧! 如果包包大一点,效果就更明显了,比如这个<a href="0-1-Knapsack/test/knapsack_15.txt">有 15 个物品的包包</a>,它的搜索过程是这样的(点击看大图): <img src="0-1-Knapsack/img/knapsack_15.png" alt="knapsack_15" width="641" height="630" class="aligncenter size-full wp-image-10200" /> 看看我们剪掉了多少红圈黄圈!看起来很不错,让我们来试试前面的那些大包包…… 诶怎么回事…… <pre class="toolbar:2 striped:false nums:false nums-toggle:false lang:default decode:true " >RuntimeError: maximum recursion depth exceeded while calling a Python object</pre> 好吧递归太深了……我们可以加大递归深度: ```python import thread ... def main(): ... 
if name == "main": threading.stack_size(67108864) # 64MB stack sys.setrecursionlimit(20000) thread = threading.Thread(target=main) thread.start() ``` 在 Windows 下面递归深度除了受 recursion limit 约束,还受栈空间限制,所以这里用 <code>thread</code> 来设定栈空间,而它只对新建的线程有效。如果不扩大栈空间,2000 个项目还可以,10000 个就不行了。 当然,所有的递归都是可以解开的,为什么非要用递归呢!直接把后面待处理的结点压进栈里不就行了吗,当然要注意压栈的顺序变了: End of explanation """ from heapq import * def knapsack(vw, limit): def bound(v, w, j): if j >= len(vw) or w > limit: return -1 else: while j < len(vw) and w + vw[j][1] <= limit: v, w, j = v + vw[j][0], w + vw[j][1], j + 1 if j < len(vw): v += (limit - w) * vw[j][0] / (vw[j][1] * 1.0) return v maxValue = 0 PQ = [[-bound(0, 0, 0), 0, 0, 0]] # -bound to keep maxheap while PQ: b, v, w, j = heappop(PQ) if b <= -maxValue: # promising if w + vw[j][1] <= limit: maxValue = max(maxValue, v + vw[j][0]) heappush(PQ, [-bound(v + vw[j][0], w + vw[j][1], j + 1), v + vw[j][0], w + vw[j][1], j + 1]) if j < len(vw) - 1: heappush(PQ, [-bound(v, w, j + 1), v, w, j + 1]) return maxValue if __name__ == "__main__": # with open(sys.argv[1] if len(sys.argv) > 1 else sys.exit(1)) as f: with open("0-1-Knapsack/test/knapsack_extra.txt") as f: limit, n = map(int, f.readline().split()) vw = [] # value, weight, value density for ln in f.readlines(): vl, wl = tuple(map(int, ln.split())) vw.append([vl, wl, vl / (wl * 1.0)]) %time print(knapsack(sorted(vw, key=lambda x : x[2], reverse=True), limit)) """ Explanation: 那个<a href="0-1-Knapsack/test/knapsack_1000000_10000.txt">有 10000 个元素的大包包</a>也完全不在话下,瞬间就可以解开啦! 再接再厉 前面的方法已经非常快了,应该说很满意。那还有没有再优化的余地呢? 
Let's look at the process again carefully: a depth-first search with level-by-level backtracking — the classic binary-tree traversal. For clarity, I've numbered the nodes in visiting order:
<img class="aligncenter size-full wp-image-10213" src="0-1-Knapsack/img/knapsack_tiny_arrow.png" alt="knapsack_tiny_arrow" width="244" height="284" />
<!--more-->
Everything looks fine. But look a bit closer — hmm, there does seem to be something to chew on. For instance, by the time we traverse the second item — the node worth \$50 at weight 6 with a value bound of \$60 — the node beside it, worth \$30, has a value bound of \$76.7. In other words, the node we are visiting is actually less "promising" than the other one. And the final result bears this out: following the \$60-bound node turned out to be wasted work, while the global maximum sits exactly under the \$76.7-bound node.
That suggests an idea — isn't the node with the largest value bound more likely to hold the global maximum? Even when it isn't, a larger bound is more likely to reach a high value sooner, raising <code>maxValue</code> faster and pruning more hopeless branches.
How to implement it? Right now the next node to visit is popped from a stack, so the pop order depends on the push order. How can we make the pop order follow the value bound — largest bound, highest priority? Aha — a <a href="https://zh.wikipedia.org/wiki/%E5%84%AA%E5%85%88%E4%BD%87%E5%88%97">priority queue</a>! In Python nothing could be easier, since it provides <code>heapq</code>, a <a href="https://zh.wikipedia.org/wiki/%E4%BA%8C%E5%8F%89%E5%A0%86">binary heap</a>. The heap is a min-heap by default — the smallest element sits on top. How do we turn it into a max-heap? Just negate the elements. The implementation needs only small changes to the earlier program: End of explanation """
from heapq import *

def knapsack(vw, limit):
    def bound(v, w, j):
        if j >= len(vw) or w > limit:
            return -1
        else:
            while j < len(vw) and w + vw[j][1] <= limit:
                v, w, j = v + vw[j][0], w + vw[j][1], j + 1
            if j < len(vw):
                v += (limit - w) * vw[j][0] / (vw[j][1] * 1.0)
            return v

    maxValue = 0
    PQ = [[-bound(0, 0, 0), 0, 0, 0]]  # -bound to keep maxheap
    while PQ:
        b, v, w, j = heappop(PQ)
        if b <= -maxValue:  # promising
            if w + vw[j][1] <= limit:
                maxValue = max(maxValue, v + vw[j][0])
                heappush(PQ, [-bound(v + vw[j][0], w + vw[j][1], j + 1),
                              v + vw[j][0], w + vw[j][1], j + 1])
            if j < len(vw) - 1:
                heappush(PQ, [-bound(v, w, j + 1), v, w, j + 1])
    return maxValue

if __name__ == "__main__":
    # with open(sys.argv[1] if len(sys.argv) > 1 else sys.exit(1)) as f:
    with open("0-1-Knapsack/test/knapsack_extra.txt") as f:
        limit, n = map(int, f.readline().split())
        vw = []  # value, weight, value density
        for ln in f.readlines():
            vl, wl = tuple(map(int, ln.split()))
            vw.append([vl, wl, vl / (wl * 1.0)])

    %time print(knapsack(sorted(vw, key=lambda x : x[2], reverse=True), limit))
""" Explanation: 
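The bounded and unbounded variants formulated at the top of this post never got code of their own. As a coda, here is a minimal sketch of the unbounded (complete) knapsack — my own illustration, not part of the original post. It reuses the same one-dimensional array idea, but iterates the capacity upward so that each item may be taken any number of times:

```python
def unbounded_knapsack(v, w, limit):
    # F[j] = best value achievable with capacity j, items reusable
    F = [0] * (limit + 1)
    for i in range(len(v)):
        # ascending j lets item i contribute more than once
        for j in range(w[i], limit + 1):
            F[j] = max(F[j], F[j - w[i]] + v[i])
    return F[limit]

# the tiny example from the post: capacity 16
print(unbounded_knapsack([30, 20, 40, 10], [4, 5, 10, 3], 16))  # -> 120 (four copies of item 0)
```

Flipping the inner loop's direction is the only difference from the 0-1 version above: descending order forbids reuse, ascending order allows it.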
dborgesr/Euplotid
pipelines/fq2HiCInts.ipynb
gpl-3.0
annotation="/input_dir/mm9" tmp="/input_dir/" input_dir="/input_dir/" output_dir="/output_dir/" input_fq_1="HiC_mesc_1_1M.fq.gz" input_fq_2="HiC_mesc_2_1M.fq.gz" sample_name="test" bin_size="10000" """ Explanation: Call DNA-DNA interactions using raw HiC data Install instructions for HiCPro required after first image pull cd /root/HiC-Pro/ source activate py27 R install.packages("ggplot2") install.packages("RColorBrewer") make configure make install Take fastq in and spit out chilled normalized Hi-C matrix Define data folders, data, and sample name End of explanation """ !mkdir $output_dir"/rawdata/mesc_test" !cp $output_dir"$input_fq_1" $output_dir"/rawdata/mesc_test"test_1_1M.fq.gz !cp $output_dir"$input_fq_2" $output_dir"/rawdata/mesc_test"test_2_1M.fq.gz """ Explanation: Set up directories for HiCPro End of explanation """ %%writefile /root/HiC-Pro_2.8.1_devel/config-hicpro_mesc.txt # %load /root/HiC-Pro_2.8.1_devel/config-hicpro.txt # Please change the variable settings below if necessary ######################################################################### ## Paths and Settings - Do not edit ! ######################################################################### TMP_DIR = $tmp LOGS_DIR = logs BOWTIE2_OUTPUT_DIR = bowtie_results MAPC_OUTPUT = hic_results RAW_DIR = rawdata ####################################################################### ## SYSTEM AND SCHEDULER - Start Editing Here !! 
####################################################################### N_CPU = 2 LOGFILE = hicpro.log JOB_NAME = JOB_MEM = JOB_WALLTIME = JOB_QUEUE = JOB_MAIL = ######################################################################### ## Data ######################################################################### PAIR1_EXT = _1 PAIR2_EXT = _2 ####################################################################### ## Alignment options ####################################################################### FORMAT = phred33 MIN_MAPQ = 0 BOWTIE2_IDX_PATH = BOWTIE2_GLOBAL_OPTIONS = --very-sensitive -L 30 --score-min L,-0.6,-0.2 --end-to-end --reorder BOWTIE2_LOCAL_OPTIONS = --very-sensitive -L 20 --score-min L,-0.6,-0.2 --end-to-end --reorder ####################################################################### ## Annotation files ####################################################################### REFERENCE_GENOME = mm9 GENOME_SIZE = chrom_hg19.sizes CAPTURE_TARGET = ####################################################################### ## Allele specific analysis ####################################################################### ALLELE_SPECIFIC_SNP = ####################################################################### ## Digestion Hi-C ####################################################################### GENOME_FRAGMENT = HindIII_resfrag_mm9.bed LIGATION_SITE = AAGCTAGCTT MIN_FRAG_SIZE = MAX_FRAG_SIZE = MIN_INSERT_SIZE = MAX_INSERT_SIZE = ####################################################################### ## Hi-C processing ####################################################################### MIN_CIS_DIST = GET_ALL_INTERACTION_CLASSES = 1 GET_PROCESS_SAM = 0 RM_SINGLETON = 1 RM_MULTI = 1 RM_DUP = 1 ####################################################################### ## Contact Maps ####################################################################### BIN_SIZE = 20000 40000 150000 500000 1000000 MATRIX_FORMAT = upper 
####################################################################### ## Normalization ####################################################################### MAX_ITER = 100 FILTER_LOW_COUNT_PERC = 0.02 FILTER_HIGH_COUNT_PERC = 0 EPS = 0.1 """ Explanation: Edit and save HiCPro config file using Jupyter magic End of explanation """ !/root/HiC-Pro_2.8.1_devel/bin/HiC-Pro \ -i $output_dir -c /root/HiC-Pro_2.8.1_devel/config-hicpro_mesc.txt \ -s mapping -s proc_hic -s quality_checks -s merge_persample -s build_contact_maps -s ice_norm \ -o $output_dir """ Explanation: Run HiCPro through all steps End of explanation """
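The run above leaves contact maps under hic_results as sparse triplet files (bin_i, bin_j, count). As a rough sketch of how such a file could be read back for downstream work — the triplet layout is an assumption based on HiC-Pro's documented sparse format, and this helper is my illustration, not part of the pipeline:

```python
import csv

def read_sparse_matrix(path):
    # Each line: bin_i <tab> bin_j <tab> count
    contacts = {}
    with open(path) as f:
        for row in csv.reader(f, delimiter="\t"):
            i, j, count = int(row[0]), int(row[1]), float(row[2])
            contacts[(i, j)] = count
    return contacts
```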
tensorflow/docs-l10n
site/en-snapshot/tensorboard/text_summaries.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2021 The TensorFlow Authors. End of explanation """ try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass # Load the TensorBoard notebook extension. %load_ext tensorboard import tensorflow as tf from datetime import datetime import json from packaging import version import tempfile print("TensorFlow version: ", tf.__version__) assert version.parse(tf.__version__).release[0] >= 2, \ "This notebook requires TensorFlow 2.0 or above." 
""" Explanation: Displaying text data in TensorBoard <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tensorboard/text_summaries"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/text_summaries.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/text_summaries.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorboard/docs/text_summaries.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview Using the TensorFlow Text Summary API, you can easily log arbitrary text and view it in TensorBoard. This can be extremely helpful to sample and examine your input data, or to record execution metadata or generated text. You can also log diagnostic data as text that can be helpful in the course of your model development. In this tutorial, you will try out some basic use cases of the Text Summary API. Setup End of explanation """ my_text = "Hello world! 😃" # Clear out any prior log data. !rm -rf logs # Sets up a timestamped log directory. logdir = "logs/text_basics/" + datetime.now().strftime("%Y%m%d-%H%M%S") # Creates a file writer for the log directory. file_writer = tf.summary.create_file_writer(logdir) # Using the file writer, log the text. with file_writer.as_default(): tf.summary.text("first_text", my_text, step=0) """ Explanation: Logging a single piece of text To understand how the Text Summary API works, you're going to simply log a bit of text and see how it is presented in TensorBoard. 
End of explanation """ %tensorboard --logdir logs """ Explanation: Now, use TensorBoard to examine the text. Wait a few seconds for the UI to spin up. End of explanation """ # Sets up a second directory to not overwrite the first one. logdir = "logs/multiple_texts/" + datetime.now().strftime("%Y%m%d-%H%M%S") # Creates a file writer for the log directory. file_writer = tf.summary.create_file_writer(logdir) # Using the file writer, log the text. with file_writer.as_default(): with tf.name_scope("name_scope_1"): for step in range(20): tf.summary.text("a_stream_of_text", f"Hello from step {step}", step=step) tf.summary.text("another_stream_of_text", f"This can be kept separate {step}", step=step) with tf.name_scope("name_scope_2"): tf.summary.text("just_from_step_0", "This is an important announcement from step 0", step=0) %tensorboard --logdir logs/multiple_texts --samples_per_plugin 'text=5' """ Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/text_simple.png?raw=1"/> --> Organizing multiple text streams If you have multiple streams of text, you can keep them in separate namespaces to help organize them, just like scalars or other data. Note that if you log text at many steps, TensorBoard will subsample the steps to display so as to make the presentation manageable. You can control the sampling rate using the --samples_per_plugin flag. End of explanation """ # Sets up a third timestamped log directory under "logs" logdir = "logs/markdown/" + datetime.now().strftime("%Y%m%d-%H%M%S") # Creates a file writer for the log directory. 
file_writer = tf.summary.create_file_writer(logdir) some_obj_worth_noting = { "tfds_training_data": { "name": "mnist", "split": "train", "shuffle_files": "True", }, "keras_optimizer": { "name": "Adagrad", "learning_rate": "0.001", "epsilon": 1e-07, }, "hardware": "Cloud TPU", } # TODO: Update this example when TensorBoard is released with # https://github.com/tensorflow/tensorboard/pull/4585 # which supports fenced codeblocks in Markdown. def pretty_json(hp): json_hp = json.dumps(hp, indent=2) return "".join("\t" + line for line in json_hp.splitlines(True)) markdown_text = """ ### Markdown Text TensorBoard supports basic markdown syntax, including: preformatted code **bold text** | and | tables | | ---- | ---------- | | among | others | """ with file_writer.as_default(): tf.summary.text("run_params", pretty_json(some_obj_worth_noting), step=0) tf.summary.text("markdown_jubiliee", markdown_text, step=0) %tensorboard --logdir logs/markdown """ Explanation: Markdown interpretation TensorBoard interprets text summaries as Markdown, since rich formatting can make the data you log easier to read and understand, as shown below. (If you don't want Markdown interpretation, see this issue for workarounds to suppress interpretation.) End of explanation """
pycrystem/pycrystem
doc/demos/08 Pair Distribution Function Analysis.ipynb
gpl-3.0
%matplotlib inline import hyperspy.api as hs import pyxem as pxm import numpy as np """ Explanation: PDF Analysis Tutorial Introduction This tutorial demonstrates how to acquire a multidimensional pair distribution function (PDF) from both a flat field electron diffraction pattern and a scanning electron diffraction data set. The data is from an open-source paper by Shanmugam et al. [1] that is used as a reference standard. It is an Amorphous 18nm SiO2 film. The scanning electron diffraction data set is a scan of a polycrystalline gold reference standard with 128x128 real space pixels and 256x256 diffraction space pixels. The implementation also initially followed Shanmugam et al. [1] Shanmugam, J., Borisenko, K. B., Chou, Y. J., & Kirkland, A. I. (2017). eRDF Analyser: An interactive GUI for electron reduced density function analysis. SoftwareX, 6, 185-192. This functionality has been checked to run in pyxem-0.13.0 (March 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues Contents <a href='#loa'> Loading & Inspection</a> <a href='#rad'> Acquiring a radial profile</a> <a href='#ri'> Acquiring a Reduced Intensity</a> <a href='#dri'> Damping the Reduced Intensity</a> <a href='#pdf'> Acquiring a PDF</a> Import pyXem and other required libraries End of explanation """ rp = hs.load('./data/08/amorphousSiO2.hspy') rp.set_signal_type('electron_diffraction') """ Explanation: <a id='loa'></a> 1. Loading and Inspection Load the diffraction data line profile End of explanation """ rp = pxm.signals.ElectronDiffraction1D([[rp.data]]) """ Explanation: For now, the code requires navigation dimensions in the reduced intensity signal, two size 1 ones are created. End of explanation """ calibration = 0.00167 rp.set_diffraction_calibration(calibration=calibration) """ Explanation: Set the diffraction pattern calibration. 
Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$. End of explanation """
rp.plot()
""" Explanation: Plot the radial profile End of explanation """
rigen = pxm.generators.ReducedIntensityGenerator1D(rp)
""" Explanation: <a id='ri'></a> 2. Acquiring a Reduced Intensity
Acquire a reduced intensity (also called a structure factor) from the radial profile. The structure factor is what will subsequently be transformed into a PDF through a fourier transform.
The structure factor $\phi(s)$ is acquired by fitting a background scattering factor to the data, and then transforming the data by:
$$\phi(s) = \frac{I(s) - N\Delta c_{i}f_{i}^{2}}{N\Delta c_{i}^{2}f_{i}^{2}}$$
where s is the scattering vector, $c_{i}$ and $f_{i}$ the atomic fraction and scattering factor respectively of each element in the sample, and N is a fitted parameter to the intensity.
To acquire the reduced intensity, we first initialise a ReducedIntensityGenerator1D object. End of explanation """
elements = ['Si','O']
fracs = [0.333,0.667]
""" Explanation: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions. End of explanation """
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
""" Explanation: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
[2] Lobato, I., & Van Dyck, D. (2014). An accurate parameterization for scattering factors, electron densities and electrostatic potentials for neutral atoms that obey all physical constraints. Acta Crystallographica Section A: Foundations and Advances, 70(6), 636-649.
End of explanation """ rigen.set_s_cutoff(s_min=1.5,s_max=4) rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine') """ Explanation: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we specify to fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend to not include inelastic scattering, which means the factors are rarely perfect fits. End of explanation """ ri = rigen.get_reduced_intensity() ri.plot() """ Explanation: That's clearly much much better. Always inspect your fit. Finally, we calculate the reduced intensity itself. End of explanation """ ri.damp_exponential(b=0.1) ri.plot() ri.damp_lorch(s_max=4) ri.plot() """ Explanation: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. This generally speaking solves the issue. <a id='dri'></a> 4. Damping the Reduced Intensity The reduced intensity acquired above does not go to zero at high s as it should because the maximum acquired scattering vector is not very high. This would result in significant oscillation in the PDF due to a discontinuity in the fourier transformed data. To combat this, the reduced intensity is damped. In the X-ray community a common damping functions are the Lorch function and an exponential damping function. Both are supported here. It is worth noting that damping does reduce the resolution in r in the PDF. End of explanation """ ri.damp_low_q_region_erfc(offset=4) ri.plot() """ Explanation: Additionally, it is recommended to damp the low s regime. 
We use an error function to do that End of explanation """
ri = rigen.get_reduced_intensity()
""" Explanation: If the function ends up overdamped, you can simply reacquire the reduced intensity using: End of explanation """
pdfgen = pxm.generators.PDFGenerator1D(ri)
""" Explanation: <a id='pdf'></a> 5. Acquiring a PDF
Finally, a PDF is acquired from the damped reduced intensity. This is done by a fourier sine transform. To ignore parts of the scattering data that are too noisy, you can set a minimum and maximum scattering angle for the transform.
First, we initialise a PDFGenerator1D object. End of explanation """
s_min = 0.
s_max = 4.
""" Explanation: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp. Otherwise the Lorch function damping can cause artifacts in the PDF. End of explanation """
pdf = pdfgen.get_pdf(s_min=s_min, s_max=s_max, r_max=10)
pdf.plot()
""" Explanation: Finally we get the PDF. r_max specifies the maximum real space distance we want to interpret. End of explanation """
pdf.save('Demo-PDF.hspy')
""" Explanation: The PDF can then be saved. End of explanation """
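Under the hood, the transform in get_pdf is a numerical sine transform of the damped $\phi(s)$. A schematic stand-alone version in plain NumPy — the $8\pi$ prefactor and the $s\,\phi(s)$ integrand follow the eRDF convention cited in the introduction; treat this as an illustration of the idea, not as pyxem's actual implementation:

```python
import numpy as np

def sine_transform(phi, s, r_values):
    # G(r) ~ 8*pi * integral of s * phi(s) * sin(2*pi*s*r) ds,
    # approximated on a uniform s grid by a Riemann sum
    ds = s[1] - s[0]
    return np.array([8 * np.pi * np.sum(s * phi * np.sin(2 * np.pi * s * r)) * ds
                     for r in r_values])
```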
datosgobar/pydatajson
samples/caso-uso-2-pydatajson-xlsx-justicia-no-valido.ipynb
mit
import arrow
import os, sys
sys.path.insert(0, os.path.abspath(".."))
from pydatajson import DataJson  # library and class
from pydatajson.readers import read_catalog  # reads the catalog (json or xlsx, local or URL, or a dict) and turns it into a Python dictionary
from pydatajson.writers import write_json_catalog
""" Explanation: Use case 2 - Validation, transformation and harvesting with the Ministry of Justice catalog
Case 2: a catalog that is invalid because the "title" fields of the dataset class are incomplete.
This test runs the full validation, transformation and harvesting process from an xlsx file containing the metadata of the Ministry of Justice catalog.
Note: this is a known, originally valid catalog (in both structure and metadata) from which the values of the "dataset_title" property have been deleted.
File used: catalogo-justicia-con-error-datasets.xlsx
Setup
Importing methods and classes End of explanation """
# fill in as appropriate
ORGANISMO = 'justicia'
catalogo_xlsx = os.path.join("archivos-tests", "excel-no-validos", "catalogo-justicia-con-error-datasets.xlsx")

# DO NOT MODIFY
# Create the required directory structure if it does not already exist
if not os.path.isdir("archivos-generados"):
    os.mkdir("archivos-generados")
for directorio in ["jsons", "reportes", "configuracion"]:
    path = os.path.join("archivos-generados", directorio)
    if not os.path.isdir(path):
        os.mkdir(path)

# Declare some variables of interest
HOY = arrow.now().format('YYYY-MM-DD-HH_mm')
catalogo_a_json = os.path.join("archivos-generados", "jsons", "catalogo-{}-{}.json".format(ORGANISMO, HOY))
reporte_datasets = os.path.join("archivos-generados", "reportes", "reporte-catalogo-{}-{}.xlsx".format(ORGANISMO, HOY))
archivo_config_sin_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config_-{}-{}-sinr.csv".format(ORGANISMO, HOY))
archivo_config_con_reporte = os.path.join("archivos-generados", "configuracion", "archivo-config-{}-{}-conr.csv".format(ORGANISMO, HOY))
""" Explanation: Declaring variables and paths End of explanation """
catalogo = read_catalog(catalogo_xlsx)
""" Explanation: Validating the xlsx file and transforming it to json
Validating the catalog in xlsx End of explanation """
write_json_catalog(catalogo, catalogo_a_json)  # write_json_catalog(catalog, target_file) writes a dict to a json file
""" Explanation: Transforming the catalog from xlsx to json End of explanation """
dj = DataJson()
""" Explanation: Validating the json catalog and harvesting
Validating the catalog in json
Instantiating the DataJson class End of explanation """
dj.is_valid_catalog(catalogo)  # we get False
""" Explanation: True/False validation of the json catalog End of explanation """
dj.validate_catalog(catalogo)  # the error message states that "title" is a required property
""" Explanation: Detailed validation of the json catalog End of explanation """
mne-tools/mne-tools.github.io
0.12/_downloads/plot_sensor_regression.ipynb
bsd-3-clause
# Authors: Tal Linzen <linzen@nyu.edu> # Denis A. Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.stats.regression import linear_regression print(__doc__) data_path = sample.data_path() """ Explanation: Sensor space least squares regression Predict single trial activity from a continuous variable. A single-trial regression is performed in each sensor and timepoint individually, resulting in an Evoked object which contains the regression coefficient (beta value) for each combination of sensor and timepoint. Example also shows the T statistics and the associated p-values. Note that this example is for educational purposes and that the data used here do not contain any significant effect. (See Hauk et al. (2006). The time course of visual word recognition as revealed by linear regression analysis of ERP data. Neuroimage.) End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, aud_r=2) # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname) events = mne.read_events(event_fname) picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False, eog=False, exclude='bads') # Reject some epochs based on amplitude reject = dict(mag=5e-12) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=(None, 0), preload=True, reject=reject) """ Explanation: Set parameters and read data End of explanation """ names = ['intercept', 'trial-count'] intercept = np.ones((len(epochs),), dtype=np.float) design_matrix = np.column_stack([intercept, # intercept np.linspace(0, 1, len(intercept))]) # also accepts source estimates lm = linear_regression(epochs, design_matrix, names) def plot_topomap(x, unit): x.plot_topomap(ch_type='mag', scale=1, size=1.5, vmax=np.max, unit=unit, times=np.linspace(0.1, 
0.2, 5)) trial_count = lm['trial-count'] plot_topomap(trial_count.beta, unit='z (beta)') plot_topomap(trial_count.t_val, unit='t') plot_topomap(trial_count.mlog10_p_val, unit='-log10 p') plot_topomap(trial_count.stderr, unit='z (error)') """ Explanation: Run regression End of explanation """
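The per-sensor, per-timepoint least squares that linear_regression performs can be sketched in plain NumPy — the shapes and the helper's name here are illustrative, not mne's internals:

```python
import numpy as np

def single_trial_betas(data, design):
    # data: (n_epochs, n_sensors, n_times); design: (n_epochs, n_predictors)
    n_epochs, n_sensors, n_times = data.shape
    y = data.reshape(n_epochs, -1)  # one column per sensor-timepoint pair
    betas, *_ = np.linalg.lstsq(design, y, rcond=None)
    return betas.reshape(design.shape[1], n_sensors, n_times)
```

Each column of y is an independent ordinary-least-squares problem sharing the same design matrix, which is why a single lstsq call covers every sensor and timepoint at once.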
import sqlite3  # import the driver
# psycopg2 for PostgreSQL, pymysql for MySQL

conn = sqlite3.connect('example.sqlite3')  # connects to SQLite; creates the database file if it does not already exist
cur = conn.cursor()  # create a cursor; a connection can have multiple cursors

cur.execute('CREATE TABLE countries (id integer, name text, iso3 text)')  # create a new table
cur.execute('SELECT * FROM countries')
cur.fetchall()  # fetch all the rows of the result

cur.execute('INSERT INTO countries (id, name, iso3) VALUES (1, "Nepal", "NEP")')
cur.execute('SELECT * FROM countries')
cur.fetchall()

sql = '''INSERT INTO countries (id, name, iso3) VALUES (?, ?, ?)'''
cur.executemany(sql, [(2, 'India', 'INA'),
                      (3, 'Bhutan', 'BHU'),
                      (4, 'Afghanistan', 'AFG')])
cur.execute('SELECT * FROM countries')
cur.fetchall()

sql = 'INSERT INTO countries (id, name, iso3) VALUES (4, "PAKISTAN", "PAK")'
cur.execute(sql)
cur.execute('SELECT * FROM countries')
cur.fetchall()

sql = 'UPDATE countries SET id = 5 WHERE iso3 = "PAK"'
cur.execute(sql)
cur.execute('SELECT * FROM countries')
cur.fetchall()

sql = 'UPDATE countries '
conn.commit()

cur.execute('SELECT * FROM countries WHERE id > 2')
cur.fetchall()
cur.execute('SELECT * FROM countries WHERE name LIKE "%an"')
cur.fetchall()
cur.execute('SELECT * FROM countries WHERE name LIKE "%an%"')
cur.fetchall()
cur.execute('SELECT * FROM countries WHERE name LIKE "Pa%"')
cur.fetchall()

cur.execute('DELETE FROM countries')
cur.execute('SELECT * FROM countries')
cur.fetchall()
conn.commit()

import csv

sql = 'INSERT INTO countries (id, name, iso3) VALUES (?, ?, ?)'
_id = 1
with open('countries.txt', 'r') as datafile:
    csvfile = csv.DictReader(datafile)
    for row in csvfile:
        if row['Common Name'] and row['ISO 3166-1 3 Letter Code']:
            cur.execute(sql, (_id, row['Common Name'], row['ISO 3166-1 3 Letter Code']))
            _id += 1
conn.commit()

cur.execute('DELETE FROM country_list')
cur.execute('SELECT * FROM country_list')
cur.fetchall()

sql = '''CREATE TABLE country_list(id integer primary key
autoincrement, country_name text not null, iso3 text not null unique)'''
cur.execute(sql)

sql = 'INSERT INTO country_list (country_name, iso3) VALUES (?, ?)'
with open('countries.txt', 'r') as datafile:
    csvfile = csv.DictReader(datafile)
    for row in csvfile:
        if row['Formal Name'] and row['ISO 3166-1 3 Letter Code']:
            cur.execute(sql, (row['Formal Name'], row['ISO 3166-1 3 Letter Code']))
conn.commit()

cur.execute('SELECT * FROM country_list')
cur.fetchall()
"""
Explanation: Database
RDBMS (Relational Database Management Systems)
Open source
- MySQL (PHP and web applications)
- PostgreSQL (huge web applications)
- SQLite (Android applications)
Proprietary
- MSSQL
- Oracle
https://docs.python.org/3.6/library/sqlite3.html
End of explanation
"""
connn = sqlite3.connect("Library Management System.txt")
curs = connn.cursor()

sql = '''CREATE TABLE books(book_id text, isbn integer not null unique, book_name text)'''
curs.execute(sql)

sql2 = '''CREATE TABLE Student(roll_number integer not null unique, name text not null, faculty text)'''
curs.execute(sql2)

sql3 = '''CREATE TABLE Teacher(name text not null, faculty text)'''
curs.execute(sql3)
"""
Explanation: Database for a library management system
End of explanation
"""
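To recap the pattern used throughout this session, here is a self-contained sketch using an in-memory database (the toy rows mirror the session's countries table); the "?" placeholders work for LIKE patterns too:

```python
import sqlite3

# In-memory database, so the example leaves no file behind
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE countries (id integer, name text, iso3 text)')

# Parameterized inserts: "?" placeholders keep values out of the SQL string
rows = [(1, 'Nepal', 'NEP'), (2, 'India', 'INA'),
        (3, 'Bhutan', 'BHU'), (4, 'Afghanistan', 'AFG')]
cur.executemany('INSERT INTO countries (id, name, iso3) VALUES (?, ?, ?)', rows)
conn.commit()

# Parameters also work for LIKE patterns
cur.execute('SELECT name FROM countries WHERE name LIKE ?', ('%an%',))
matches = [r[0] for r in cur.fetchall()]
print(matches)  # names containing "an"
conn.close()
```

Passing values as parameters (rather than formatting them into the SQL string) is also what protects the queries against SQL injection.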
# import modules & set up logging
import gensim, logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

sentences = [['first', 'sentence'], ['second', 'sentence']]
# train word2vec on the two sentences
model = gensim.models.Word2Vec(sentences, min_count=1)
"""
Explanation: Word2Vec Tutorial
In case you missed the buzz, word2vec is widely featured as a member of the "new wave" of machine learning algorithms based on neural networks, commonly referred to as "deep learning" (though word2vec itself is rather shallow). Using large amounts of unannotated plain text, word2vec learns relationships between words automatically. The output is one vector per word, with remarkable linear relationships that allow us to do things like vec("king") – vec("man") + vec("woman") =~ vec("queen"), or vec("Montreal Canadiens") – vec("Montreal") + vec("Toronto") resembling the vector for "Toronto Maple Leafs". Word2vec is very useful in automatic text tagging, recommender systems and machine translation. Check out an online word2vec demo where you can try this vector algebra for yourself. That demo runs word2vec on the Google News dataset, of about 100 billion words.
This tutorial
In this tutorial you will learn how to train and evaluate word2vec models on your business data.
Preparing the Input
Starting from the beginning, gensim's word2vec expects a sequence of sentences as its input.
Each sentence is a list of words (utf8 strings): End of explanation """ # create some toy data to use with the following example import smart_open, os if not os.path.exists('./data/'): os.makedirs('./data/') filenames = ['./data/f1.txt', './data/f2.txt'] for i, fname in enumerate(filenames): with smart_open.smart_open(fname, 'w') as fout: for line in sentences[i]: fout.write(line + '\n') from smart_open import smart_open class MySentences(object): def __init__(self, dirname): self.dirname = dirname def __iter__(self): for fname in os.listdir(self.dirname): for line in smart_open(os.path.join(self.dirname, fname), 'rb'): yield line.split() sentences = MySentences('./data/') # a memory-friendly iterator print(list(sentences)) # generate the Word2Vec model model = gensim.models.Word2Vec(sentences, min_count=1) print(model) print(model.wv.vocab) """ Explanation: Keeping the input as a Python built-in list is convenient, but can use up a lot of RAM when the input is large. Gensim only requires that the input must provide sentences sequentially, when iterated over. 
No need to keep everything in RAM: we can provide one sentence, process it, forget it, load another sentence… For example, if our input is strewn across several files on disk, with one sentence per line, then instead of loading everything into an in-memory list, we can process the input file by file, line by line: End of explanation """ # build the same model, making the 2 steps explicit new_model = gensim.models.Word2Vec(min_count=1) # an empty model, no training new_model.build_vocab(sentences) # can be a non-repeatable, 1-pass generator new_model.train(sentences, total_examples=new_model.corpus_count, epochs=new_model.iter) # can be a non-repeatable, 1-pass generator print(new_model) print(model.wv.vocab) """ Explanation: Say we want to further preprocess the words from the files — convert to unicode, lowercase, remove numbers, extract named entities… All of this can be done inside the MySentences iterator and word2vec doesn’t need to know. All that is required is that the input yields one sentence (list of utf8 words) after another. Note to advanced users: calling Word2Vec(sentences, iter=1) will run two passes over the sentences iterator. In general it runs iter+1 passes. By the way, the default value is iter=5 to comply with Google's word2vec in C language. 1. The first pass collects words and their frequencies to build an internal dictionary tree structure. 2. The second pass trains the neural model. 
These two passes can also be initiated manually, in case your input stream is non-repeatable (you can only afford one pass), and you’re able to initialize the vocabulary some other way: End of explanation """ # Set file names for train and test data test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep lee_train_file = test_data_dir + 'lee_background.cor' class MyText(object): def __iter__(self): for line in open(lee_train_file): # assume there's one document per line, tokens separated by whitespace yield line.lower().split() sentences = MyText() print(sentences) """ Explanation: More data would be nice For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim): End of explanation """ # default value of min_count=5 model = gensim.models.Word2Vec(sentences, min_count=10) """ Explanation: Training Word2Vec accepts several parameters that affect both training speed and quality. min_count min_count is for pruning the internal dictionary. Words that appear only once or twice in a billion-word corpus are probably uninteresting typos and garbage. In addition, there’s not enough data to make any meaningful training on those words, so it’s best to ignore them: End of explanation """ # default value of size=100 model = gensim.models.Word2Vec(sentences, size=200) """ Explanation: size size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto. Bigger size values require more training data, but can lead to better (more accurate) models. Reasonable values are in the tens to hundreds. End of explanation """ # default value of workers=3 (tutorial says 1...) 
model = gensim.models.Word2Vec(sentences, workers=4) """ Explanation: workers workers, the last of the major parameters (full list here) is for training parallelization, to speed up training: End of explanation """ model.accuracy('./datasets/questions-words.txt') """ Explanation: The workers parameter only has an effect if you have Cython installed. Without Cython, you’ll only be able to use one core because of the GIL (and word2vec training will be miserably slow). Memory At its core, word2vec model parameters are stored as matrices (NumPy arrays). Each array is #vocabulary (controlled by min_count parameter) times #size (size parameter) of floats (single precision aka 4 bytes). Three such matrices are held in RAM (work is underway to reduce that number to two, or even one). So if your input contains 100,000 unique words, and you asked for layer size=200, the model will require approx. 100,000*200*4*3 bytes = ~229MB. There’s a little extra memory needed for storing the vocabulary tree (100,000 words would take a few megabytes), but unless your words are extremely loooong strings, memory footprint will be dominated by the three matrices above. Evaluating Word2Vec training is an unsupervised task, there’s no good way to objectively evaluate the result. Evaluation depends on your end application. Google has released their testing set of about 20,000 syntactic and semantic test examples, following the “A is to B as C is to D” task. It is provided in the 'datasets' folder. For example a syntactic analogy of comparative type is bad:worse;good:?. There are total of 9 types of syntactic comparisons in the dataset like plural nouns and nouns of opposite meaning. The semantic questions contain five types of semantic analogies, such as capital cities (Paris:France;Tokyo:?) or family members (brother:sister;dad:?). 
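Each analogy question is answered with simple vector arithmetic: the model looks for the word whose vector is closest, by cosine similarity, to vec(B) - vec(A) + vec(C). A minimal sketch with made-up 2-D toy vectors (illustrative values only, not trained embeddings, and not gensim's implementation):

```python
import numpy as np

# Hypothetical toy vectors, chosen only to make the example work
emb = {
    'paris':  np.array([1.0, 0.1]),
    'france': np.array([1.0, 1.0]),
    'tokyo':  np.array([-1.0, 0.1]),
    'japan':  np.array([-1.0, 1.0]),
    'banana': np.array([1.0, -1.0]),
}

def analogy(a, b, c):
    # target direction: b - a + c, compared to candidates by cosine similarity
    target = emb[b] - emb[a] + emb[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue  # the question words themselves are excluded from the answer
        sim = float(target @ (vec / np.linalg.norm(vec)))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy('paris', 'france', 'tokyo'))  # expect 'japan' with these toy vectors
```
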
Gensim supports the same evaluation set, in exactly the same format: End of explanation """ model.evaluate_word_pairs(test_data_dir + 'wordsim353.tsv') """ Explanation: This accuracy takes an optional parameter restrict_vocab which limits which test examples are to be considered. In the December 2016 release of Gensim we added a better way to evaluate semantic similarity. By default it uses an academic dataset WS-353 but one can create a dataset specific to your business based on it. It contains word pairs together with human-assigned similarity judgments. It measures the relatedness or co-occurrence of two words. For example, 'coast' and 'shore' are very similar as they appear in the same context. At the same time 'clothes' and 'closet' are less similar because they are related but not interchangeable. End of explanation """ from tempfile import mkstemp fs, temp_path = mkstemp("gensim_temp") # creates a temp file model.save(temp_path) # save the model new_model = gensim.models.Word2Vec.load(temp_path) # open the model """ Explanation: Once again, good performance on Google's or WS-353 test set doesn’t mean word2vec will work well in your application, or vice versa. It’s always best to evaluate directly on your intended task. For an example of how to use word2vec in a classifier pipeline, see this tutorial. Storing and loading models You can store/load models using the standard gensim methods: End of explanation """ model = gensim.models.Word2Vec.load(temp_path) more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model', 'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']] model.build_vocab(more_sentences, update=True) model.train(more_sentences, total_examples=model.corpus_count, epochs=model.iter) # cleaning up temp os.close(fs) os.remove(temp_path) """ Explanation: which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing. 
In addition, you can load models created by the original C tool, both using its text and binary formats: model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False) # using gzipped/bz2 input works too, no need to unzip: model = gensim.models.KeyedVectors.load_word2vec_format('/tmp/vectors.bin.gz', binary=True) Online training / Resuming training Advanced users can load a model and continue training it with more sentences and new vocabulary words: End of explanation """ model.most_similar(positive=['human', 'crime'], negative=['party'], topn=1) model.doesnt_match("input is lunch he sentence cat".split()) print(model.similarity('human', 'party')) print(model.similarity('tree', 'murder')) """ Explanation: You may need to tweak the total_words parameter to train(), depending on what learning rate decay you want to simulate. Note that it’s not possible to resume training with models generated by the C tool, KeyedVectors.load_word2vec_format(). You can still use them for querying/similarity, but information vital for training (the vocab tree) is missing there. Using the model Word2Vec supports several word similarity tasks out of the box: End of explanation """ print(model.predict_output_word(['emergency', 'beacon', 'received'])) """ Explanation: You can get the probability distribution for the center word given the context words as input: End of explanation """ model['tree'] # raw NumPy vector of a word """ Explanation: The results here don't look good because the training corpus is very small. To get meaningful results one needs to train on 500k+ words. 
If you need the raw output vectors in your application, you can access these either on a word-by-word basis:
End of explanation
"""
model['tree']  # raw NumPy vector of a word
"""
Explanation: …or en-masse as a 2D NumPy matrix from model.wv.syn0.
Training Loss Computation
The parameter compute_loss can be used to toggle computation of loss while training the Word2Vec model. The computed loss is stored in the model attribute running_training_loss and can be retrieved using the function get_latest_training_loss as follows:
End of explanation
"""
# instantiating and training the Word2Vec model
model_with_loss = gensim.models.Word2Vec(sentences, min_count=1, compute_loss=True, hs=0, sg=1, seed=42)

# getting the training loss value
training_loss = model_with_loss.get_latest_training_loss()
print(training_loss)
"""
Explanation: Benchmarks to see the effect of the training loss computation code on training time
We first download and set up the test data used for getting the benchmarks.
End of explanation """ logging.getLogger().setLevel(logging.ERROR) # using 25 KB and 50 MB files only for generating o/p -> comment next line for using all 5 test files input_data_files = [input_data_files[0], input_data_files[-2]] print(input_data_files) import time import numpy as np import pandas as pd train_time_values = [] seed_val = 42 sg_values = [0, 1] hs_values = [0, 1] for data_file in input_data_files: data = gensim.models.word2vec.LineSentence(data_file) for sg_val in sg_values: for hs_val in hs_values: for loss_flag in [True, False]: time_taken_list = [] for i in range(3): start_time = time.time() w2v_model = gensim.models.Word2Vec(data, compute_loss=loss_flag, sg=sg_val, hs=hs_val, seed=seed_val) time_taken_list.append(time.time() - start_time) time_taken_list = np.array(time_taken_list) time_mean = np.mean(time_taken_list) time_std = np.std(time_taken_list) train_time_values.append({'train_data': data_file, 'compute_loss': loss_flag, 'sg': sg_val, 'hs': hs_val, 'mean': time_mean, 'std': time_std}) train_times_table = pd.DataFrame(train_time_values) train_times_table = train_times_table.sort_values(by=['train_data', 'sg', 'hs', 'compute_loss'], ascending=[False, False, True, False]) print(train_times_table) """ Explanation: We now compare the training time taken for different combinations of input data and model training parameters like hs and sg. End of explanation """ logging.getLogger().setLevel(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO) most_similars_precalc = {word : model.wv.most_similar(word) for word in model.wv.index2word} for i, (key, value) in enumerate(most_similars_precalc.iteritems()): if i==3: break print key, value """ Explanation: Adding Word2Vec "model to dict" method to production pipeline Suppose, we still want more performance improvement in production. One good way is to cache all the similar words in a dictionary. 
So that next time when we get the similar query word, we'll search it first in the dict. And if it's a hit then we will show the result directly from the dictionary. otherwise we will query the word and then cache it so that it doesn't miss next time. End of explanation """ import time words = ['voted','few','their','around'] """ Explanation: Comparison with and without caching for time being lets take 4 words randomly End of explanation """ start = time.time() for word in words: result = model.wv.most_similar(word) print(result) end = time.time() print(end-start) """ Explanation: Without caching End of explanation """ start = time.time() for word in words: if 'voted' in most_similars_precalc: result = most_similars_precalc[word] print(result) else: result = model.wv.most_similar(word) most_similars_precalc[word] = result print(result) end = time.time() print(end-start) """ Explanation: Now with caching End of explanation """ from sklearn.decomposition import IncrementalPCA # inital reduction from sklearn.manifold import TSNE # final reduction import numpy as np # array handling from plotly.offline import init_notebook_mode, iplot, plot import plotly.graph_objs as go def reduce_dimensions(model, plot_in_notebook = True): num_dimensions = 2 # final num dimensions (2D, 3D, etc) vectors = [] # positions in vector space labels = [] # keep track of words to label our data again later for word in model.wv.vocab: vectors.append(model[word]) labels.append(word) # convert both lists into numpy vectors for reduction vectors = np.asarray(vectors) labels = np.asarray(labels) # reduce using t-SNE vectors = np.asarray(vectors) logging.info('starting tSNE dimensionality reduction. 
This may take some time.')
    tsne = TSNE(n_components=num_dimensions, random_state=0)
    vectors = tsne.fit_transform(vectors)

    x_vals = [v[0] for v in vectors]
    y_vals = [v[1] for v in vectors]

    # Create a trace
    trace = go.Scatter(
        x=x_vals,
        y=y_vals,
        mode='text',
        text=labels
    )

    data = [trace]

    logging.info('All done. Plotting.')

    if plot_in_notebook:
        init_notebook_mode(connected=True)
        iplot(data, filename='word-embedding-plot')
    else:
        plot(data, filename='word-embedding-plot.html')

reduce_dimensions(model)
"""
Explanation: Clearly you can see the improvement, but this difference will be even larger when we take more words into consideration.
Visualising the Word Embeddings
The word embeddings made by the model can be visualised by reducing the dimensionality of the words to 2 dimensions using tSNE. Visualisations can be used to notice semantic and syntactic trends in the data.
Example:
Semantic: words like cat, dog, cow, etc. have a tendency to lie close by.
Syntactic: words like run, running or cut, cutting lie close together.
Vector relations like vKing - vMan = vQueen - vWoman can also be noticed.
Additional dependencies:
- sklearn
- numpy
- plotly
The function below can be used to plot the embeddings in an ipython notebook. It requires the model as the necessary parameter. If you don't have the model, you can load it by
model = gensim.models.Word2Vec.load('path/to/model')
If you don't want to plot inside a notebook, set the plot_in_notebook parameter to False.
Note: the model used for the visualisation is trained on a small corpus, so some of the relations might not be so clear.
Beware: this sort of dimensionality reduction comes at the cost of a loss of information.
End of explanation
"""
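As a closing aside, the most_similar queries used throughout this tutorial are plain cosine-similarity rankings over unit-normalized word vectors. A minimal sketch with made-up toy vectors (illustrative values only, not a trained model, and not gensim's implementation):

```python
import numpy as np

# Hypothetical toy "embeddings", chosen only for illustration
vocab = ['king', 'queen', 'man', 'woman', 'banana']
vecs = np.array([[0.9, 0.8],
                 [0.85, 0.95],
                 [0.7, 0.1],
                 [0.65, 0.25],
                 [-0.8, 0.2]])

# Normalize rows so a dot product equals cosine similarity
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def most_similar(word, topn=2):
    query = unit[vocab.index(word)]
    sims = unit @ query                # cosine similarity to every word
    order = np.argsort(-sims)          # highest similarity first
    return [(vocab[i], float(sims[i])) for i in order if vocab[i] != word][:topn]

print(most_similar('king'))  # nearest neighbours of 'king' in the toy space
```
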
!pip install graphspace_python==0.8.3
"""
Explanation: Session 2: Integrating GraphSpace into network analysis projects
Presenters: Aditya Bharadwaj, Jeffrey N. Law and T. M. Murali
Introduction
Required files for today
- Clone or download this repository: http://bit.ly/2017icsb
- IPython/Jupyter notebooks
- Datasets in the data subdirectory
Required software for today
- Both Python 2 and 3 are welcome
- Jupyter or IPython (and their dependencies)
- Anaconda distribution of Python is an easy way to install these
About Me
PhD Student, CS@VT
Website: adityabharadwaj.in
Email: adb@vt.edu
Twitter: @adbcoder
Agenda
- Set up required software
- Introduction to Python programming
- Creating and uploading graphs
- Basics of the NetworkX API
- Visualizing networks on GraphSpace
- Adding style to the networks
- Specifying weights on edges
- Laying out nodes programmatically
- Managing groups and sharing graphs
- Create groups
- Add/Remove group members
- Share graphs with groups
- Managing layouts
- Sharing layouts
- Set default layout
- Publishing graphs
- Searching graphs on GraphSpace
- RESTful APIs
- Exploring and finding the right API
- Using RESTful APIs
Part 1. Set up required software
Install Jupyter/IPython notebook
The Jupyter Notebook is an interactive computing environment that enables users to author notebook documents that include:
- Live code
- Interactive widgets
- Plots
- Narrative text
- Equations
- Images
- Video
Install Jupyter
Go to Jupyter and follow the instructions. You can also install Jupyter and Python using the following Anaconda distributions of Python (recommended):
Windows
MacOS
Linux
Start Jupyter
Open the command line, go to the directory where you installed the tutorial repository, and start Jupyter using the following command:
jupyter notebook
You should see the notebook open in your browser.
Install the graphspace-python package
There are multiple ways to install the graphspace_python package.
a. Use pip (recommended)
pip install graphspace_python
b.
Install manually from the PyPI package https://pypi.python.org/pypi/graphspace_python
c. Install the latest development version from GitHub
git clone https://github.com/adbharadwaj/graphspace-python.git
End of explanation
"""
print("Hello World")
"""
Explanation: Part 2: Introduction to Python programming
Python is an interpreted, general-purpose, high-level programming language whose design philosophy emphasises code readability.
End of explanation
"""
l = []  # l = list()
l = ['apple', 'orange', 123]
print(l)
"""
Explanation: Lists
Lists are the most commonly used data structure. Think of a list as a sequence of data that is enclosed in square brackets, with items separated by commas. Each item can be accessed through its index value.
End of explanation
"""
print(l[0], l[1])
"""
Explanation: In Python, indexing starts from 0. Thus the list l, which has three elements, will have apple at index 0, orange at index 1 and 123 at index 2.
End of explanation
"""
tup = ()  # tup = tuple()
"""
Explanation: Tuple
Tuples are similar to lists; the only big difference is that the elements inside a list can be changed, while those in a tuple cannot.
End of explanation
"""
tup3 = tuple([1, 2, 3])
print(tup3)
tup4 = tuple('Hello')
print(tup4)
"""
Explanation: Values can be assigned while declaring a tuple. tuple() takes a list as input and converts it into a tuple, or takes a string and converts it into a tuple.
End of explanation
"""
data = {}  # data = dict()
data['firstname'] = 'Aditya'
data['lastname'] = 'Bharadwaj'
data['age'] = 25
print(data)
"""
Explanation: Dictionaries
Dictionaries store data as a set of key-value pairs.
End of explanation
"""
for i in [1, 2, 3, 4, 5]:
    print(i)
"""
Explanation: Loops
for variable in something:
    algorithm
End of explanation
"""
import networkx as nx

G = nx.DiGraph()

# Add a node
G.add_node('a')
# Add multiple nodes
G.add_nodes_from(['b', 'c', 'd'])
G.nodes()

# Remove a node from the graph
G.remove_node('d')
G.nodes()

# Add edges to the graph
G.add_edge('a', 'b')
G.add_edges_from([('b', 'c'), ('c', 'a')])
G.edges()

# Remove an edge from the graph
G.remove_edge('c', 'a')
G.edges()

# Get graph info
print(nx.info(G))
"""
Explanation: Part 3: Creating and uploading graphs
Basic concepts in NetworkX
NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. Documentation is available at https://networkx.readthedocs.io/en/stable/
Create an empty graph
End of explanation
"""
%matplotlib inline
nx.draw(G, with_labels=True)
"""
Explanation: NetworkX with Matplotlib
Pros:
- Easy
- Some customization
Cons:
- Looks "outdated" (not great for publication / productizing)
- Not interactive
- Few layout options
- Offline
End of explanation
"""
from graphspace_python.api.client import GraphSpace
graphspace = GraphSpace('user6@example.com', 'user6')
"""
Explanation: Uploading your network to GraphSpace
First you need to connect to GraphSpace using your username and password.
End of explanation
"""
from graphspace_python.graphs.classes.gsgraph import GSGraph
graph = graphspace.post_graph(GSGraph(G))
print(graph.url)
"""
Explanation: Once you are connected, you can use this connection to post/upload your graphs to GraphSpace.
End of explanation """ # Update the name of the graph graph.set_name('My First Graph') graph = graphspace.update_graph(graph) print(graph.url) """ Explanation: Updating your network on GraphSpace End of explanation """ G = nx.DiGraph() # add at creation # nodes G.add_node('a', favorite_color='yellow') G.add_nodes_from([('b', {'favorite_color' : 'green'}), ('c', {'favorite_color' :'red'})]) # edges G.add_edge('a', 'b', {'relationship' : 'friends'}) G.add_edge('b', 'c', {'relationship' : 'enemy'}) # accessing node attributes print("Node 'a' attributes:", G.node['a']) # accessing edge attributes print("Edge a-b attributes:", G.edge['a']['b']) """ Explanation: Adding and Inspecting Attributes End of explanation """ label = { 'a' : 'A', 'b' : 'B', 'c' : 'C' } nx.set_node_attributes(G, 'label', label) print("Node a's label is %s" % G.node['a']['label']) graph = graphspace.update_graph(GSGraph(G), graph_id=graph.id) print(graph.url) """ Explanation: Adding node labels label is a text attribute that is displayed inside of the node. GraphSpace uses it to search for nodes with a matching name. Refer to Node Data Attributes Attributes Treated Specially by GraphSpace for more details. End of explanation """ graph.set_name('My First Graph') # Add tags graph.set_tags(['icsb2017', 'tutorial']) # Add any number of attributes that best characterize the network graph.set_data({ 'author': 'Aditya Bharadwaj', 'contact_email': 'adb@vt.edu', 'description': "This graph was posted during ICSB 2017 workshop on GraphSpace" }) graph = graphspace.update_graph(graph, graph_id=graph.id) print(graph.url) """ Explanation: Adding graph information GraphSpace gives users the freedom to include any attributes such as name, title, description, and tags that best characterize the network. Refer to Graph Data Attributes to learn more about adding information about the graph. 
End of explanation """ for n in graph.nodes(): graph.add_node_style(n, shape='rectangle', color=G.node[n]['favorite_color'], width=100, height=100) graph = graphspace.update_graph(graph) print(graph.url) """ Explanation: Adding visual styles to nodes color (str, optional): Hexadecimal representation of the color (e.g., #FFFFFF) or color name. Defaults to white. shape (str, optional): Shape of node. Defaults to 'ellipse' List of allowed node shapes ellipse (default) rectangle roundrectangle triangle pentagon hexagon heptagon octagon star diamond vee rhomboid End of explanation """ graph.add_edge_style('a', 'b', directed=False, edge_style='solid', width=10.0, color='blue') graph.add_edge_style('b', 'c', directed=False, edge_style='dashed', width=10.0, color='red') graph = graphspace.update_graph(graph) print(graph.url) """ Explanation: Adding visual style to edges edge_style (str, optional): Style of edge. Defaults to 'solid'. List of allowed edge styles solid (default) dotted dashed End of explanation """ graph.node['a']['popup'] = 'Node A' graph.node['b']['popup'] = 'Node B' graph.node['c']['popup'] = 'Node C' graph['a']['b']['popup'] = 'Friends' graph['b']['c']['popup'] = 'Enemy' graph = graphspace.update_graph(graph) print(graph.url) """ Explanation: Adding annotations to nodes and edges 'popup' attribute A string that will be displayed in a popup window when the user clicks the node/edge. This string can be HTML-formatted information, e.g., Gene Ontology annotations and database links for a protein; or types, mechanism, and database sources for an interaction. 
End of explanation
"""
# HTML formatted popups
graph['a']['b']['popup'] = '<h3>Edge between A and B</h3> <br/> Relationship: Friends'
graph['b']['c']['popup'] = '<h3>Edge between B and C</h3> <br/> Relationship: Enemy'

graph.node['a']['popup'] = '<h3>Node A</h3> <br/> <img height="250" src="http://images.clipartpanda.com/boy-20clip-20art-blond-boy.png" alt="Image A" />'
graph.node['b']['popup'] = '<h3>Node B</h3> <br/> <img height="250" src="http://images.clipartpanda.com/boy-clipart-birthday-boy.png" alt="Image B" />'
graph.node['c']['popup'] = '<h3>Node C</h3> <br/> <img height="250" src="http://images.clipartpanda.com/boy-clipart-9a67af7554253b6a9b7014c36c348f09.jpg" alt="Image C" />'

graph = graphspace.update_graph(graph)
print(graph.url)

# You can also embed websites in popups. This could be useful for embedding sites like GeneCards.
graph['a']['b']['popup'] = '<h3>Edge between A and B</h3> <br/> Relationship: <a href="https://en.m.wikipedia.org/wiki/Friendship">Friends</a> <iframe src="https://en.m.wikipedia.org/wiki/Friendship" scrolling="auto" width="100%" height="500"></iframe>'
graph['b']['c']['popup'] = '<h3>Edge between B and C</h3> <br/> Relationship: <a href="https://en.m.wikipedia.org/wiki/Enemy">Enemy</a> <iframe src="https://en.m.wikipedia.org/wiki/Enemy" scrolling="auto" width="100%" height="500"></iframe>'

graph = graphspace.update_graph(graph)
print(graph.url)
"""
Explanation:
End of explanation
"""
# Position nodes in a vertical alignment.
graph.set_node_position('a', y=0, x=0)
graph.set_node_position('b', y=250, x=0)
graph.set_node_position('c', y=500, x=0)
graph = graphspace.update_graph(graph)
print(graph.url)
"""
Explanation: Laying out nodes
End of explanation
"""
# G.add_edge('a', 'b', weight=5)
# G.add_edge('b', 'c', weight=10)
graph['a']['b']['weight'] = 5
graph['b']['c']['weight'] = 10
graph.edges(data=True)
graph = graphspace.update_graph(graph, graph_id=graph.id)
print(graph.url)
"""
Explanation: Specifying weights on edges
End of explanation
"""
# Laying out edges with width proportional to edge weight.
graph.add_edge_style('a', 'b', directed=False, edge_style='solid', width=graph['a']['b']['weight'], color='blue')
graph.add_edge_style('b', 'c', directed=False, edge_style='dashed', width=graph['b']['c']['weight'], color='red')
graph = graphspace.update_graph(graph)
print(graph.url)
"""
Explanation: Note: If you add weights as an edge property, they will not be reflected in the graph visualization automatically.
End of explanation
"""
from graphspace_python.graphs.classes.gsgroup import GSGroup
group = graphspace.post_group(GSGroup(name='My first group', description='sample group'))
print(group.url)
"""
Explanation: Part 4: Managing collaborative groups and sharing graphs
A group is a collection of GraphSpace users. For example, if there are multiple researchers collaborating on a project, a group may be created containing all of them.
A group owner is the creator of the group. Any GraphSpace user can create a group by visiting the Groups page and clicking the "Create group" button. The group owner may
- Invite any GraphSpace user that has an account to be a member of their group.
- Remove any member from the group.
- Unshare any graph that has already been shared by the members of the group.
A group member is a user who is a part of a group. (A group owner is trivially a member of the group.) A group member may
- Share a graph owned by him or her with a group.
- Unshare a previously shared graph.
Share a layout for a previously shared graph. Unshare a previously shared layout. Creating groups End of explanation """ # Initially a group is created with the group owner as a member. for member in graphspace.get_group_members(group=group): print(member.email) # Group owner can add existing users by their GraphSpace usernames for email in ['adb@vt.edu', 'adb@cs.vt.edu']: graphspace.add_group_member(group=group, member_email=email) # Getting the list of all group members for member in graphspace.get_group_members(group=group): print(member.email, member.id) # Send the following invitation link to your collaborators (with/without GraphSpace accounts) print(group.invite_link) graphspace.delete_group_member(group=group, member_id=70) for member in graphspace.get_group_members(group=group): print(member.email) """ Explanation: Adding and removing group members End of explanation """ graphspace.share_graph(graph=graph, group=group) for shared_graph in graphspace.get_group_graphs(group=group): print(shared_graph.owner_email, shared_graph.name) """ Explanation: Sharing graphs with the groups A user can share one or more graphs with groups to which the user belongs. End of explanation """ graphspace.unshare_graph(graph=graph, group=group) # No graph is shared with the group at this point. 
for shared_graph in graphspace.get_group_graphs(group=group): print(shared_graph.owner_email, shared_graph.name) """ Explanation: Unsharing graphs End of explanation """ from graphspace_python.graphs.classes.gslayout import GSLayout L = GSLayout() # Assign different colors to nodes L.add_node_style('a', shape='ellipse', color='yellow', width=100, height=100) L.add_node_style('b', shape='triangle', color='green', width=100, height=100) L.add_node_style('c', shape='rectangle', color='red', width=100, height=100) L.add_edge_style('a', 'b', directed=False, edge_style='solid', width=5.0, color='blue') L.add_edge_style('b', 'c', directed=False, edge_style='dashed', width=5.0, color='red') L.set_name('My First Layout') layout = graphspace.post_graph_layout(graph_id=graph.id, layout=L) # Go to the following url o visualize the layout. print(layout.url) """ Explanation: Part 5: Managing layouts Creating and uploading layouts End of explanation """ layout.set_node_position('a', y=0, x=0) layout.set_node_position('b', y=0, x=250) layout.set_node_position('c', y=0, x=500) layout = graphspace.update_graph_layout(graph_id=graph.id, layout=layout) # Go to the following url to visualize the layout. 
print(layout.url) """ Explanation: End of explanation """ # Setting font style layout.add_node_style('a', attr_dict={'font-size':24, 'font-family': 'Lucida Console, Courier, monospace'}, shape='ellipse', color=graph.node['a']['favorite_color'], width=100, height=100) layout.add_node_style('b', attr_dict={'font-size':24, 'font-family': 'Lucida Console, Courier, monospace'}, shape='triangle', color=graph.node['b']['favorite_color'], width=100, height=100) layout.add_node_style('c', attr_dict={'font-size':24, 'font-family': 'Lucida Console, Courier, monospace'}, shape='rectangle', color=graph.node['c']['favorite_color'], width=100, height=100) layout = graphspace.update_graph_layout(graph_id=graph.id, layout=layout) print(layout.url) # Setting image backgrounds import pickle tutorial_node_images = pickle.load( open( "data/tutorial_node_images.p", "rb" ) ) layout.add_node_style('a', attr_dict={ 'background-image': tutorial_node_images['a'], 'background-clip': 'none', 'background-fit': 'contain', 'background-opacity': 0, 'border-opacity': 0, 'text-margin-y': 5 }, width=100, height=100) layout.add_node_style('b', attr_dict={ 'background-image': tutorial_node_images['b'], 'background-clip': 'none', 'background-fit': 'contain', 'background-opacity': 0, 'border-opacity': 0, 'text-margin-y': 5 }, width=100, height=100) layout.add_node_style('c', attr_dict={ 'background-image': tutorial_node_images['c'], 'background-clip': 'none', 'background-fit': 'contain', 'background-opacity': 0, 'border-opacity': 0, 'text-margin-y': 5 }, width=100, height=100) layout = graphspace.update_graph_layout(graph_id=graph.id, layout=layout) print(layout.url) """ Explanation: More examples Users can also define more advanced style for the graph using attr_dict parameter in add_node_style and add_edge_style methods. The attr_dict is dictionary of style properties supported by Cytoscape.js. 
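Positions like the three hard-coded ones above can also be computed instead of typed out — for example, spacing nodes evenly on a circle. This is a stdlib-only sketch (the helper name is ours, not part of graphspace_python); each resulting x/y pair would feed `set_node_position` exactly as above.

```python
import math

def circle_positions(nodes, radius=250):
    """Evenly space node positions on a circle of the given radius."""
    n = len(nodes)
    return {
        node: {'x': radius * math.cos(2 * math.pi * i / n),
               'y': radius * math.sin(2 * math.pi * i / n)}
        for i, node in enumerate(nodes)
    }

pos = circle_positions(['a', 'b', 'c'])
# e.g. layout.set_node_position('a', x=pos['a']['x'], y=pos['a']['y'])
print(pos['a'])
```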
End of explanation """ graph = graphspace.set_default_graph_layout(graph=graph, layout=layout) # You should see the above layout by default for the given graph. print(graph.url) """ Explanation: Set the layout as default End of explanation """ for mylayout in graphspace.get_my_graph_layouts(graph_id=graph.id): print(mylayout.name, mylayout.id) """ Explanation: Sharing layouts End of explanation """ layouts = graphspace.get_shared_graph_layouts(graph=graph) print(layouts) layout.set_is_shared(1) layout = graphspace.update_graph_layout(graph=graph, layout=layout) print([l.name for l in graphspace.get_shared_graph_layouts(graph=graph)]) """ Explanation: Similarily you can also get the list of shared layouts using get_shared_graph_layouts method. End of explanation """ # Sharing the graph with everyone. This graph will show up in Public Graphs list. graph = graphspace.publish_graph(graph=graph) print(graph.url) # Unpublishing graphs graph = graphspace.unpublish_graph(graph_id=graph.id) print(graph.url) """ Explanation: Part 6: Publishing graphs End of explanation """ # Getting a list of public graphs with 'pathlinker' as a subtring in atleast one of their tags. for g in graphspace.get_public_graphs(tags=['%pathlinker%'],limit=100, offset=0): print(g.owner_email, g.name) # Getting a list of my graphs. for g in graphspace.get_my_graphs(limit=100, offset=0): print(g.owner_email, g.name) """ Explanation: Part 7: Searching graphs on GraphSpace You can search for graphs based on their visibility. - Graphs posted by you - Graphs shared with one of your groups - Graphs shared with everyone End of explanation """ graphspace.delete_group(group=group) graphspace.delete_graph(graph=graph) """ Explanation: Part 8: RESTful APIs The GraphSpace REST API provides endpoints for entities such as graphs, layouts, and groups that allow developers to interact with the GraphSpace website remotely by sending and receiving JSON objects. 
This API enables developers to create, read, and update GraphSpace content from client-side JavaScript or from applications written in any language. Finding the right API for you API Reference Testing GraphSpace APIs Postman is a Google Chrome app for interacting with HTTP APIs. It provides a friendly GUI for constructing requests and reading responses. Postman makes it easy to test, develop and document APIs by allowing users to quickly put together both simple and complex HTTP requests. Postman Installation Postman is available as a native app (recommended) for Mac / Windows / Linux, and as a Chrome App. The Postman Chrome app can only run on the Chrome browser. To use the Postman Chrome app, you need to: Install Google Chrome: Install Chrome. If you already have Chrome installed, head over to Postman’s page on the Chrome Webstore – https://chrome.google.com/webstore/detail/postman-rest-client-packa/fhbjgbiflinjbdggehcddcbncdddomop?hl=en, and click ‘Add to Chrome’. After the download is complete, launch the app. Download Postman Collection Importing the postman collection: Click Import button in the top menu. Choose the Import File in the pop up window. 
Provide the Authorization details for the imported requests (the Authorization details have been removed for security reasons)
mlamoureux/PIMS_YRC
Using_Python.ipynb
mit
2+2
2/3
(1+2j)*(2+3j)
End of explanation """ for i in {1,2,3}: print(i) i = 0 while i<5: i = i+1 print(i) """ Explanation: You can define for loops, and while loops, using a simple syntax. Notice the colon, the indenting, and the fact the loop definition ends with a blank line. That's all part of the syntax. End of explanation """ %run manystring """ Explanation: You can run python code that you saved on disk as a file. For instance, in this folder we have a text file called manysring.py . The code looks like this s = 'x' while len(s)&lt;10: print(s) s = s+'x' To run it, use the magic command called %run End of explanation """ from numpy import sin sin(.1) """ Explanation: If you want to do something more complicated, you will have to load in various packages and functions within them. For instance, to compute a sine function, you need to use Numerical Python, also known as Numpy. End of explanation """ from numpy import * [sin(.1),cos(.1),tan(.1),arcsin(.1),arccos(.1),arctan(.1)] """ Explanation: You will find it to import everything in numpy, then you have access to lots of useful functions, arrays, etc. End of explanation """ import numpy as np [np.sin(.1),np.cos(.1),np.tan(.1),np.arcsin(.1),np.arccos(.1),np.arctan(.1)] """ Explanation: Namespaces Actually, for big projects, you should not use the import-star command, as the many function names may clash with other function definitions in your code, or other packages. It is considered safer to import the functions with the name of the package attached to them, or an appropriate abbreviation. Like this: End of explanation """
DJCordhose/ai
notebooks/tf2/rnn-add-example.ipynb
mit
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
chars = '0123456789+ ' ctable = CharacterTable(chars) questions = [] expected = [] seen = set() print('Generating data...') while len(questions) < TRAINING_SIZE: f = lambda: int(''.join(np.random.choice(list('0123456789')) for i in range(np.random.randint(1, DIGITS + 1)))) a, b = f(), f() # Skip any addition questions we've already seen # Also skip any such that x+Y == Y+x (hence the sorting). key = tuple(sorted((a, b))) if key in seen: continue seen.add(key) # Pad the data with spaces such that it is always MAXLEN. q = '{}+{}'.format(a, b) query = q + ' ' * (MAXLEN - len(q)) ans = str(a + b) # Answers can be of maximum size DIGITS + 1. ans += ' ' * (DIGITS + 1 - len(ans)) if REVERSE: # Reverse the query, e.g., '12+345 ' becomes ' 543+21'. (Note the # space used for padding.) query = query[::-1] questions.append(query) expected.append(ans) print('Total addition questions:', len(questions)) questions[0] expected[0] print('Vectorization...') x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool) y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool) for i, sentence in enumerate(questions): x[i] = ctable.encode(sentence, MAXLEN) for i, sentence in enumerate(expected): y[i] = ctable.encode(sentence, DIGITS + 1) len(x[0]) len(questions[0]) """ Explanation: Step 1: Generate sample equations End of explanation """ x[0] """ Explanation: Input is encoded as one-hot, 7 digits times 12 possibilities End of explanation """ y[0] # Shuffle (x, y) in unison as the later parts of x will almost all be larger # digits. indices = np.arange(len(y)) np.random.shuffle(indices) x = x[indices] y = y[indices] """ Explanation: Same for output, but at most 4 digits End of explanation """ # Explicitly set apart 10% for validation data that we never train over. 
split_at = len(x) - len(x) // 10 (x_train, x_val) = x[:split_at], x[split_at:] (y_train, y_val) = y[:split_at], y[split_at:] print('Training Data:') print(x_train.shape) print(y_train.shape) print('Validation Data:') print(x_val.shape) print(y_val.shape) """ Explanation: Step 2: Training/Validation Split End of explanation """ # input shape: 7 digits, each being 0-9, + or space (12 possibilities) MAXLEN, len(chars) from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense, RepeatVector # Try replacing LSTM, GRU, or SimpleRNN. # RNN = LSTM RNN = SimpleRNN # should be enough since we do not have long sequences and only local dependencies # RNN = GRU HIDDEN_SIZE = 128 BATCH_SIZE = 128 model = Sequential() # encoder model.add(RNN(units=HIDDEN_SIZE, input_shape=(MAXLEN, len(chars)))) # latent space encoding_dim = 32 model.add(Dense(units=encoding_dim, activation='relu', name="encoder")) # decoder: have 4 temporal outputs one for each of the digits of the results model.add(RepeatVector(DIGITS + 1)) # return_sequences=True tells it to keep all 4 temporal outputs, not only the final one (we need all four digits for the results) model.add(RNN(units=HIDDEN_SIZE, return_sequences=True)) model.add(Dense(name='classifier', units=len(chars), activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.summary() """ Explanation: Step 3: Create Model End of explanation """ %%time # Train the model each generation and show predictions against the validation # dataset. 
merged_losses = { "loss": [], "val_loss": [], "accuracy": [], "val_accuracy": [], } for iteration in range(1, 50): print() print('-' * 50) print('Iteration', iteration) iteration_history = model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=1, validation_data=(x_val, y_val)) merged_losses["loss"].append(iteration_history.history["loss"]) merged_losses["val_loss"].append(iteration_history.history["val_loss"]) merged_losses["accuracy"].append(iteration_history.history["accuracy"]) merged_losses["val_accuracy"].append(iteration_history.history["val_accuracy"]) # Select 10 samples from the validation set at random so we can visualize # errors. for i in range(10): ind = np.random.randint(0, len(x_val)) rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])] preds = model.predict_classes(rowx, verbose=0) q = ctable.decode(rowx[0]) correct = ctable.decode(rowy[0]) guess = ctable.decode(preds[0], calc_argmax=False) print('Q', q[::-1] if REVERSE else q, end=' ') print('T', correct, end=' ') if correct == guess: print(colors.ok + '☑' + colors.close, end=' ') else: print(colors.fail + '☒' + colors.close, end=' ') print(guess) import matplotlib.pyplot as plt plt.ylabel('loss') plt.xlabel('epoch') plt.yscale('log') plt.plot(merged_losses['loss']) plt.plot(merged_losses['val_loss']) plt.legend(['loss', 'validation loss']) plt.ylabel('accuracy') plt.xlabel('epoch') # plt.yscale('log') plt.plot(merged_losses['accuracy']) plt.plot(merged_losses['val_accuracy']) plt.legend(['accuracy', 'validation accuracy']) """ Explanation: Step 4: Train End of explanation """
mne-tools/mne-tools.github.io
0.18/_downloads/0cd97a6d68ec19255d6658b4ecac3774/plot_artifacts_correction_ssp.ipynb
bsd-3-clause
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import compute_proj_ecg, compute_proj_eog

# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
Now MNE will apply the projs on demand at any later stage, so watch out for proj parameters in functions, or apply them explicitly with the .apply_proj method. Demonstrate SSP cleaning on some evoked data End of explanation """
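Under the hood, each SSP projector is just an orthogonal projection: it removes the data component lying along an estimated artifact topography u, i.e. x ← x − (x·u)u. Here is a stdlib-only toy version of that arithmetic — our illustration of the idea, entirely separate from the real projection vectors that MNE stores in `info['projs']` and applies itself.

```python
def apply_ssp(signal, u):
    """Remove the component of `signal` along the unit artifact vector `u`."""
    dot = sum(s * ui for s, ui in zip(signal, u))
    return [s - dot * ui for s, ui in zip(signal, u)]

u = [1.0, 0.0, 0.0]               # toy artifact topography (unit norm)
clean = apply_ssp([3.0, 2.0, 1.0], u)
print(clean)                       # component along u is gone: [0.0, 2.0, 1.0]
```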
jhillairet/scikit-rf
doc/source/examples/vectorfitting/vectorfitting_ex2_190ghz_active.ipynb
bsd-3-clause
import skrf
import numpy as np
import matplotlib.pyplot as mplt
End of explanation """ vf.get_rms_error() # plot frequency responses fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() """ Explanation: Checking the results by comparing the model responses to the original sampled data indicates a successful fit, which is also indicated by a small rms error (less than 0.05): End of explanation """ freqs = np.linspace(0, 500e9, 501) # plot model response from dc to 500 GHz fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() """ Explanation: It is a good idea to also check the model response well outside the original frequency range. End of explanation """ vf.vector_fit(n_poles_real=3, n_poles_cmplx=4) vf.plot_convergence() """ Explanation: Second attempt: Maybe an even better fit without that large dc "spike" can be achieved, so let's try again. Unwanted spikes at frequencies outside the fitting band are often caused by unnecessary or badly configured poles. Predictions about the fitting quality outside the fitting band are somewhat speculative and are not exactly controllable without additional samples at those frequencies. Still, let's try to decrease the number of real starting poles to 3 and see if the dc spike is removed: (Note: One could also reduce the real poles and/or increase the complex-conjugate poles. Also see the comment at the end.) 
End of explanation """ vf.get_rms_error() fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() """ Explanation: This fit took more iterations, but it converged nevertheless and it matches the network data very well inside the fitting band. Again, a small rms error is achieved: End of explanation """ vf.vector_fit(n_poles_real=0, n_poles_cmplx=5) vf.plot_convergence() """ Explanation: This looks good, so let's export the model as a SPICE subcircuit. For example: vf.write_spice_subcircuit_s('/home/vinc/Desktop/190ghz_tx.sp') The subcircuit can then be simulated in SPICE with the same AC simulation setup as in the ring slot example: <img src="./ngspice_190ghz_tx_sp_mag.svg" /> <img src="./ngspice_190ghz_tx_sp_smith.svg" /> <div id="comment"></div> Comment on starting poles: During the pole relocation process (first step in the fitting process), the starting poles are sucessively moved to frequencies where they can best match all target responses. Additionally, the type of poles can change from real to complex-conjugate: two real poles can become one complex-conjugate pole (and vise versa). As a result, there are multiple combinations of starting poles which can produce the same final set of poles. However, certain setups will converge faster than others, which also depends on the initial pole spacing. In extreme cases, the algorithm can even be "undecided" if two real poles behave exactly like one complex-conjugate pole and it gets "stuck" jumping back and forth without converging to a final solution. Equivalent setups for the first attempt with n_poles_real=3, n_poles_cmplx=4 (i.e. 3+4): 1+5 3+4 5+3 7+2 9+1 11+0 Equivalent setups for the second attempt with n_poles_real=4, n_poles_cmplx=4 (i.e. 
4+4): 0+6 2+5 4+4 6+3 8+2 10+1 12+0 Examples for problematic setups that do not converge properly due to an oscillation between two (equally good) solutions: 0+5 <--> 2+4 <--> ... 0+7 <--> 2+5 <--> ... End of explanation """ vf.get_rms_error() fig, ax = mplt.subplots(2, 2) fig.set_size_inches(12, 8) vf.plot_s_mag(0, 0, freqs=freqs, ax=ax[0][0]) # s11 vf.plot_s_mag(0, 1, freqs=freqs, ax=ax[0][1]) # s12 vf.plot_s_mag(1, 0, freqs=freqs, ax=ax[1][0]) # s21 vf.plot_s_mag(1, 1, freqs=freqs, ax=ax[1][1]) # s22 fig.tight_layout() mplt.show() """ Explanation: Even though the pole relocation process oscillated between two (or more?) solutions and did not converge, the fit was still successful, because the solutions themselves did converge: End of explanation """
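The bookkeeping behind these equivalences is simple: a real starting pole contributes model order 1 and a complex-conjugate pair contributes order 2, so two setups are interchangeable exactly when n_poles_real + 2 * n_poles_cmplx agrees. A small check that reproduces the lists above (the helper names are ours, not part of scikit-rf):

```python
def model_order(n_real, n_cmplx):
    # a complex-conjugate pole pair counts twice towards the model order
    return n_real + 2 * n_cmplx

def equivalent_setups(order):
    """All (n_real, n_cmplx) combinations with the given total order."""
    return [(order - 2 * n, n) for n in range(order // 2, -1, -1)]

print(equivalent_setups(model_order(3, 4)))   # the 3+4 family: 1+5, 3+4, ..., 11+0
print(equivalent_setups(model_order(4, 4)))   # the 4+4 family: 0+6, 2+5, ..., 12+0
```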
AllenDowney/ModSim
python/soln/examples/wall_soln.ipynb
gpl-2.0
# install Pint if necessary

try:
    import pint
except ImportError:
    !pip install pint

# download modsim.py if necessary

from os.path import basename, exists

def download(url):
    filename = basename(url)
    if not exists(filename):
        from urllib.request import urlretrieve
        local, _ = urlretrieve(url, filename)
        print('Downloaded ' + local)

download('https://raw.githubusercontent.com/AllenDowney/' +
         'ModSim/main/modsim.py')

# import functions from modsim
from modsim import *
They model the wall using two thermal masses connected to the surfaces, and to each other, by thermal resistors. The primary methodology of the paper is a Bayesian method for inferring the parameters of the system (two thermal masses and three thermal resistances). The primary result is a comparison of two models: the one shown here with two thermal masses, and a simpler model with only one thermal mass. They find that the two-mass model is able to reproduce the measured fluxes substantially better. Tempting as it is, I will not replicate their method for estimating the parameters. Rather, I will implement their model and run it with their estimated parameters. The following cells download and read the data. End of explanation """ timestamp_0 = data.index[0] timestamp_0 """ Explanation: The index contains Pandas Timestamp objects, which is good for dealing with real-world dates and times, but not as good for running the simulations, so I'm going to convert to seconds. End of explanation """ time_deltas = data.index - timestamp_0 time_deltas.dtype """ Explanation: Subtracting the first Timestamp yields Timedelta objects: End of explanation """ data.index = time_deltas.days * 86400 + time_deltas.seconds data.head() """ Explanation: Then we can convert to seconds and replace the index. End of explanation """ np.all(np.diff(data.index) == 300) """ Explanation: The timesteps are all 5 minutes: End of explanation """ data.Q_in.plot(color='C2') data.Q_out.plot(color='C0') decorate(xlabel='Time (s)', ylabel='Heat flux (W/$m^2$)') """ Explanation: Plot the measured fluxes. End of explanation """ data.T_int.plot(color='C2') data.T_ext.plot(color='C0') decorate(xlabel='Time (s)', ylabel='Temperature (degC)') """ Explanation: Plot the measured temperatures. 
End of explanation """ R1 = 0.076 # m**2 * K / W, R2 = 0.272 # m**2 * K / W, R3 = 0.078 # m**2 * K / W, C1 = 212900 # J / m**2 / K, C2 = 113100 # J / m**2 / K params = R1, R2, R3, C1, C2 """ Explanation: Making the System object params is a sequence with the estimated parameters from the paper. End of explanation """ def make_system(params, data): """Makes a System object for the given conditions. params: Params object returns: System object """ R1, R2, R3, C1, C2 = params init = State(T_C1 = 16.11, T_C2 = 15.27) t_end = data.index[-1] return System(init=init, R=(R1, R2, R3), C=(C1, C2), T_int_func=interpolate(data.T_int), T_ext_func=interpolate(data.T_ext), t_end=t_end) """ Explanation: We'll pass params to make_system, which computes init, packs the parameters into Series objects, and computes the interpolation functions. End of explanation """ system = make_system(params, data) """ Explanation: Make a System object End of explanation """ system.T_ext_func(0), system.T_ext_func(150), system.T_ext_func(300) """ Explanation: Test the interpolation function: End of explanation """ def compute_flux(t, state, system): """Compute the fluxes between the walls surfaces and the internal masses. state: State with T_C1 and T_C2 t: time in seconds system: System with interpolated measurements and the R Series returns: Series of fluxes """ # unpack the temperatures T_C1, T_C2 = state # compute a series of temperatures from inside out T_int = system.T_int_func(t) T_ext = system.T_ext_func(t) T = [T_int, T_C1, T_C2, T_ext] # compute differences of adjacent temperatures T_diff = np.diff(T) # compute fluxes between adjacent compartments Q = T_diff / system.R return Q """ Explanation: Implementing the model Next we need a slope function that takes instantaneous values of the two internal temperatures and computes their time rates of change. The slope function gets called two ways. When we call it directly, state is a State object and the values it contains have units. 
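Before defining the parameters, it may help to see the arithmetic the two-mass RC model described above is built from, in miniature: the flux through a thermal resistance is Q = ΔT / R, and a thermal mass integrates the net flux it receives, dT/dt = (Q_in − Q_out) / C. A stdlib-only toy step follows — our illustration, separate from the notebook's ModSim code.

```python
def flux(t_hot, t_cold, r):
    """Heat flux in W/m^2 through a thermal resistance r (m^2 K / W)."""
    return (t_hot - t_cold) / r

def euler_step(temp, q_in, q_out, c, dt):
    """One explicit-Euler update of a mass with capacity c (J / m^2 / K)."""
    return temp + (q_in - q_out) / c * dt

q = flux(20.0, 10.0, 0.5)                 # 10 K across 0.5 m^2K/W -> 20 W/m^2
t_next = euler_step(15.0, q, 0.0, 100.0, 1.0)
print(q, t_next)                          # -> 20.0 15.2
```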
When run_solve_ivp calls it, state is an array and the values it contains don't have units. In the second case, we have to apply the units before attempting the computation. require_units applies units if necessary: The following function computes the fluxes between the four zones. End of explanation """ compute_flux(0, system.init, system) """ Explanation: We can test it like this. End of explanation """ def slope_func(t, state, system): """Compute derivatives of the state. state: position, velocity t: time system: System object returns: derivatives of y and v """ Q = compute_flux(t, state, system) # compute the net flux in each node Q_diff = np.diff(Q) # compute the rate of change of temperature dQdt = Q_diff / system.C return dQdt """ Explanation: Here's a slope function that computes derivatives of T_C1 and T_C2 End of explanation """ slopes = slope_func(0, system.init, system) slopes """ Explanation: Test the slope function with the initial conditions. End of explanation """ results, details = run_solve_ivp(system, slope_func, t_eval=data.index) details.message """ Explanation: Now let's run the simulation, generating estimates for the time steps in the data. End of explanation """ results.head() def plot_results(results, data): data.T_int.plot(color='C2') results.T_C1.plot(color='C3') results.T_C2.plot(color='C1') data.T_ext.plot(color='C0') decorate(xlabel='Time (s)', ylabel='Temperature (degC)') plot_results(results, data) """ Explanation: Here's what the results look like. End of explanation """ def recompute_fluxes(results, system): """Compute fluxes between wall surfaces and internal masses. 
results: Timeframe with T_C1 and T_C2 system: System object returns: Timeframe with Q_in and Q_out """ Q_frame = TimeFrame(index=results.index, columns=['Q_in', 'Q_out']) for t, row in results.iterrows(): Q = compute_flux(t, row, system) Q_frame.loc[t] = (-Q[0], -Q[2]) return Q_frame Q_frame = recompute_fluxes(results, system) Q_frame.head() """ Explanation: These results are similar to what's in the paper: . To get the estimated fluxes, we have to go through the results and basically do the flux calculation again. End of explanation """ def plot_Q_in(frame, data): frame.Q_in.plot(color='gray') data.Q_in.plot(color='C2') decorate(xlabel='Time (s)', ylabel='Heat flux (W/$m^2$)') plot_Q_in(Q_frame, data) def plot_Q_out(frame, data): frame.Q_out.plot(color='gray') data.Q_out.plot(color='C0') decorate(xlabel='Time (s)', ylabel='Heat flux (W/$m^2$)') plot_Q_out(Q_frame, data) """ Explanation: Let's see how the estimates compare to the data. End of explanation """
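The two-mass wall model above leans on the ModSim helpers (`System`, `run_solve_ivp`, `interpolate`). As a self-contained sanity check of the same RC network, here is a minimal sketch that assumes constant interior and exterior temperatures in place of the interpolated measurements, and uses a plain forward-Euler loop instead of `run_solve_ivp`:

```python
import numpy as np

# Parameter estimates from the paper (m^2*K/W and J/(m^2*K))
R = np.array([0.076, 0.272, 0.078])   # R1, R2, R3
C = np.array([212900.0, 113100.0])    # C1, C2

# Assumed constant boundary temperatures (degC), standing in for the
# interpolated measurements used in the notebook
T_int, T_ext = 16.11, 5.0
state = np.array([16.11, 15.27])      # initial T_C1, T_C2

dt, t_end = 60.0, 10 * 86400          # 60 s Euler steps, ten simulated days
for _ in range(int(t_end / dt)):
    T = np.array([T_int, state[0], state[1], T_ext])
    Q = np.diff(T) / R                # flux through each resistor (W/m^2)
    state = state + dt * np.diff(Q) / C

print(state)  # approaches the steady-state temperatures of the two masses
```

With fixed boundaries the network relaxes to the steady state in which the same flux, Q = (T_ext - T_int) / (R1 + R2 + R3), crosses every resistor, so the final temperatures can be checked directly against T_int + Q * R1 and T_ext - Q * R3.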
adityaka/misc_scripts
python-scripts/data_analytics_learn/ipython_notebook_tutorial.ipynb
bsd-3-clause
# Hit shift + enter or use the run button to run this cell and see the results print 'hello world' # The last line of every code cell will be displayed by default, # even if you don't print it. Run this cell to see how this works. 2 + 2 # The result of this line will not be displayed 3 + 3 # The result of this line will be displayed, because it is the last line of the cell """ Explanation: Text Using Markdown If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called "running" the cell, and you can also do it using the run button in the toolbar. Code cells One great advantage of IPython notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. The following cell is a code cell. End of explanation """ # If you run this cell, you should see the values displayed as a table. # Pandas is a software library for data manipulation and analysis. You'll learn to use it later in this course. import pandas as pd df = pd.DataFrame({'a': [2, 4, 6, 8], 'b': [1, 3, 5, 7]}) df # If you run this cell, you should see a scatter plot of the function y = x^2 %pylab inline import matplotlib.pyplot as plt xs = range(-30, 31) ys = [x ** 2 for x in xs] plt.scatter(xs, ys) """ Explanation: Nicely formatted results IPython notebooks allow you to display nicely formatted results, such as plots and tables, directly in the notebook. You'll learn how to use the following libraries later on in this course, but for now here's a preview of what IPython notebook can do. 
End of explanation """ class_name = "Intro to Data Analysis" message = class_name + " is awesome!" message """ Explanation: Creating cells To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created. To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons. Re-running cells If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!" End of explanation """ import unicodecsv ## Longer version of code (replaced with shorter, equivalent version below) # enrollments = [] # f = open(enrollments_filename, 'rb') # reader = unicodecsv.DictReader(f) # for row in reader: # enrollments.append(row) # f.close() def read_csv(filename): with open(filename, 'rb') as f: reader = unicodecsv.DictReader(f) lines = list(reader) return lines ### Write code similar to the above to load the engagement ### and submission data. The data is stored in files with ### the given filenames. Then print the first row of each ### table to make sure that your code works. You can use the ### "Test Run" button to see the output of your code. 
enrollments_filename = 'enrollments.csv' engagement_filename = 'daily_engagement.csv' submissions_filename = 'project_submissions.csv' enrollments = read_csv(enrollments_filename) daily_engagement = read_csv(engagement_filename) project_submissions = read_csv(submissions_filename) enrollments[0] daily_engagement[0] project_submissions[0] """ Explanation: Once you've run all three cells, try modifying the first one to set class_name to your name, rather than "Intro to Data Analysis", so you can print that you are awesome. Then rerun the first and third cells without rerunning the second. You should have seen that the third cell still printed "Intro to Data Analysis is awesome!" That's because you didn't rerun the second cell, so even though the class_name variable was updated, the message variable was not. Now try rerunning the second cell, and then the third. You should have seen the output change to "your name is awesome!" Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below". One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off. Excercise 6 in parsing CSVs End of explanation """
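The `read_csv` helper above targets Python 2, where the third-party `unicodecsv` package is needed for Unicode handling. In Python 3 the standard-library `csv` module reads Unicode text directly, so an equivalent helper needs no extra dependency. The sample rows below are invented for illustration; the real files have more columns.

```python
import csv
import io

def read_csv(f):
    """Read an open CSV text file into a list of dict rows."""
    return list(csv.DictReader(f))

# An in-memory sample standing in for enrollments.csv
sample = io.StringIO("account_key,status\n448,canceled\n448,current\n")
rows = read_csv(sample)
print(rows[0]['status'])  # -> canceled
```

In a script you would pass `open(filename, newline='')` instead of the `StringIO` object, as the `csv` module's documentation recommends.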
liganega/Gongsu-DataSci
previous/notes2017/old/NB-15-Recursion.ipynb
gpl-3.0
def factorial(n): return n * factorial(n-1) """ Explanation: 재귀함수 이번에 공부할 주제는 재귀(recursion)이다. 재귀는 한자용어로 "본래 있던 곳으로 다시 돌아온다"의 의미를 갖는다. 재귀를 이용하여 구현한 함수를 _재귀함수(recursive function)_라 부른다. 재귀함수 용법 재귀함수를 사용하면 복잡한 코드를 매우 간단하게 구현할 수 있다는 장점이 있다. 하지만 재귀함수를 호출하면 메모리 내부에서 어떤 변화가 어떻게 발생하는가를 이해하는 일이 경우에 따라 간단하지 않다. 또한 시간 및 공간 복잡도가 증가해서 런타임 오류(runtime error)가 발생하기가 쉽다. 특히 무한루프가 발생하지 않도록 조심해야 한다. 조심해서 잘 사용하면 간단하면서도 훌륭한 코드를 구현할 수 있다. 세 개의 예제를 통해 재귀함수의 전형적인 사용법을 익히고자 한다. 또한 각 예제를 자세히 살펴보면서 앞서 언급한 재귀함수 활용의 장단점을 확인할 것이다. 이번에 다를 함수는 아래 주제와 관련되어 있다. 팩토리얼 계산 피보나찌 수열 계산 시에르핀스키 삼각형 계산 이후 연습문제를 통해 보다 재귀의 보다 다양한 활용을 살펴볼 것이다. 퀵정렬 함수 구현 예제: 팩토리얼 계산 재귀함수의 가장 단순한 활용은 팩토리얼을 계산하는 함수를 작성하는 것이다. n 팩토리얼(factorial)은 1부터 n까지의 연속된 자연수를 차례로 곱한 값이다. 기호로는 n!과 같이 느낌표(!)를 사용한다. 1! = 1 2! = 2 * 1 3! = 3 * 2 * 1 4! = 4 * 3 * 2 * 1 등등. 하지만 위와 같이 작동하는 코드를 구현할 수 없다. 예를 들어 아래와 같은 코드는 문법에 맞지 않는다. def factorial(n): return n * (n-1) * ... * 2 * 1 위 코드의 문제점은 바로 '...'에 있다. 왜냐하면 n에 따라 길이가 달라지기 때문이다. 그럼 어떻게 팩토리얼 함수를 구현할 수 있을까? 어떤 규칙을 찾아내야 한다. n 이 무엇이건 변하지 않는 규칙이 필요하다. 예를 들어 다음과 같이 생각할 수도 있다. 1! = 1 2! = 2 * 1! 3! = 3 * 2! 4! = 4 * 3! 이렇게 하면 앞서의 경우와는 달리 등호 오른편의 식의 모양이 n에 따라 변하지 않으며, 단지 사용되는 숫자만 변할 뿐이다. 따라서 변하는 숫자를 인자로, 변하지 않는 모양을 리턴값으로 사용하면 된다. 즉, n!를 계산하는 함수를 다음가 같이 구현할 수 있다. 잠시 쉬어 가기: '규칙'이란 말 자체가 사실 변하지 않는 무언가를 찾아 내어 명시한 것들을 의미한다. 프로그래밍 관련 전문가들은 인베리언트(invariant)라 부른다. 따라서 프로그래밍의 핵심이 바로 인베리언트를 찾아내는 것이라 해도 과장이 아니다. End of explanation """ def factorial(n): if n <= 0: return 1 else: return n * factorial(n-1) """ Explanation: 그런데 위와 같이 구현한 factorial 함수를 호출하면 어떤 n에 대해서도 무한루프가 발생한다. 이점이 바로 재귀함수를 사용할 경우 가장 많이 발생할 수 있는 오류를 설명한다. 즉, 재귀함수를 구현하고자 할 경우 반드시 어떤 경우에도 무한루프가 발생하지 않도록 제어장치를 마련해야 한다. 예를 들어 n 값이 특별한 값을 갖게 되면 멈추도록 하면 된다. 아래 정의를 살펴보면 n의 값이 양의 정수가 아니면 1을 리턴하면서 더 이상의 함수호출을 하지 않는 장치가 마련되어 있다. End of explanation """ factorial(10) """ Explanation: 이제 10!을 계산해보자. 
End of explanation """ %load_ext tutormagic %%tutor --lang python2 def factorial(n): if n <= 0: return 1 else: return n * factorial(n-1) factorial(10) """ Explanation: 그런데 factorial 함수를 호출하면 메모리 내부에서 어떤 변화가 일어나는지를 이해하는 것이 중요하다. 바로 이점을 이해할 수 있으면 예를 들어 일반적으로 개인 가정에서 사용하는 PC 또는 노트북에서는 factorial(1000)이 제대로 계산되지 못하는 이유도 이해할 수 있다. factorial(1000)을 호출하면 아마도 약간의 시간이 흐른 뒤 RuntimeError가 발생하면서 계산이 중지되는 현상을 경험하게 될 것이다. 이것을 이해하기 위해서는 factorial 함수를 호출할 경우 메모리 내부에서 일어나는 일을 살펴볼 필요가 있다. 예를 들어 factorial(10)를 호출할 때 메모리 내부에서 일어나는 일들을 Python Tutor를 이용하여 살펴보자. End of explanation """ from IPython.display import Image Image(filename='images/factorial-tree-recursion.jpg', width=300) """ Explanation: factorial(10)를 호출하면 메모리 내부에서는 벌어지는 일을 설명하면 다음과 같다. 먼저 factorial(10) 호출을 담당하는 메모리를 스택영역에 할당한다. 이제 10 * factorial(9)를 실행하기 위해 다시 factorial(9)를 호출한다. 즉, factorial(9) 호출을 담당하는 메모리를 스택영역에 할당한다. 2번 과정과 동일하게 factorial(8) 호출을 담당하는 메모리를 스택영역에 할당한다. 이와 같은 작업을 factorial(0)이 호출될 때까지 반복한다. factorial(0)은 1을 바로 리턴하기 때문에, 그 리턴값은 이제 factorial(1) 의 리턴값을 계산하는 데 사용된다. 그리고 factorial(0)의 호출을 위해 사용된 메모리 공간은 삭제된다. 이제 factorial(1)은 1 * 1을 계산하여 1을 리턴한다. 그리고 factorial(1)의 호출을 위해 사용된 메모리 공간도 삭제된다. 5번 과정과 동일한 과정이 factorial(10)의 리턴값이 계산될 때까지 반복되며, 리턴값이 주어지면 이제 마지막으로 factorial(10)의 호출을 위해 사용된 메모리 공간이 삭제된다. 좀 더 간략하게 요약정리하면 다음과 같다. factorial(10), factorial(9), ..., factorial(0) 의 호출을 위해 필요한 공간이 호출 순서대로 메모리의 스택영역에서 만들어진다. 이제 역순으로 factorial(0), factorial(1), ...., factorial(10)의 리턴값이 결정되며 동시에 나열된 순서대로 사용된 스택영역의 메모리 공간이 삭제된다. 재귀함수를 호출할 때 사용되는 메모리 영역을 '스택'이라 부르는 이유가 바로 앞서 설명한 이유 때문이다. 즉, First In Last Out(FILO) 원칙이 성립하기 공간이 바로 스택이다. factorial(10)을 호출하였을 때 메모리에서 일어나는 일들을 도식화하면 아래와 같다. End of explanation """ def Fib(n): if n == 0: return 0 elif n == 1: return 1 else: return (Fib(n-1) + Fib(n-2)) Fib(10) """ Explanation: 위 그림에서 계단 아래로 향하는 화살표들은 연속된 factorial 함수의 호출을 의미하고, 계단 위로 향하는 화살표들은 리턴값을 역순으로 넘겨주는 과정을 의미한다. 그런데 factorial(1000)의 경우 위 계단 같은 현상이 너무 많이 일어나서 스택영역에 과부하가 걸리게 되어 어느 순간 런타임 에러가 발생하게 된다. 
이와같은 문제점을 해결하기 위한 해결책은 연습문제를 통해 살펴볼 예정이다. 예제: 피보나찌 수열 피보나치 수는 0과 1로 시작하며, 새로운 피보나치 수는 바로 앞의 두 피보나치 수의 합이 된다. 피보나찌 수열의 처음 20개는 아래와 같다. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181 n번째 피노나찌 수를 구해주는 함수는 아래와 같다. End of explanation """ %%tutor --lang python2 def Fib(n): if n == 0: return 0 elif n == 1: return 1 else: return (Fib(n-1) + Fib(n-2)) Fib(5) """ Explanation: 그런데 Fib 함수는 시간복잡도가 매우 크다. 사실 너무 커서 실질적으로 사용할 수 없다. 인자로 양의 정수 n에 대해 2 ** n의 복잡도를 갖는다. Fib(100)도 제대로 구하지 못하는 이유가 여기에 있다. 시간복잡도가 어느정도인지를 살펴보기 위해 Fib(5)를 호출하였을 경우 메모리 내부에서 어떤 변화가 발생하는지를 알아보자. 먼저 python tutor를 활용해보자. End of explanation """ from IPython.display import Image Image(filename='images/fib5-tree.jpg', width=550) """ Explanation: 그런데 'Forward' 단추를 누르면서 스택영역에서 벌어지는 일들을 추적하기가 어렵다. 74라는 스텝수에서 알 수 있듯이 Fib(5)를 계사하는데 매우 많은 변화가 일어나기 때문이다. Fib(10)의 경우는 스텝수가 299로 변한다. 즉, 인자값이 조금 늘 때마다 메모리 내부에서의 변화는 훨씬 많이 발생한다. 그렇다면 Fib(5)를 계산하기 위해 Fib 함수가 몇 번 호출될까? 아래 그림을 참고해보자. End of explanation """ from IPython.display import Image Image(filename='images/fib5-tree-recursion.jpg', width=400) """ Explanation: 위 그림에서 Fib 함수가 15번 호출되었음을 확인할 수 있다. 그렇다면 Fib(10)을 계산하기 위해서는 Fib 함수를 몇 번 호출할까? 대략 200번 정도 호출할 것이다. 왜냐하면 Fib 한 번 호출할 때마다 두 번 더 Fib 함수를 호출하기 때문에 2의 지수승에 비례해서 호출 횟수가 증가한다. 그런데 위 그림에서 호출되는 Fib 함수들의 순서는 어떻게 될까? factorial의 경우는 가지치기를 하지 않기 때문에 계단 모양으로, 즉, 숫자가 감소하는 것과 동일하게 차례대로 호출이 이루어졌다. 하지만 Fib의 경우는 좀 더 복잡하다. 위 그림에서 화살표에 달린 숫자들이 Fib 함수가 호출되는 순서를 의미한다. 왜 그럴까? Fib(5)를 호출해 보자. 그러면 아래와 같이 스택영역 상태가 변한다. ===== 먼저 Fib(5) 호출 그러면 Fib(4) + Fib(3)를 실행해야 하는데 이를 위해 먼저 Fib(4) 호출. 그러면 Fib(3) + Fib(2)를 실행해야 하는데 이를 위해 먼저 Fib(3) 호출 그러면 Fib(2) + Fib(1)를 실행해야 하는 이를 위해 먼저 Fib(2) 호출 그러면 Fib(1) + Fib(0)을 실행해야 하는데 이를 위해 먼저 Fib(1)을 호출하여 1을 리턴값으로 저장한다. Fib(0)을 호출하여 1을 리턴값으로 저장한다. 1+1을 계산하여 Fib(2)의 리턴값으로 2를 저장한다. 이제 Fib(1) 호출하여 1을 리턴값으로 저장한다. 2 + 1을 계산하여 Fib(3)의 리턴값으로 3을 저장한다. 이제 Fib(2) 호출. 그러면 Fib(1) + Fib(0)을 실행해야 하는데 이를 위해 먼저 Fib(1)을 호출하여 1을 리턴값으로 저장한다. Fib(0)을 호출하여 1을 리턴값으로 저장한다. 1+1을 계산하여 Fib(2)의 리턴값으로 2를 저장한다. 
3 + 2를 계산하여 Fib(4)의 리턴값으로 5를 저장한다. 이제 Fib(3) 호출. 그러면 Fib(2) + Fib(1)를 실행해야 하는 이를 위해 먼저 Fib(2) 호출 그러면 Fib(1) + Fib(0)을 실행해야 하는데 이를 위해 먼저 Fib(1)을 호출하여 1을 리턴값으로 저장한다. Fib(0)을 호출하여 1을 리턴값으로 저장한다. 1+1을 계산하여 Fib(2)의 리턴값으로 2를 저장한다. 이제 Fib(1) 호출하여 1을 리턴값으로 저장한다. 2 + 1을 계산하여 Fib(3)의 리턴값으로 3을 저장한다. 5 + 3를 계산하여 8을 Fib(5)의 리턴값으로 돌려준다. ===== 위와 같이 말로하면 따라가면서 이해하기가 쉽지 않다. 아래 그림을 참조하면 좀 더 쉬울 것이다. End of explanation """ Image(filename='images/sierpinski.jpg', width=300) """ Explanation: 위 그림에서 화살표는 앞서의 그림에서처럼 호출되는 순서를 나타낸다. 위 그림이 마치 나무 뿌리 또는 가지가 뻗어나가는 것과 비슷하다고 해서 나무 재귀(tree recursion) 형식으로 Fib 함수가 구현되었다고 말하기도 한다. 재귀함수를 잘 활용하려면 바로 이 '나무 재귀'를 잘 이해해야 한다. 주의: 아래로 향하는 화살표는 함수호출을 의미하는 반면에 위로 향하는 화살표는 호출된 함수의 리턴값이 결정되었음을 의미한다. 예제: 시에르핀스키 삼각형 앞서 다룬 피보나찌 수열의 경우처럼 나무재귀를 이용하여 구현할 수 있는 시에르핀스키 삼각형을 살펴보도록 하자. 피보나찌의 경우와 비슷하지만 이해가 좀 더 어려울 수 있다. 먼저, 시에르핀스키 삼각형(Sierpiński triangle)은 19세기 초반에 활동안 폴란드 수학자 바츠와프 시에르핀스키(Wacław F. Sierpiński)의 이름을 딴 프랙털 도형이다. 시에르핀스키 삼각형은 다음과 같은 방법을 통해 얻을 수 있다: 정삼각형 하나에서 시작한다. 정삼각형의 세 변의 중점을 이으면 원래의 정삼각형 안에 작은 정삼각형이 만들어진다. 이 작은 정삼각형을 제거한다. 남은 정삼각형들에 대해서도 2.를 실행한다. 3.을 무한히 반복한다. 아래 그림 참조: End of explanation """ %matplotlib inline from __future__ import division # 그래프를 그리기 위해 pyplot 모듈이 필요 import matplotlib.pyplot as plt # pyplot의 fill 함수를 이용하여 파랑색 및 흰색 삼각형 그리는 함수 def drawBlue(p1,p2,p3): plt.fill([p1[0],p2[0],p3[0]],[p1[1],p2[1],p3[1]],facecolor='b',edgecolor='none') def drawWhite(p1,p2,p3): plt.fill([p1[0],p2[0],p3[0]],[p1[1],p2[1],p3[1]],facecolor='w',edgecolor='none') # 각 변의 중점을 구하는 함수 def midpoint(p1,p2): return ((p1[0]+p2[0])/2,(p1[1]+p2[1])/2) # 시에르핀스키 삼각형을 구하는 함수 구현 # p1, p2, p3는 처음에 주어지는 삼각형의 세 꼭지점들의 좌표값이다. # repeat 값은 삼각형을 쪼개는 과정을 몇 번 했는지를 세어 준다. # limit 값은 repeat의 값의 한도를 정해준다. # 즉, limit 값만큼 삼각형을 쪼개는 과정을 반복한다. 
def sierpinski(p1,p2,p3,repeat,limit): drawWhite(midpoint(p1,p2),midpoint(p2,p3),midpoint(p3,p1)) if (repeat <= limit): sierpinski(p1,midpoint(p1,p2),midpoint(p1,p3),repeat+1, limit) sierpinski(p2,midpoint(p1,p2),midpoint(p2,p3),repeat+1, limit) sierpinski(p3,midpoint(p1,p3),midpoint(p2,p3),repeat+1, limit) """ Explanation: 앞서 설명에서 보았듯이 시에르핀스키 삼각형은 각 변의 길이를 반으로 나누어 세 개의 새로운 작은 삼각형을 만드는 과정을 무한 반복하는 것이다. 즉, "각 변의 모서리를 반으로 나누어 세 개의 삼각형을 새로 만든다" 과정이 반복된다. 이것을 재귀로 구현한 파이썬 코드는 아래아 같다. End of explanation """ # pyplot 에서 그림을 그리기 위한 준비 작업 필요 plt.figure() plt.subplot(1,1,1) plt.axis('off') # 먼저 하나의 파란색 삼각형을 그린다. drawBlue((0,0),(7,0),(3.5,6.0621778265)) # 이제 주어진 파란색 삼각형을 세 개의 작은 삼각형으로 쪼개어 흰색으로 칠하는 과정을 반복한다. # 아래 예제는 limit 값을 4으로 주었다. sierpinski((0,0),(7,0),(3.5,6.0621778265),0,4) # 이제 그림을 보여달라고 해야 한다. plt.show() """ Explanation: 위 코드에서 마지막에 정의된 sierpinski 함수가 삼각형을 쪼개어 새로운 세 개의 삼각형을 반복적으로 만들어주는 기능을 수행한다. 나머지 코드는 아래 내용을 담고 있다. 1번 줄: ipython notebook에서 그림을 보여주도록 하는 명령문임. 파이썬 코드와 별개임. 3번 줄: 파이썬2에서 float형 나눗셈을 위해 필요함. 7번 줄: matplotlib.pyplot 모듈이 필요함. plt라는 약칭 사용 선언. 11번 줄: matplotlib.pyplot 모듈에 포함되어 있는 fill 함수를 이용하여 세 꼬지점의 좌표가 주어질 경우 파랑색 삼각형을 그려주는 함수. 14번 줄: matplotlib.pyplot 모듈에 포함되어 있는 fill 함수를 이용하여 세 꼬지점의 좌표가 주어질 경우 흰색 삼각형을 그려주는 함수. 19번 줄: 두 꼭지점의 중점을 찾아주는 함수. 삼각형 모서리의 중점을 찾기위해 사용함. 시에르핀스키 삼각형을 만들기 위해 필요함. 29번 줄: 시에르핀스키 함수 정의 필요한 인자 p1, p2, p3: 시작할 때 필요한 삼각형의 세 꼭지점 좌표 repeat: 삼각형 쪼개기 과정을 몇 번 반복하였는지 기억한다. limit: 삼각형 쪼개기 과정을 반복할 수 있는 횟수의 최대 허용치를 선언하기 위해 사용된다. 이 함수를 호출하면 제일먼저 '30번 줄'이 실행되어 삼각형을 한 번 쪼개어 중간 부분의 역삼각형을 흰색으로 칠한다. 이제 '31번 줄'로 넘어와서 repeat 값이 limit 값을 초과했는지를 확인한다. 초과하였을 경우: 작동을 멈춘다. 즉, 더이상의 쪼개기 반복이 없다. 초과하지 않았을 경우: '32번 줄'로 넘어와서 sierpinski 함수를 다시 호출한다. 대신에 새로 생성된 작은 삼각형 중에서 왼쪽 아래에 위치한 삼각형에 대해서, 그리고 repeat 값을 하나 증가시킨 값에 대해서 '30번 줄'부터 다시 실행한다. 즉, repeat값이 limit 값을 초과할 때 까지 위에서 설명한 작업을 반복한다. 반복할 때마다 sierpinski 함수 호출에 사용되는 인자들의 값이 변함에 주의해야 한다. '32번 줄'에서 호출한 sierpinski 함수가 중단된 후에야 '33번 줄'로 넘어와서 다시 sierpinski 함수를 호출한다. 
이번에 사용되는 인자는 새로 생성된 작은 삼각형 중에서 오른쪽 아래에 위치한 삼각형의 꼭지점이며 나머지 인자들의 값과 행동방식은 a.의 경우와 동일하다. '34번 줄'은 앞선 '33번 줄'의 sierpinski 함수의 호출이 완료된 후에야 실행되며 새로 생성된 작은 삼각형 중에서 가운데 위에 위치한 삼각형의 꼭지점이 사용된다. 나머지 인자들의 값과 행동방식은 a. 의 경우와 동일하다. 이제 sierpinski 함수를 이용하여 시에르핀스키 삼각형을 그려보자. limit 값을 이용하여 쪼개기 반복 횟수를 제한해야 하는데 6정도가 좋다. 그 이상은 시간이 너무 올래 걸리며 사람 눈으로는 섬세함 정도가 구분되지 않는다. pyplot 모듈을 이용하여 그린 그림을 확인하려면 아래 코드에서 '3번 ~ 5번' 줄과 '18번' 줄에 있는 명령어를 실행해야 한다. 보다 자세한 설명은 이후에 다뤄질 예정이다. 우선 따라하는 것만으로도 충분하다. End of explanation """ Image(filename='images/sierpinski-cases.jpg', width=400) """ Explanation: 위 그림이 만들어지는 과정은 아래와 같다. End of explanation """ Image(filename='images/sierpinski-tree-recursion.jpg', width=350) """ Explanation: 피보나찌 수열의 경우와 마친가지로 sierpinski 함수가 호출되는 순서, 즉 각각의 삼각형이 만들어지는 순서를 이해할 수 있어야 한다. 아래 그림을 참조할 수 있다. End of explanation """ def quicksort(array): if len(array) < 2: return array else: pivot = array[0] less = [] greater = [] for i in range(len(array))[1:]: if array[i] <= pivot: less.append(array[i]) else: greater.append(array[i]) return quicksort(less) + [pivot] + quicksort(greater) quicksort([3, 2, 5, 1, 8, 20, 18, 100, 70]) """ Explanation: Fib함수의 경우와는 달리 sierpinski 함수가 호출될 때마다 세 개의 가지를 새로 친다. 하지만 Fib 함수의 경우처럼 각각의 함수가 호출되어 리턴값이 결정될 때 까지 다른 가지의 호출은 기다려야 한다. 즉,Fib의 경우보다 좀 더 많은 가지를 가진 나무재귀 모양을 띄지만 작동하는 방식은 기본적으로 동일하다. 재귀함수 활용 요약 정리 세 개의 예제를 활용해 재귀의 활용을 살펴보았다. 재귀함수를 호출하였을 경우 메모리 스택영역에서 함수호출이 어떤 순서로 영향을 미치는가를 살펴보았다. 특히, 나무재귀(tree recursion)에 대한 이해가 제일 중요하다는 것을 명심해야 한다. 연습문제 연습: 퀵정렬 함수 구현 퀵 정렬은 아래에 설명된 분할 정복(divide and conquer) 방법을 통해 리스트를 정렬한다. 리스트 가운데서 하나의 원소를 임의로 고른다. 이렇게 고른 원소를 _피벗_이라 하며 리스트의 첫번 째 항목을 선택해도 일반적으로 무난하다. 기존의 리스트를 두 개의 새로운 리스트로 쪼갠다. 처음 리스트는 피벗보다 값이 크지 않은 모든 원소들로 구성되고, 둘째 리스트는 피벗보다 큰 모든 원소들로 구성된다. 피벗을 기분으로 처음 리스트는 피벗의 왼편에, 둘째 리스트는 피벗의 오른편에 위치시킨다. 분할된 두 개의 작은 리스트에 대해 재귀(Recursion)적으로 이 과정을 반복한다. 재귀는 리스트의 길이가 0이나 1이 될 때까지 반복된다. 재귀 호출이 한번 진행될 때마다 인자로 사용되는 리스트의 길이가 최소한 하나씩 줄게되므로, 이 알고리즘은 반드시 끝난다. 이제 리스트 자료형을 인자로 입력받아 해당 리스트를 오름차순으로 정렬하는 함수 quicksort를 구현해보자. 
힌트: Wikipedia에서 퀵정렬을 설명하면서 사용한 유사코드는 다음과 같다. function quicksort(array) var list less, greater if length(array) &lt; 2 return array select and remove a pivot value pivot from array for each x in array if x &lt; pivot + 1 append x to less else append x to greater return concatenate(quicksort(less), pivot, quicksort(greater)) 견본답안 End of explanation """
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/end_to_end_ml/solutions/sample_babyweight.ipynb
apache-2.0
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst !pip install --user google-cloud-bigquery==1.25.0 """ Explanation: Creating a Sampled Dataset Learning Objectives Setup up the environment Sample the natality dataset to create train, eval, test sets Preprocess the data in Pandas dataframe Introduction In this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample. We will set up the environment, sample the natality dataset to create train, eval, test splits, and preprocess the data in a Pandas dataframe. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. Set up environment variables and load necessary libraries End of explanation """ from google.cloud import bigquery import pandas as pd """ Explanation: Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries. End of explanation """ %%bash # TODO 1 export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "$PROJECT PROJECT = "cloud-training-demos" # Replace with your PROJECT """ Explanation: Set environment variables so that we can use them throughout the notebook. End of explanation """ bq = bigquery.Client(project = PROJECT) """ Explanation: Create ML datasets by sampling using BigQuery We'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab. End of explanation """ modulo_divisor = 100 train_percent = 80.0 eval_percent = 10.0 train_buckets = int(modulo_divisor * train_percent / 100.0) eval_buckets = int(modulo_divisor * eval_percent / 100.0) """ Explanation: We need to figure out the right way to divide our hash values to get our desired splits. 
To do that we need to define some values to hash within the module. Feel free to play around with these values to get the perfect combination. End of explanation """ def display_dataframe_head_from_query(query, count=10): """Displays count rows from dataframe head from query. Args: query: str, query to be run on BigQuery, results stored in dataframe. count: int, number of results from head of dataframe to display. Returns: Dataframe head with count number of results. """ df = bq.query( query + " LIMIT {limit}".format( limit=count)).to_dataframe() return df.head(count) """ Explanation: We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows. End of explanation """ # Get label, features, and columns to hash and split into buckets hash_cols_fixed_query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, year, month, CASE WHEN day IS NULL THEN CASE WHEN wday IS NULL THEN 0 ELSE wday END ELSE day END AS date, IFNULL(state, "Unknown") AS state, IFNULL(mother_birth_state, "Unknown") AS mother_birth_state FROM publicdata.samples.natality WHERE year > 2000 AND weight_pounds > 0 AND mother_age > 0 AND plurality > 0 AND gestation_weeks > 0 """ display_dataframe_head_from_query(hash_cols_fixed_query) """ Explanation: For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. 
Feel free to try less or more in the hash and see how it changes your results. End of explanation """ data_query = """ SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT( CONCAT( CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING) ) ) AS hash_values FROM ({CTE_hash_cols_fixed}) """.format(CTE_hash_cols_fixed=hash_cols_fixed_query) display_dataframe_head_from_query(data_query) """ Explanation: Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise the CASE WHEN would become COALESCE(wday, day, 0) AS date. You can read more about it here. Next query will combine our hash columns and will leave us just with our label, features, and our hash values. End of explanation """ # Get the counts of each of the unique hash of our splitting column first_bucketing_query = """ SELECT hash_values, COUNT(*) AS num_records FROM ({CTE_data}) GROUP BY hash_values """.format(CTE_data=data_query) display_dataframe_head_from_query(first_bucketing_query) """ Explanation: The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY. End of explanation """ # Get the number of records in each of the hash buckets second_bucketing_query = """ SELECT ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index, SUM(num_records) AS num_records FROM ({CTE_first_bucketing}) GROUP BY ABS(MOD(hash_values, {modulo_divisor})) """.format( CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor) display_dataframe_head_from_query(second_bucketing_query) """ Explanation: The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records. 
End of explanation """ # Calculate the overall percentages percentages_query = """ SELECT bucket_index, num_records, CAST(num_records AS FLOAT64) / ( SELECT SUM(num_records) FROM ({CTE_second_bucketing})) AS percent_records FROM ({CTE_second_bucketing}) """.format(CTE_second_bucketing=second_bucketing_query) display_dataframe_head_from_query(percentages_query) """ Explanation: The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query. End of explanation """ # Choose hash buckets for training and pull in their statistics train_query = """ SELECT *, "train" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= 0 AND bucket_index < {train_buckets} """.format( CTE_percentages=percentages_query, train_buckets=train_buckets) display_dataframe_head_from_query(train_query) """ Explanation: We'll now select the range of buckets to be used in training. End of explanation """ # Choose hash buckets for validation and pull in their statistics eval_query = """ SELECT *, "eval" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= {train_buckets} AND bucket_index < {cum_eval_buckets} """.format( CTE_percentages=percentages_query, train_buckets=train_buckets, cum_eval_buckets=train_buckets + eval_buckets) display_dataframe_head_from_query(eval_query) """ Explanation: We'll do the same by selecting the range of buckets to be used evaluation. End of explanation """ # Choose hash buckets for testing and pull in their statistics test_query = """ SELECT *, "test" AS dataset_name FROM ({CTE_percentages}) WHERE bucket_index >= {cum_eval_buckets} AND bucket_index < {modulo_divisor} """.format( CTE_percentages=percentages_query, cum_eval_buckets=train_buckets + eval_buckets, modulo_divisor=modulo_divisor) display_dataframe_head_from_query(test_query) """ Explanation: Lastly, we'll select the hash buckets to be used for the test split. 
End of explanation """ # Union the training, validation, and testing dataset statistics union_query = """ SELECT 0 AS dataset_id, * FROM ({CTE_train}) UNION ALL SELECT 1 AS dataset_id, * FROM ({CTE_eval}) UNION ALL SELECT 2 AS dataset_id, * FROM ({CTE_test}) """.format(CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query) display_dataframe_head_from_query(union_query) """ Explanation: In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after. End of explanation """ # Show final splitting and associated statistics split_query = """ SELECT dataset_id, dataset_name, SUM(num_records) AS num_records, SUM(percent_records) AS percent_records FROM ({CTE_union}) GROUP BY dataset_id, dataset_name ORDER BY dataset_id """.format(CTE_union=union_query) display_dataframe_head_from_query(split_query) """ Explanation: Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to that we were hoping to get. End of explanation """ # TODO 2 # every_n allows us to subsample from each of the hash values # This helps us get approximately the record counts we want every_n = 1000 splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor) def create_data_split_sample_df(query_string, splitting_string, lo, up): """Creates a dataframe with a sample of a data split. Args: query_string: str, query to run to generate splits. splitting_string: str, modulo string to split by. lo: float, lower bound for bucket filtering for split. up: float, upper bound for bucket filtering for split. Returns: Dataframe containing data split sample. 
""" query = "SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}".format( query_string, splitting_string, int(lo), int(up)) df = bq.query(query).to_dataframe() return df train_df = create_data_split_sample_df( data_query, splitting_string, lo=0, up=train_percent) eval_df = create_data_split_sample_df( data_query, splitting_string, lo=train_percent, up=train_percent + eval_percent) test_df = create_data_split_sample_df( data_query, splitting_string, lo=train_percent + eval_percent, up=modulo_divisor) print("There are {} examples in the train dataset.".format(len(train_df))) print("There are {} examples in the validation dataset.".format(len(eval_df))) print("There are {} examples in the test dataset.".format(len(test_df))) """ Explanation: Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train, eval, test sets do not overlap and takes a subsample of our global splits. End of explanation """ train_df.head() """ Explanation: Preprocess data using Pandas We'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. Let's start by examining the training dataset as is. End of explanation """ train_df.describe() """ Explanation: Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data) End of explanation """ # TODO 3 def preprocess(df): """ Preprocess pandas dataframe for augmented babyweight data. Args: df: Dataframe containing raw babyweight data. 
Returns: Pandas dataframe containing preprocessed raw babyweight data as well as simulated no ultrasound data masking some of the original data. """ # Clean up raw data # Filter out what we don"t want to use for training df = df[df.weight_pounds > 0] df = df[df.mother_age > 0] df = df[df.gestation_weeks > 0] df = df[df.plurality > 0] # Modify plurality field to be a string twins_etc = dict(zip([1,2,3,4,5], ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"])) df["plurality"].replace(twins_etc, inplace=True) # Clone data and mask certain columns to simulate lack of ultrasound no_ultrasound = df.copy(deep=True) # Modify is_male no_ultrasound["is_male"] = "Unknown" # Modify plurality condition = no_ultrasound["plurality"] != "Single(1)" no_ultrasound.loc[condition, "plurality"] = "Multiple(2+)" # Concatenate both datasets together and shuffle return pd.concat( [df, no_ultrasound]).sample(frac=1).reset_index(drop=True) """ Explanation: It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect. End of explanation """ train_df = preprocess(train_df) eval_df = preprocess(eval_df) test_df = preprocess(test_df) train_df.head() train_df.tail() """ Explanation: Let's process the train, eval, test set and see a small sample of the training data after our preprocessing: End of explanation """ train_df.describe() """ Explanation: Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up. 
End of explanation """ # Define columns columns = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"] # Write out CSV files train_df.to_csv( path_or_buf="train.csv", columns=columns, header=False, index=False) eval_df.to_csv( path_or_buf="eval.csv", columns=columns, header=False, index=False) test_df.to_csv( path_or_buf="test.csv", columns=columns, header=False, index=False) %%bash wc -l *.csv %%bash head *.csv %%bash tail *.csv """ Explanation: Write to .csv files In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers. End of explanation """
coolharsh55/advent-of-code
2016/python3/Day17.ipynb
mit
with open('../inputs/day17.txt', 'r') as f: path_string = f.readline().strip() TEST_DATA = ( 'ihgpwlah', 'kglvqrro', 'ulqzkmiv' ) """ Explanation: Day 17: Two Steps Forward author: Harshvardhan Pandit license: MIT link to problem statement You're trying to access a secure vault protected by a 4x4 grid of small rooms connected by doors. You start in the top-left room (marked S), and you can access the vault (marked V) once you reach the bottom-right room: ######### #S| | | # #-#-#-#-# # | | | # #-#-#-#-# # | | | # #-#-#-#-# # | | | ####### V Fixed walls are marked with #, and doors are marked with - or |. The doors in your current room are either open or closed (and locked) based on the hexadecimal MD5 hash of a passcode (your puzzle input) followed by a sequence of uppercase characters representing the path you have taken so far (U for up, D for down, L for left, and R for right). Only the first four characters of the hash are used; they represent, respectively, the doors up, down, left, and right from your current position. Any b, c, d, e, or f means that the corresponding door is open; any other character (any number or a) means that the corresponding door is closed and locked. To access the vault, all you need to do is reach the bottom-right room; reaching this room opens the vault and all doors in the maze. For example, suppose the passcode is hijkl. Initially, you have taken no steps, and so your path is empty: you simply find the MD5 hash of hijkl alone. The first four characters of this hash are ced9, which indicate that up is open (c), down is open (e), left is open (d), and right is closed and locked (9). Because you start in the top-left corner, there are no "up" or "left" doors to be open, so your only choice is down. Next, having gone only one step (down, or D), you find the hash of hijklD. This produces f2bc, which indicates that you can go back up, left (but that's a wall), or right. 
Going right means hashing hijklDR to get 5745 - all doors closed and locked. However, going up instead is worthwhile: even though it returns you to the room you started in, your path would then be DU, opening a different set of doors. After going DU (and then hashing hijklDU to get 528e), only the right door is open; after going DUR, all doors lock. (Fortunately, your actual passcode is not hijkl). Passcodes actually used by Easter Bunny Vault Security do allow access to the vault if you know the right path. For example: If your passcode were ihgpwlah, the shortest path would be DDRRRD. With kglvqrro, the shortest path would be DDUDRLRRUDRD. With ulqzkmiv, the shortest would be DRURDRUDDLLDLUURRDULRLDUUDDDRR. Given your vault's passcode, what is the shortest path (the actual path, not just the length) to reach the vault? Solution logic This is quite similar to Day 13, where we had to find a short path through the maze. This time around, whether the next point is valid or not (its door is open or closed) depends on the path taken to reach there (so like life!). At any point, whether the doors are open or not is dependent on the path taken to reach that room. Specifically, they depend on the MD5 checksum of the input string suffixed by the path, with directions abbreviated: U UP, D DOWN, L LEFT, R RIGHT. Therefore, the strategy to solve this must be the same as that used on Day 13, where we use a queue to store the paths we have traversed. There is a case that must be stated before we begin - this time, if we move back to a previous point, the doors that are open might change as the path itself has changed. So instead of a queue, we'll use a priority heap to store the paths and work only on those paths that have not been iterated yet. The priority will be the length of the path - this way, the algorithm will try to iterate over all the shorter paths first - that is, do a BFS (breadth-first-search). Another option to use instead of a heap is to use a sorted list.
Python has the bisect module that can maintain a sorted list. import bisect items = [] bisect.insort(items, element) items.pop(0) <-- smallest element Algorithm: initialize priority heap with the starting state; while heap has states: pop state <-- this will be the one with the most 'priority'; check if state is end state: if yes, print answer and exit; get all open doors: if next point is not valid, discard it; append the direction to the current path; push it back on the heap Input and Test data End of explanation """ from collections import namedtuple Point = namedtuple('Point', ('x', 'y', 'path')) """ Explanation: Point As before, having a namedtuple makes the code cleaner and easier to read. End of explanation """ import hashlib def md5(string): md5 = hashlib.md5() md5.update(string.encode('ascii')) return md5.hexdigest() """ Explanation: Checksum Calculating the checksum is simply taking the MD5 hash of the string. End of explanation """ directions = ( Point(0, -1, 'U'), Point(0, 1, 'D'), Point(-1, 0, 'L'), Point(1, 0, 'R') ) """ Explanation: Directions We maintain a tuple of points for directions. This allows the direction to match the specific character in the checksum. 1st character signals open door --> 1st element in tuple gives direction End of explanation """ start_point = Point(0, 0, path_string) end_point = Point(3, 3, None) """ Explanation: Start and End states It is very important to know what exactly is the starting and ending state of the algorithm. In this case, the ending state is the room at the very bottom right, with the co-ordinates (3, 3). Since we do not know its path, and it is not required to declare it, we use None instead. End of explanation """ def valid_point(point): if 0 <= point.x < 4 and 0 <= point.y < 4: return True """ Explanation: Validating points / moves A point is valid if it stays within the boundaries of the 4 x 4 matrix.
End of explanation """ def open_doors(checksum): def is_open(character): if character > 'a': return True return False return [ directions[index] for index, character in enumerate(checksum[:4]) if is_open(character)] """ Explanation: Checking for open doors We take the first four characters of the checksum, compare them against the given condition of being less than 10 or 'a' which signlas a closed door. Return only directions that are valid - open doors. 'a' or less --&gt; closed others --&gt; open End of explanation """ from heapq import heappush, heappop states = [] heappush(states, (len(start_point.path), start_point)) """ Explanation: Heap We declare a states list for holding the heap. The heap contains only the starting point at this time. While the length of the path string at this point is the length of the input, it is not a valid answer nor reflects the final path string. I chose to use the length of the entire string so as to avoid calculating the difference between the input string and the direction string everytime. It also makes code easier to read, IMHO. 
End of explanation """ while states: _, current_state = heappop(states) if current_state.x == end_point.x and current_state.y == end_point.y: print('answer', current_state.path[len(path_string):]) break moves = open_doors(md5(current_state.path)) for move in moves: new_path = Point( current_state.x + move.x, current_state.y + move.y, current_state.path + move.path) if not valid_point(new_path): continue heappush(states, (len(new_path.path), new_path)) """ Explanation: Run the algorithm End of explanation """ states = [(0, start_point)] paths = [] while states: _, current_state = states.pop() if current_state.x == end_point.x and current_state.y == end_point.y: # print(len(current_state.path)) paths.append(len(current_state.path)) continue moves = open_doors(md5(current_state.path)) for move in moves: new_path = Point( current_state.x + move.x, current_state.y + move.y, current_state.path + move.path) if not valid_point(new_path): continue states.append((len(new_path.path), new_path)) print('answer', max(paths) - len(path_string)) """ Explanation: Part Two You're curious how robust this security solution really is, and so you decide to find longer and longer paths which still provide access to the vault. You remember that paths always end the first time they reach the bottom-right room (that is, they can never pass through it, only end in it). For example: If your passcode were ihgpwlah, the longest path would take 370 steps. With kglvqrro, the longest path would be 492 steps long. With ulqzkmiv, the longest path would be 830 steps long. What is the length of the longest path that reaches the vault? Solution logic Now instead of finding the shortest path, we have to find the longest path. While earlier we used BFS, which was great for iterating over shorter path lengths, this time, we must use DFS (depth-first-search) to iterate over the longer branches first. This won't make much of a difference, as we have to iterate all paths anyways. 
In depth first search, instead of a heap or queue, we use a stack where we insert and remove elements from the end. This allows consecutive iteration of the last state - hence giving it depth. We store the lengths of all paths reaching the end point, and later retrieve the largest of them. Note: The paths stored are prefixed with the input, therefore for the final answer, it is necessary to subtract the length of the input string from it. End of explanation """
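As a self-contained sanity check of the approach above (a re-implementation sketch, independent of the notebook's helpers), the snippet below applies the door rule to the hijkl worked example and runs a plain breadth-first search on the first test passcode; both results match the values quoted in the problem statement.

```python
import hashlib
from collections import deque

OPEN = set('bcdef')
# Hash characters gate the doors in the order: up, down, left, right
MOVES = (('U', 0, -1), ('D', 0, 1), ('L', -1, 0), ('R', 1, 0))

def doors(passcode, path):
    # First four hex characters of md5(passcode + path)
    return hashlib.md5((passcode + path).encode('ascii')).hexdigest()[:4]

def shortest_path(passcode):
    # Plain BFS: the first path to reach the vault at (3, 3) is the shortest
    queue = deque([(0, 0, '')])
    while queue:
        x, y, path = queue.popleft()
        if (x, y) == (3, 3):
            return path
        for (step, dx, dy), ch in zip(MOVES, doors(passcode, path)):
            nx, ny = x + dx, y + dy
            if ch in OPEN and 0 <= nx < 4 and 0 <= ny < 4:
                queue.append((nx, ny, path + step))

print(doors('hijkl', ''))         # 'ced9', as in the worked example
print(shortest_path('ihgpwlah'))  # 'DDRRRD', as quoted for the first test passcode
```

Swapping `popleft()` for `pop()` turns the same loop into the depth-first enumeration used for Part Two.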
AshivDhondea/SORADSIM
example_notebooks/notebook_005_orbitpropa_sgp4_local_topo_visualization.ipynb
mit
from IPython.display import Image Image(filename='ashivorbit2017.png') # Note that this image belongs to me. I have created it myself. # Load the libraries required # These two are mine import AstroFunctions as AstFn import UnbiasedConvertedMeasurements as UCM import math import numpy as np # Libraries needed for time keeping and formatting import datetime as dt import pytz # Importing what's needed for nice plots. import matplotlib.pyplot as plt from matplotlib import rc rc('font', **{'family': 'serif', 'serif': ['Helvetica']}) rc('text', usetex=True) params = {'text.latex.preamble' : [r'\usepackage{amsmath}', r'\usepackage{amssymb}']} plt.rcParams.update(params) from mpl_toolkits.axes_grid.anchored_artists import AnchoredText import matplotlib as mpl from mpl_toolkits.mplot3d import Axes3D # Module for SGP4 orbit propagation from sgp4.earth_gravity import wgs84 from sgp4.io import twoline2rv """ Explanation: Notebook_005_orbitpropa_sgp4_local_topo_visualization.ipynb Demonstrate orbit propagation (OP) using the SGP4 method Plot the ground track Transform the trajectory into the local (topocentric) frame Example: ISS TLE file downloaded from celestrak.com on 22 August 2017 at 04:40 UTC. ISS (ZARYA) 1 25544U 98067A 17233.89654113 .00001846 00000-0 35084-4 0 9996 2 25544 51.6406 70.7550 0005080 161.3029 292.5190 15.54181235 71970 The local frame is centered on the ground station. Ground station: Cape Town (33.9249 deg S, 18.4241 deg E) @author: Ashiv Dhondea Created on 22 August 2017 Our work is based on the theory, standards and conventions adopted in the book "Fundamentals of Astrodynamics" (2013) by David Vallado. We employ the WGS84 geodetic standard. The ECI (Earth-Centered Inertial), ECEF (Earth-Centered, Earth-Fixed) and SEZ (South-East-Zenith) frames are defined in the book cited. The figure below shows a space object (SO) in orbit around the Earth. The ECEF and SEZ frames are defined in the figure below. 
The label 'Rx' denotes the receiver in a radar system. Here it denotes the location of the ground station on the surface of the Earth. The longitude $\lambda$ and geocentric latitude $\phi_\text{gc}$ define the location of a point on the surface of the WGS84 oblate spheroid. The geocentric latitude $\phi_\text{gc}$ is calculated from the geodetic latitude $\phi_\text{gd}$ which is usually quoted online (Google Earth, heavens-above etc.) In radar coordinates, aka spherical coordinates, the position of the SO with respect to the radar is expressed as $(\rho,\theta,\psi)$. The slant-range from the observer to the target is denoted by $\rho$ while the look angles, i.e., the elevation angle and azimuth angle are denoted by $\theta$ and $\psi$, respectively. It is extremely important to take note of how these two angles are defined. Different authors in different fields employ different definitions for the look angles. The azimuth angle $\psi$ is measured from the positive $x$-axis to the positive $y$-axis in this project. Vallado (2013) defines his azimuth angle from the negative $x$-axis to the positive $y$-axis. End of explanation """ # Location of observer [Cape Town, ZA] lat_station = -33.9249; # [deg] lon_station = 18.4241; # [deg] altitude_station = 0; # [m] """ Explanation: The observer or ground station is identified by its latitude, longitude and height above the Mean Sea Level (MSL). We have assumed that the ground station is at sea level below. End of explanation """ ## ISS (ZARYA) tle_line1 = '1 25544U 98067A 17233.89654113 .00001846 00000-0 35084-4 0 9996'; tle_line2 = '2 25544 51.6406 70.7550 0005080 161.3029 292.5190 15.54181235 71970'; so_name = 'ISS (ZARYA)' # Read TLE to extract Keplerians and epoch. 
a,e,i,BigOmega,omega,E,nu,epoch = AstFn.fnTLEtoKeps(tle_line1,tle_line2); # Create satellite object satellite_obj = twoline2rv(tle_line1, tle_line2, wgs84); line1 = (tle_line1); line2 = (tle_line2); # Figure out the TLE epoch year,dayy, hrs, mins, secs, millisecs = AstFn.fn_Calculate_Epoch_Time(epoch); todays_date = AstFn.fn_epoch_date(year,dayy); print "TLE epoch date is", todays_date print "UTC time = ",hrs,"h",mins,"min",secs+millisecs,"s" timestamp_tle_epoch = dt.datetime(year=todays_date.year,month=todays_date.month,day=todays_date.day,hour=hrs,minute=mins,second=secs,microsecond=int(millisecs),tzinfo= pytz.utc); """ Explanation: The Two Line Element (TLE) data for the International Space Station is obtained from celestrak.com. We want to use a recent TLE set to avoid inaccurate orbit propagation results caused by outdated initial values. End of explanation """ # Find start and end of observation window. # Find the time elapsed between the epoch of the TLE and the start and end of the observation window observation_epoch= dt.datetime(year=todays_date.year,month=todays_date.month,day=22,hour=3,minute=8,second=38,microsecond=0,tzinfo= pytz.utc); simulation_duration_dt_obj = observation_epoch - timestamp_tle_epoch; simulation_duration_secs = simulation_duration_dt_obj.total_seconds(); start_observation_epoch = dt.datetime(year=todays_date.year,month=todays_date.month,day=22,hour=3,minute=7,second=50,microsecond=0,tzinfo= pytz.utc) start_simulation_duration_dt_obj = start_observation_epoch - timestamp_tle_epoch; start_simulation_duration_secs = start_simulation_duration_dt_obj.total_seconds() """ Explanation: According to heavens-above.com, the ISS will be visible in Cape Town (SA) from 05:07:50 to 05:08:38 (local time). The local time (SAST) is 2 hours ahead of UTC. So the observation epoch is 03:07:50 to 03:08:38 UTC. End of explanation """ # Declare time and state vector variables.
delta_t = 1; #[s] print 'Propagation time step = %d' %delta_t, '[s]' duration = simulation_duration_secs; #[s] print 'Duration of simulation = %d' %duration, '[s]' timevec = np.arange(0,duration+delta_t,delta_t,dtype=np.float64); x_state_sgp4 = np.zeros([6,len(timevec)],dtype=np.float64); xecef_sgp4 = np.zeros([3,len(timevec)],dtype=np.float64); # Declare variables to store latitude and longitude values of the ground track lat_sgp4 = np.zeros([len(timevec)],dtype=np.float64); lon_sgp4 = np.zeros([len(timevec)],dtype=np.float64); # Identify the indices corresponding to start and end of the observation window obsv_window_start_index = int(start_simulation_duration_secs/delta_t) obsv_window_end_index = len(timevec) - 1; print 'observation window starts at index ' print obsv_window_start_index print 'and ends at ' print obsv_window_end_index obsv_window_duration = (obsv_window_end_index - obsv_window_start_index);# should be int, not float print 'Duration of observation window in [min]' print obsv_window_duration*delta_t/60 obsv_window_timestamps = [None]*(obsv_window_duration+1); # Initialize empty list to hold time stamps print 'Number of data points in observation window: ' print len(obsv_window_timestamps) """ Explanation: The time step used in orbit propagation does influence the accuracy of the OP results. However, we do not want to worry about this too much. For most applications, a time step of 1 second for a simulation lasting less than 24 hours should be reasonably fine. 
End of explanation """ R_SEZ = np.zeros([3,len(timevec)],dtype=np.float64); V_SEZ = np.zeros([3,len(timevec)],dtype=np.float64); x_target = np.zeros([6,len(timevec)],dtype=np.float64); # spherical measurements from the Rx y_sph_rx = np.zeros([3,len(timevec)],dtype=np.float64); index = 0; current_time = timevec[index]; hrs,mins,secs = AstFn.fnSeconds_To_Hours(current_time + (satellite_obj.epoch.hour*60*60) + (satellite_obj.epoch.minute*60)+ satellite_obj.epoch.second); dys = satellite_obj.epoch.day + int(math.ceil(hrs/24)); if hrs >= 24: hrs = hrs - 24*int(math.ceil(hrs/24)) ; satpos,satvel = satellite_obj.propagate(satellite_obj.epoch.year,satellite_obj.epoch.month,dys,hrs,mins,secs+(1e-6)*satellite_obj.epoch.microsecond); x_state_sgp4[0:3,index] = np.asarray(satpos); x_state_sgp4[3:6,index] = np.asarray(satvel); theta_GMST = math.radians(AstFn.fn_Convert_Datetime_to_GMST(timestamp_tle_epoch)); # Rotate ECI position vector by GMST angle to get ECEF position theta_GMST = AstFn.fnZeroTo2Pi(theta_GMST); xecef_sgp4[:,index] = AstFn.fnECItoECEF(x_state_sgp4[0:3,index],theta_GMST); lat_sgp4[index],lon_sgp4[index] = AstFn.fnCarts_to_LatLon(xecef_sgp4[:,index]); for index in range(1,len(timevec)): # Find the current time current_time = timevec[index]; hrs,mins,secs = AstFn.fnSeconds_To_Hours(current_time + (timestamp_tle_epoch.hour*60*60) + (timestamp_tle_epoch.minute*60)+ timestamp_tle_epoch.second); dys = timestamp_tle_epoch.day + int(math.ceil(hrs/24)); if hrs >= 24: hrs = hrs - 24*int(math.ceil(hrs/24)) ; # SGP4 propagation satpos,satvel = satellite_obj.propagate(satellite_obj.epoch.year,satellite_obj.epoch.month,dys,hrs,mins,secs+(1e-6)*satellite_obj.epoch.microsecond); x_state_sgp4[0:3,index] = np.asarray(satpos); x_state_sgp4[3:6,index] = np.asarray(satvel); # From the epoch, find the GMST angle. 
tle_epoch_test = dt.datetime(year=timestamp_tle_epoch.year,month=timestamp_tle_epoch.month,day=int(dys),hour=int(hrs),minute=int(mins),second=int(secs),microsecond=0,tzinfo= pytz.utc); theta_GMST = math.radians(AstFn.fn_Convert_Datetime_to_GMST(tle_epoch_test)); # Rotate ECI position vector by GMST angle to get ECEF position theta_GMST = AstFn.fnZeroTo2Pi(theta_GMST); xecef_sgp4[:,index] = AstFn.fnECItoECEF(x_state_sgp4[0:3,index],theta_GMST); lat_sgp4[index],lon_sgp4[index] = AstFn.fnCarts_to_LatLon(xecef_sgp4[:,index]); if index >= obsv_window_start_index: # We store away timestamps for the observation window current_time_iso = tle_epoch_test.isoformat() + 'Z' obsv_window_timestamps[index-obsv_window_start_index] =current_time_iso; # We find the position and velocity vector for the target in the local frame. # We then create the measurement vector consisting of range and look angles to the target. R_ECI = x_state_sgp4[0:3,index]; V_ECI = x_state_sgp4[3:6,index]; R_SEZ[:,index] = AstFn.fnRAZEL_Cartesian(math.radians(lat_station),math.radians(lon_station),altitude_station,R_ECI,theta_GMST); R_ECEF = AstFn.fnECItoECEF(R_ECI,theta_GMST); V_SEZ[:,index] = AstFn.fnVel_ECI_to_SEZ(V_ECI,R_ECEF,math.radians(lat_station),math.radians(lon_station),theta_GMST); x_target[:,index] = np.hstack((R_SEZ[:,index],V_SEZ[:,index])); # state vector in SEZ frame # Calculate range and angles for system modelling. y_sph_rx[:,index] = UCM.fnCalculate_Spherical(R_SEZ[:,index]); # slant-range and look angles wrt to Rx """ Explanation: Perform Orbit Propagation (OP) by calling an SGP4 method. 
End of explanation """ %matplotlib inline title_string = str(timestamp_tle_epoch.isoformat())+ 'Z/'+str(obsv_window_timestamps[-1]); coastline_data= np.loadtxt('Coastline.txt',skiprows=1) w, h = plt.figaspect(0.5) fig = plt.figure(figsize=(w,h)) ax = fig.gca() plt.rc('text', usetex=True) plt.rc('font', family='serif'); plt.rc('font',family='helvetica'); params = {'legend.fontsize': 8, 'legend.handlelength': 2} plt.rcParams.update(params) fig.suptitle(r"\textbf{%s ground track over the interval %s}" %(so_name,title_string),fontsize=12) plt.plot(coastline_data[:,0],coastline_data[:,1],'g'); ax.set_xlabel(r'Longitude $[\mathrm{^\circ}]$',fontsize=12) ax.set_ylabel(r'Latitude $[\mathrm{^\circ}]$',fontsize=12) plt.xlim(-180,180); plt.ylim(-90,90); plt.yticks([-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90]); plt.xticks([-180,-150,-120,-90,-60,-30,0,30,60,90,120,150,180]); plt.plot(math.degrees(lon_sgp4[0]),math.degrees(lat_sgp4[0]),'yo',markersize=5,label=timestamp_tle_epoch.isoformat() + 'Z'); for index in range(1,obsv_window_start_index-1): plt.plot(math.degrees(lon_sgp4[index]),math.degrees(lat_sgp4[index]),'b.',markersize=1); plt.annotate(r'%s' %obsv_window_timestamps[0], xy=(math.degrees(lon_sgp4[index+1]),math.degrees(lat_sgp4[index+1])), xycoords='data', xytext=(math.degrees(lon_sgp4[index+1])-40,math.degrees(lat_sgp4[index+1])+30), arrowprops=dict(facecolor='black',shrink=0.05,width=0.1,headwidth=2)) for index in range(obsv_window_start_index,obsv_window_end_index+1): plt.plot(math.degrees(lon_sgp4[index]),math.degrees(lat_sgp4[index]),color='crimson',marker='.',markersize=1); plt.plot(math.degrees(lon_sgp4[index]),math.degrees(lat_sgp4[index]),'mo',markersize=5,label=current_time_iso); plt.annotate(r'%s' %obsv_window_timestamps[obsv_window_duration], xy=(math.degrees(lon_sgp4[index]),math.degrees(lat_sgp4[index])), xycoords='data', xytext=(math.degrees(lon_sgp4[index])+10,math.degrees(lat_sgp4[index])+5), arrowprops=dict(facecolor='black', 
shrink=0.05,width=0.1,headwidth=2) ) ax.grid(True); plt.plot(lon_station,lat_station,marker='.',color='gray'); # station lat lon ax.annotate(r'Cape Town', (18, -36)); at = AnchoredText("AshivD",prop=dict(size=5), frameon=True,loc=4) at.patch.set_boxstyle("round,pad=0.,rounding_size=0.2") ax.add_artist(at) fig.savefig('notebook_005_orbitpropa_sgp4_local_topo_visualization_groundtrack.pdf',format='pdf',bbox_inches='tight',pad_inches=0.01,dpi=100); """ Explanation: Plot the ground track. This is an equirectangular projection, by the way. I have created this ground track plot from a MATLAB ground track plotter developed by a Mr. Richard Rieber, found here End of explanation """ from IPython.display import Image Image(filename='site_geometry_ashivd.png') """ Explanation: Radar people usually only deal with targets moving in a local frame. This is where they do their target motion analysis, radar geometry analysis, radar system design and whatnot. For a space object problem, we have to transform the scenario into the local/topocentric frame, which is shown below. 
End of explanation """ title_string_obsv = str(obsv_window_timestamps[0])+'/'+str(obsv_window_timestamps[-1]); fig = plt.figure(2); plt.rc('text', usetex=True) plt.rc('font', family='serif'); plt.rc('font',family='helvetica'); params = {'legend.fontsize': 8, 'legend.handlelength': 2} plt.rcParams.update(params); ax = fig.gca(projection='3d'); plt.hold(True) fig.suptitle(r"\textbf{%s track in the local (topocentric) frame over %s}" %(so_name,title_string_obsv),fontsize=12) ax.plot(R_SEZ[0,obsv_window_start_index:obsv_window_end_index+1],R_SEZ[1,obsv_window_start_index:obsv_window_end_index+1],R_SEZ[2,obsv_window_start_index:obsv_window_end_index+1],label=r'Target trajectory'); ax.scatter(0,0,0,c='darkgreen',marker='o'); ax.text(0,0,0,r'ground station',color='k') ax.legend(); ax.set_xlabel(r'x [$\mathrm{km}$]') ax.set_ylabel(r'y [$\mathrm{km}$]') ax.set_zlabel(r'z [$\mathrm{km}$]') fig.savefig('notebook_005_orbitpropa_sgp4_local_topo_visualization_SEZtrack.pdf',format='pdf',bbox_inches='tight',pad_inches=0.08,dpi=100); """ Explanation: People who merely want to catch a glimpse of the ISS and not worry about radar stuff, here's a quick explanation. When you are standing outside, the horizontal plane with respect to your eyes is the $x-y$ plane labelled as $x_\text{SEZ}, y_\text{SEZ}$ in the figure above. $x$ points in the direction of the South and $y$, the East. Therefore $z$ is straight up skywards. The figure below shows the ISS's trajectory in the observer's frame. 
End of explanation """ f, axarr = plt.subplots(3,sharex=True); plt.rc('text', usetex=True) plt.rc('font', family='serif'); plt.rc('font',family='helvetica'); f.suptitle(r"\textbf{Radar observation vectors from Cape Town to %s over %s}" %(so_name,title_string_obsv),fontsize=12) axarr[0].plot(timevec[obsv_window_start_index:obsv_window_end_index+1],y_sph_rx[0,obsv_window_start_index:obsv_window_end_index+1]) axarr[0].set_ylabel(r'$\rho$'); axarr[0].set_title(r'Slant-range $\rho [\mathrm{km}]$') axarr[1].plot(timevec[obsv_window_start_index:obsv_window_end_index+1],np.degrees(y_sph_rx[1,obsv_window_start_index:obsv_window_end_index+1])) axarr[1].set_title(r'Elevation angle $\theta~[\mathrm{^\circ}]$') axarr[1].set_ylabel(r'$\theta$'); axarr[2].plot(timevec[obsv_window_start_index:obsv_window_end_index+1],np.degrees(y_sph_rx[2,obsv_window_start_index:obsv_window_end_index+1])) axarr[2].set_title(r'Azimuth angle $\psi~[\mathrm{^\circ}]$') axarr[2].set_ylabel(r'$ \psi$'); axarr[2].set_xlabel(r'Time $t~[\mathrm{s}$]'); axarr[0].grid(True,which='both',linestyle=(0,[0.7,0.7]),lw=0.4,color='black') axarr[1].grid(True,which='both',linestyle=(0,[0.7,0.7]),lw=0.4,color='black') axarr[2].grid(True,which='both',linestyle=(0,[0.7,0.7]),lw=0.4,color='black') at = AnchoredText(r"$\Delta_t = %f ~\mathrm{s}$" %delta_t,prop=dict(size=6), frameon=True,loc=4) at.patch.set_boxstyle("round,pad=0.05,rounding_size=0.2") axarr[2].add_artist(at) # Fine-tune figure; hide x ticks for top plots and y ticks for right plots plt.setp([a.get_xticklabels() for a in axarr[0:2]], visible=False) plt.subplots_adjust(hspace=0.4) f.savefig('notebook_005_orbitpropa_sgp4_local_topo_visualization_radarvec.pdf',bbox_inches='tight',pad_inches=0.05,dpi=100) """ Explanation: The figure shown below illustrates the 3 components of the radar vector during the ISS's passage. 
End of explanation """ from IPython.display import Image Image(filename='heavens_above_iss_passage.png') """ Explanation: The top plot shows the evolution of the slant-range to the target, $\rho$, over time. The middle plot shows that the elevation angle is about $16^\circ$ above the horizon at the beginning of the observation window. The target disappears at about $10^\circ$ above the horizon at the end of the observation window. The target's azimuth angle is always about $120^\circ$, which is roughly in the direction of the North-East (NE), according to the azimuth angle defined in the previous figures. These values match closely to the ones found on the heavens-above page. Here's a screenshot End of explanation """
mathLab/RBniCS
tutorials/17_navier_stokes/tutorial_navier_stokes_1_deim.ipynb
lgpl-3.0
from ufl import transpose from dolfin import * from rbnics import * """ Explanation: Tutorial 17 - Navier Stokes equations Keywords: DEIM, supremizer operator 1. Introduction In this tutorial, we will study the Navier-Stokes equations over the two-dimensional backward-facing step domain $\Omega$ shown below: <img src="data/backward_facing_step.png" width="80%"/> A Poiseuille flow profile is imposed on the inlet boundary, and a no-flow (zero velocity) condition is imposed on the walls. A homogeneous Neumann condition of the Cauchy stress tensor is applied at the outflow boundary. The inflow velocity boundary condition is characterized by $$\boldsymbol{u}(\boldsymbol{x};\mu)=\mu\bigg\{\frac{1}{2.25}(x_1-2)(5-x_1),0\bigg\} \quad \forall \boldsymbol{x}=(x_0,x_1) \in \Omega$$ This problem is characterized by one parameter $\mu$, which controls the inlet velocity. The range of $\mu$ is the following: $$\mu \in [1.0, 80.0].$$ Thus, the parameter domain is $$\mathbb{P}=[1.0,80.0].$$ In order to obtain a faster approximation of the problem, we pursue a model reduction by means of a POD-Galerkin reduced order method. 2. Parametrized formulation Let $\boldsymbol{u}(\mu)$ be the velocity vector and $p(\mu)$ be the pressure in the domain $\Omega$.
We will directly provide a weak formulation for this problem: <center>for a given parameter $\mu \in \mathbb{P},$ find $\boldsymbol{u}(\mu) \in \mathbb{V}(\mu), \; p \in\mathbb{M}$ such that </center> <center> $ \begin{cases} \nu \int_{\Omega} \nabla \boldsymbol{u} : \nabla \boldsymbol{v} \ d\Omega + \int_{\Omega} [(\boldsymbol{u} \cdot \nabla) \boldsymbol{u}] \cdot \boldsymbol{v} \ d\Omega - \int_{\Omega} p \nabla \cdot \boldsymbol{v} \ d\Omega = \int_{\Omega} \boldsymbol{f} \cdot \boldsymbol{v} \ d\Omega, \quad \forall \boldsymbol{v} \in\mathbb{V}, \\ \int_{\Omega} q \nabla \cdot \boldsymbol{u} \ d\Omega = 0, \quad \forall q \in\mathbb{M} \end{cases} $ </center> where $\nu$ represents the kinematic viscosity; the functional space $\mathbb{V}(\mu)$ is defined as $\mathbb{V}=[H^1_{\Gamma_{wall}}(\Omega)]^2$; the functional space $\mathbb{M}(\mu)$ is defined as $\mathbb{M}=L^2(\Omega)$ Since this problem uses a mixed finite element discretization with the velocity and pressure as solution variables, the inf-sup condition is necessary for the well-posedness of this problem. Thus, the supremizer operator $T^{\mu}: \mathbb{M}_h \rightarrow \mathbb{V}_h$ will be used. End of explanation """ @DEIM("online", basis_generation="Greedy") @ExactParametrizedFunctions("offline") class NavierStokes(NavierStokesProblem): # Default initialization of members def __init__(self, V, **kwargs): # Call the standard initialization NavierStokesProblem.__init__(self, V, **kwargs) # ... and also store FEniCS data structures for assembly assert "subdomains" in kwargs assert "boundaries" in kwargs self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"] dup = TrialFunction(V) (self.du, self.dp) = split(dup) (self.u, _) = split(self._solution) vq = TestFunction(V) (self.v, self.q) = split(vq) self.dx = Measure("dx")(subdomain_data=self.subdomains) self.ds = Measure("ds")(subdomain_data=self.boundaries) self.inlet = Expression(("1.
/ 2.25 * (x[1] - 2) * (5 - x[1])", "0."), degree=2) self.f = Constant((0.0, 0.0)) self.g = Constant(0.0) # Customize nonlinear solver parameters self._nonlinear_solver_parameters.update({ "linear_solver": "mumps", "maximum_iterations": 20, "report": True }) # Return custom problem name def name(self): return "NavierStokesDEIM1" # Return theta multiplicative terms of the affine expansion of the problem. @compute_theta_for_derivatives @compute_theta_for_supremizers def compute_theta(self, term): mu = self.mu if term == "a": theta_a0 = 1. return (theta_a0,) elif term in ("b", "bt"): theta_b0 = 1. return (theta_b0,) elif term == "c": theta_c0 = 1. return (theta_c0,) elif term == "f": theta_f0 = 1. return (theta_f0,) elif term == "g": theta_g0 = 1. return (theta_g0,) elif term == "dirichlet_bc_u": theta_bc00 = mu[0] return (theta_bc00,) else: raise ValueError("Invalid term for compute_theta().") # Return forms resulting from the discretization of the affine expansion of the problem operators. 
@assemble_operator_for_derivatives @assemble_operator_for_supremizers def assemble_operator(self, term): dx = self.dx if term == "a": u = self.du v = self.v a0 = inner(grad(u) + transpose(grad(u)), grad(v)) * dx return (a0,) elif term == "b": u = self.du q = self.q b0 = - q * div(u) * dx return (b0,) elif term == "bt": p = self.dp v = self.v bt0 = - p * div(v) * dx return (bt0,) elif term == "c": u = self.u v = self.v c0 = inner(grad(u) * u, v) * dx return (c0,) elif term == "f": v = self.v f0 = inner(self.f, v) * dx return (f0,) elif term == "g": q = self.q g0 = self.g * q * dx return (g0,) elif term == "dirichlet_bc_u": bc0 = [DirichletBC(self.V.sub(0), self.inlet, self.boundaries, 1), DirichletBC(self.V.sub(0), Constant((0.0, 0.0)), self.boundaries, 2)] return (bc0,) elif term == "inner_product_u": u = self.du v = self.v x0 = inner(grad(u), grad(v)) * dx return (x0,) elif term == "inner_product_p": p = self.dp q = self.q x0 = inner(p, q) * dx return (x0,) else: raise ValueError("Invalid term for assemble_operator().") # Customize the resulting reduced problem @CustomizeReducedProblemFor(NavierStokesProblem) def CustomizeReducedNavierStokes(ReducedNavierStokes_Base): class ReducedNavierStokes(ReducedNavierStokes_Base): def __init__(self, truth_problem, **kwargs): ReducedNavierStokes_Base.__init__(self, truth_problem, **kwargs) self._nonlinear_solver_parameters.update({ "report": True, "line_search": "wolfe" }) return ReducedNavierStokes """ Explanation: 3. Affine Decomposition End of explanation """ mesh = Mesh("data/backward_facing_step.xml") subdomains = MeshFunction("size_t", mesh, "data/backward_facing_step_physical_region.xml") boundaries = MeshFunction("size_t", mesh, "data/backward_facing_step_facet_region.xml") """ Explanation: 4. Main program 4.1. Read the mesh for this problem The mesh was generated by the data/generate_mesh.ipynb notebook. 
End of explanation
"""
element_u = VectorElement("Lagrange", mesh.ufl_cell(), 2)
element_p = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
element = MixedElement(element_u, element_p)
V = FunctionSpace(mesh, element, components=[["u", "s"], "p"])
"""
Explanation: 4.2. Create Finite Element Space (Taylor-Hood P2-P1)
End of explanation
"""
problem = NavierStokes(V, subdomains=subdomains, boundaries=boundaries)
mu_range = [(1.0, 80.0)]
problem.set_mu_range(mu_range)
"""
Explanation: 4.3. Allocate an object of the NavierStokes class
End of explanation
"""
reduction_method = PODGalerkin(problem)
reduction_method.set_Nmax(10, DEIM=20)
"""
Explanation: 4.4. Prepare reduction with a POD-Galerkin method
End of explanation
"""
lifting_mu = (1.0,)
problem.set_mu(lifting_mu)
reduction_method.initialize_training_set(100, DEIM=144, sampling=EquispacedDistribution())
reduced_problem = reduction_method.offline()
"""
Explanation: 4.5. Perform the offline phase
End of explanation
"""
online_mu = (10.0,)
reduced_problem.set_mu(online_mu)
reduced_solution = reduced_problem.solve()
plot(reduced_solution, reduced_problem=reduced_problem, component="u")
plot(reduced_solution, reduced_problem=reduced_problem, component="p")
"""
Explanation: 4.6. Perform an online solve
End of explanation
"""
reduction_method.initialize_testing_set(16, DEIM=25, sampling=EquispacedDistribution())
reduction_method.error_analysis()
"""
Explanation: 4.7. Perform an error analysis
End of explanation
"""
reduction_method.speedup_analysis()
"""
Explanation: 4.8. Perform a speedup analysis
End of explanation
"""
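A note on the supremizer operator $T^{\mu}$ mentioned in the weak formulation above: its explicit definition is not spelled out in this tutorial, so the following is a sketch of the standard construction used in inf-sup-stable reduced-order models (an assumption drawn from the usual reduced basis literature, not from this notebook). For each pressure function $q \in \mathbb{M}_h$, the supremizer $T^{\mu} q \in \mathbb{V}_h$ is the Riesz representer of the pressure-velocity coupling form $b(\boldsymbol{v}, q; \mu) = -\int_{\Omega} q \, \nabla \cdot \boldsymbol{v} \, d\Omega$ appearing in the momentum equation:

```latex
\begin{aligned}
&\text{find } T^{\mu} q \in \mathbb{V}_h:\quad
(T^{\mu} q, \boldsymbol{v})_{\mathbb{V}} = b(\boldsymbol{v}, q; \mu)
= -\int_{\Omega} q \, \nabla \cdot \boldsymbol{v} \, d\Omega
\quad \forall \boldsymbol{v} \in \mathbb{V}_h, \\
&\text{so that}\quad
\beta_h(\mu)
= \inf_{q \in \mathbb{M}_h} \sup_{\boldsymbol{v} \in \mathbb{V}_h}
\frac{b(\boldsymbol{v}, q; \mu)}{\|\boldsymbol{v}\|_{\mathbb{V}} \, \|q\|_{\mathbb{M}}}
= \inf_{q \in \mathbb{M}_h}
\frac{\|T^{\mu} q\|_{\mathbb{V}}}{\|q\|_{\mathbb{M}}} .
\end{aligned}
```

Enriching the reduced velocity space with the supremizers $T^{\mu} q$ is what keeps the reduced saddle-point problem inf-sup stable, which is why the `@compute_theta_for_supremizers` and `@assemble_operator_for_supremizers` decorators appear in the problem class.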
iutzeler/Introduction-to-Python-for-Data-Sciences
3-2_Dataframes.ipynb
mit
import numpy as np
import pandas as pd
"""
Explanation: <table>
<tr>
<td width=15%><img src="./img/UGA.png"></img></td>
<td><center><h1>Introduction to Python for Data Sciences</h1></center></td>
<td width=15%><a href="http://www.iutzeler.org" style="font-size: 16px; font-weight: bold">Franck Iutzeler</a> </td>
</tr>
</table>
<br/><br/>
<center><a style="font-size: 40pt; font-weight: bold">Chap. 3 - Data Handling with Pandas </a></center>
<br/><br/>
2- Dataframes Operations
End of explanation
"""
df = pd.DataFrame(np.random.randint(0, 10, (3, 4)), columns=['A', 'B', 'C', 'D'])
df
np.cos(df * np.pi/2 ) - 1
"""
Explanation: Numpy operations
If we apply a NumPy function on a Pandas dataframe, the result will be another Pandas dataframe with the indices preserved.
End of explanation
"""
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A
B = pd.DataFrame(np.random.randint(0, 10, (3, 3)), columns=list('BAC'))
B
A+B
"""
Explanation: Arithmetic operations
Arithmetic operations can also be performed either with <tt>+ - / *</tt> or with the dedicated <tt>add</tt>, <tt>multiply</tt>, etc. methods.
End of explanation
"""
A.add(B, fill_value=0.0)
"""
Explanation: The pandas arithmetic functions also have an option to fill missing values by replacing the missing one in either of the dataframes by some value.
End of explanation
"""
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB'))
print("A:\n",A,"\nA2:\n",A2)
A.append(A2) # this does not "append to A" but creates a new dataframe
"""
Explanation: Appending, Concatenating, and Merging
Thanks to naming, dataframes can be easily added, merged, etc. However, if some entries are missing (columns or indices), the operations may get complicated.
Here the most standard situations are covered; take a look at the documentation (notably this one on merging, appending, and concatenating).
Appending is for adding the lines of one dataframe to another one with the same columns.
End of explanation
"""
A.append(A2,ignore_index=True)
"""
Explanation: Sometimes, indexes do not matter; they can be reset using <tt>ignore_index=True</tt>.
End of explanation
"""
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB'))
A3 = pd.DataFrame(np.random.randint(0, 20, (1, 3)), columns=list('CAD'))
print("A:\n",A,"\nA2:\n",A2,"\nA3:\n",A3)
"""
Explanation: Concatenating is for adding lines and/or columns of multiple datasets (it is a generalization of appending).
End of explanation
"""
pd.concat([A,A2,A3],ignore_index=True)
pd.concat([A,A2,A3],axis=1)
pd.concat([A,A2,A3],axis=1,ignore_index=True,join='inner')
"""
Explanation: The most important settings of the <tt>concat</tt> function are <tt>pd.concat(objs, axis=0, join='outer', ignore_index=False)</tt> where <br/>
. objs is the list of dataframes to concatenate <br/>
. axis is the axis on which to concatenate: 0 (default) for the lines and 1 for the columns <br/>
. join decides if we keep all columns/indices on the other axis ('outer', default) or only their intersection ('inner') <br/>
. ignore_index decides if we keep the previous names (False, default) or give new ones (True)
For a detailed view see this doc on merging, appending, and concatenating.
End of explanation
"""
df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'],
                    'group': ['Accounting', 'Engineering', 'Engineering', 'HR']})
df1
df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'],
                    'hire_date': [2004, 2008, 2012, 2014]})
df2
df3 = pd.merge(df1,df2)
df3
df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'],
                    'supervisor': ['Carly', 'Guido', 'Steve']})
df4
pd.merge(df3,df4)
"""
Explanation: Merging is for putting together two dataframes with hopefully common data.
For a detailed view see this doc on merging, appending, and concatenating.
End of explanation
"""
ratings = pd.read_csv('data/ml-small/ratings_mess.csv')
ratings.head(7) # displays the top n lines of a dataframe, 5 by default
"""
Explanation: Preparing the Data
Before exploring the data, it is primordial to verify its soundness; indeed, if it has missing or replicated data, the results of our tests may not be accurate. Pandas provides a collection of methods to verify the sanity of the data (recall that when data is missing for an entry, it is noted as NaN, and thus any further operation including it will be NaN).
To explore some typical problems in a dataset, I messed with a small part of the MovieLens dataset. The ratings_mess.csv file contains 4 columns:
* userId: id of the user, integer greater than 1
* movieId: id of the movie, integer greater than 1
* rating: rating of the user for the movie, float between 0.0 and 5.0
* timestamp: timestamp, integer
and features (man-made!) errors, some of them minor, some of them major.
End of explanation
"""
ratings.isnull().head(5)
"""
Explanation: Missing values
Pandas provides functions that check if the values are missing:
isnull(): Generate a boolean mask indicating missing values
notnull(): Opposite of isnull()
End of explanation
"""
ratings.dropna(subset=["userId","movieId","rating"],inplace=True)
ratings.head(5)
"""
Explanation: Carefully pruning data
Now that we have to prune lines of our data, this will be done using dropna() through dataframe.dropna(subset=["col_1","col_2"],inplace=True), which drops all rows with at least one missing value in the columns col_1, col_2 of dataframe, in place, that is, without a copy.
Warning: this function deletes any line with at least one missing value, which is not always desirable. Also, with inplace=True, it is applied in place, meaning that it modifies the dataframe it is applied to; it is thus an irreversible operation. Drop inplace=True to create a copy or to see the result before applying it.
For instance here, userId, movieId, and rating are essential whereas the timestamp is not (it can be dropped for the prediction process). Thus, we will delete the lines where one of userId, movieId, rating is missing and fill the timestamp with 0 when it is missing.
End of explanation
"""
ratings["timestamp"].fillna(0,inplace=True)
ratings.head(7)
"""
Explanation: To fill missing data (from a certain column), the recommended way is to use fillna() through dataframe["col"].fillna(value,inplace=True), which replaces all missing values in the column col of dataframe by value, in place, that is, without a copy (again this is irreversible; to use the copy version, set inplace=False).
End of explanation
"""
ratings.reset_index(inplace=True,drop=True)
ratings.head(7)
"""
Explanation: This indeed gives the correct result; however, the line indexing now has missing numbers.
The indexes can be reset with reset_index(inplace=True, drop=True).
End of explanation
"""
ratings[ratings["userId"]<1] # Identifying a problem
"""
Explanation: Improper values
Even without the missing values, some lines are problematic as they feature values outside of the prescribed ranges (userId: id of the user, integer greater than 1; movieId: id of the movie, integer greater than 1; rating: rating of the user for the movie, float between 0.0 and 5.0; timestamp: timestamp, integer).
End of explanation
"""
ratings.drop(ratings[ratings["userId"]<1].index, inplace=True)
ratings.head(7)
pb_rows = ratings[ratings["movieId"]<1]
pb_rows
ratings.drop(pb_rows.index, inplace=True)
"""
Explanation: Now, we drop the corresponding lines with drop(problematic_rows.index, inplace=True).
Warning: Do not forget .index and inplace=True.
End of explanation
"""
pb_rows = ratings[ratings["rating"]<0]
pb_rows2 = ratings[ratings["rating"]>5]
tot_pb_rows = pb_rows.append(pb_rows2)
tot_pb_rows
ratings.drop(tot_pb_rows.index, inplace=True)
ratings.reset_index(inplace=True,drop=True)
"""
Explanation: And finally the ratings.
End of explanation
"""
ratings.to_csv("data/ml-small/ratings_cured.csv",index=False)
"""
Explanation: We finally have our dataset cured! Let us save it for further use. to_csv saves as CSV into some file; index=False drops the index names as we did not specify them.
End of explanation
"""
ratings = pd.read_csv('data/ml-small/ratings_cured.csv')
ratings.head()
"""
Explanation: Basic Statistics
With our cured dataset, we can begin exploring.
End of explanation
"""
ratings.describe()
"""
Explanation: The following table summarizes some other built-in Pandas aggregations:
| Aggregation | Description |
|--------------------------|---------------------------------|
| count() | Total number of items |
| first(), last() | First and last item |
| mean(), median() | Mean and median |
| min(), max() | Minimum and maximum |
| std(), var() | Standard deviation and variance |
| mad() | Mean absolute deviation |
| prod() | Product of all items |
| sum() | Sum of all items |
These are all methods of DataFrame and Series objects, and describe() also provides a quick overview.
End of explanation
"""
ratings.drop("timestamp",axis=1,inplace=True)
ratings.head()
ratings["rating"].describe()
"""
Explanation: We see that these statistics do not make sense for all columns. Let us drop the timestamp and examine the ratings.
End of explanation
"""
ratings.head()
"""
Explanation: GroupBy
These ratings are linked to users and movies; in order to have a separate view per user/movie, grouping has to be used. The GroupBy operation (that comes from SQL) accomplishes the following:
The split step involves breaking up and grouping a DataFrame depending on the value of the specified key.
The apply step involves computing some function, usually a sum, median, mean, etc., within the individual groups.
The combine step merges the results of these operations into an output array.
<img src="img/GroupBy.png">
<p style="text-align: right">Source: [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas</p>
End of explanation
"""
ratings.groupby("userId")["rating"].mean()
"""
Explanation: So to get the mean of the ratings per user, the command is
End of explanation
"""
ratings.groupby("userId")["rating"].count()
def filter_func(x):
    return x["rating"].count() >= 2
filtered = ratings.groupby("userId").filter(filter_func)
filtered
filtered.groupby("userId")["rating"].count()
"""
Explanation: Filtering
Filtering is the action of deleting rows depending on a boolean function. For instance, the following removes the users with a rating of only one movie.
End of explanation
"""
ratings.groupby("userId")["rating"].mean()
def center_ratings(x):
    x["rating"] = x["rating"] - x["rating"].mean()
    return x
centered = ratings.groupby("userId").apply(center_ratings)
centered.groupby("userId")["rating"].mean()
"""
Explanation: Transformations
Transforming is the action of applying a transformation. For instance, let us normalize the ratings so that they have zero mean for each user.
End of explanation
"""
ratings.groupby("userId")["rating"].aggregate([min,max,np.mean,np.median,len])
"""
Explanation: Aggregations [*]
Aggregations let you aggregate several operations at once.
End of explanation
"""
import pandas as pd
import numpy as np
ratings_bots = pd.read_csv('data/ml-small/ratings_bots.csv')
"""
Explanation: Exercises
Exercise: Bots Discovery
In the dataset ratings_bots.csv, some users may be bots. To help a movie's success they add ratings (often favorable ones). To get a better recommendation, we try to remove them.
Count the users with a mean rating above 4.7/5 and delete them.
hint: the nunique function may be helpful to count
Delete multiple reviews of a movie by a single user by replacing them with only the first one. What is the proportion of potential bots among the users?
hint: the groupby function can be applied to several columns; also, reset_index(drop=True) removes the groupby indexing.
hint: remember the loc function, e.g. df.loc[df['userId'] == 128] returns a dataframe of the rows where the userId is 128; and df.loc[df['userId'] == 128].loc[samerev['movieId'] == 3825] returns a dataframe of the rows where the userId is 128 and the movieId is 3825.
In total, 17 ratings have to be removed. For instance, user 128 has 3 ratings of the movie 3825. This dataset has around 100 000 ratings, so hand picking won't do!
End of explanation
"""
import pandas as pd
import numpy as np
planets = pd.read_csv('data/planets.csv')
print(planets.shape)
planets.head()
"""
Explanation: Exercise: Planets discovery
We will use the Planets dataset, available via the Seaborn package. It provides information on how astronomers found new planets around stars, exoplanets.
Display median, mean, and quantile information for these planets' orbital periods, masses, and distances.
For each method, display statistics on the years planets were discovered using this technique.
End of explanation
"""
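The split-apply-combine patterns used throughout this notebook (aggregate, filter, transform) can be condensed into one self-contained toy example that runs without the MovieLens files; the tiny frame below is invented for illustration and is not the ratings data itself:

```python
import pandas as pd

# Tiny stand-in for the ratings table used above
toy = pd.DataFrame({
    "userId": [1, 1, 2, 2, 2, 3],
    "rating": [5.0, 3.0, 4.0, 4.5, 2.5, 1.0],
})

# Aggregate: one row of statistics per user
stats = toy.groupby("userId")["rating"].agg(["count", "mean"])

# Filter: keep only the users with at least two ratings
frequent = toy.groupby("userId").filter(lambda g: g["rating"].count() >= 2)

# Transform: center each user's ratings around zero
centered = toy["rating"] - toy.groupby("userId")["rating"].transform("mean")
```

The same three verbs (agg, filter, transform) are exactly what the bots-discovery exercise above needs, just applied to the real ratings_bots.csv table.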
chengsoonong/crowdastro
notebooks/35_classifier_analysis.ipynb
mit
import csv
import sys

import astropy.wcs
import h5py
import matplotlib.pyplot as plot
import numpy
import sklearn.metrics
import sklearn.neighbors  # needed below for the KDTree

sys.path.insert(1, '..')
import crowdastro.train

CROWDASTRO_H5_PATH = '../data/crowdastro.h5'
CROWDASTRO_CSV_PATH = '../crowdastro.csv'
TRAINING_H5_PATH = '../data/training.h5'
ARCMIN = 1 / 60

%matplotlib inline
"""
Explanation: Classifier analysis
In this notebook, I find the precision&ndash;recall and ROC curves of classifiers, and look at some examples of where the classifiers do really well (and really poorly).
End of explanation
"""
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
    with h5py.File(TRAINING_H5_PATH) as training_h5:
        classifier, astro_t, image_t = crowdastro.train.train(
                crowdastro_h5, training_h5, '../data/classifier.pkl',
                '../data/astro_transformer.pkl', '../data/image_transformer.pkl',
                classifier='lr')
        testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
        all_astro_inputs = astro_t.transform(training_h5['astro'].value)
        all_cnn_inputs = image_t.transform(training_h5['cnn_outputs'].value)
        all_inputs = numpy.hstack([all_astro_inputs, all_cnn_inputs])
        all_labels = training_h5['labels'].value

inputs = all_inputs[testing_indices]
labels = all_labels[testing_indices]
probs = classifier.predict_proba(inputs)

precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, probs[:, 1])
plot.plot(recall, precision)
plot.xlabel('Recall')
plot.ylabel('Precision')
plot.show()

fpr, tpr, _ = sklearn.metrics.roc_curve(labels, probs[:, 1])
plot.plot(fpr, tpr)
plot.xlabel('False positive rate')
plot.ylabel('True positive rate')

print('Accuracy: {:.02%}'.format(classifier.score(inputs, labels)))
"""
Explanation: Logistic regression
Precision&ndash;recall and ROC curves
End of explanation
"""
max_margin = float('-inf')
max_index = None
max_swire = None

with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
    with h5py.File(TRAINING_H5_PATH) as training_h5:
        classifier, astro_t, image_t = crowdastro.train.train(
                crowdastro_h5, training_h5, '../classifier.pkl',
                '../astro_transformer.pkl', '../image_transformer.pkl',
                classifier='lr')
        testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
        swire_positions = crowdastro_h5['/swire/cdfs/catalogue'][:, :2]
        atlas_positions = crowdastro_h5['/atlas/cdfs/positions'].value
        all_astro_inputs = training_h5['astro'].value
        all_cnn_inputs = training_h5['cnn_outputs'].value
        all_labels = training_h5['labels'].value

        swire_tree = sklearn.neighbors.KDTree(swire_positions, metric='chebyshev')

        simple = True
        if simple:
            atlas_counts = {}  # ATLAS ID to number of objects in that subject.
            for consensus in crowdastro_h5['/atlas/cdfs/consensus_objects']:
                atlas_id = int(consensus[0])
                atlas_counts[atlas_id] = atlas_counts.get(atlas_id, 0) + 1
            indices = []
            for atlas_id, count in atlas_counts.items():
                if count == 1 and atlas_id in testing_indices:
                    indices.append(atlas_id)
            indices = numpy.array(sorted(indices))
            atlas_positions = atlas_positions[indices]
            print('Found %d simple subjects.' % len(atlas_positions))
        else:
            atlas_positions = atlas_positions[testing_indices]
            print('Found %d subjects.' % len(atlas_positions))

        # Test each ATLAS subject.
        n_correct = 0
        n_total = 0
        for atlas_index, pos in enumerate(atlas_positions):
            neighbours, distances = swire_tree.query_radius([pos], ARCMIN, return_distance=True)
            neighbours = neighbours[0]
            distances = distances[0]
            astro_inputs = all_astro_inputs[neighbours]
            astro_inputs[:, -1] = distances
            cnn_inputs = all_cnn_inputs[neighbours]
            labels = all_labels[neighbours]
            features = []
            features.append(astro_t.transform(astro_inputs))
            features.append(image_t.transform(cnn_inputs))
            inputs = numpy.hstack(features)
            outputs = classifier.predict_proba(inputs)[:, 1]
            assert len(labels) == len(outputs)
            index = outputs.argmax()
            correct = labels[index] == 1
            if not correct:
                outputs.sort()
                margin = outputs[-1] - outputs[-2]
                if margin > max_margin:
                    max_margin = margin
                    max_index = atlas_index
                    max_swire = swire_positions[index]

with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
    plot.imshow(crowdastro_h5['/atlas/cdfs/images_2x2'][max_index])
    swire = crowdastro_h5['/atlas/cdfs/consensus_objects'][max_index][1]
    pos = crowdastro_h5['/swire/cdfs/catalogue'][swire][:2]
    with open(CROWDASTRO_CSV_PATH) as c_csv:
        r = csv.DictReader(c_csv)
        header = [a for a in r if int(a['index']) == max_index][0]['header']
    wcs = astropy.wcs.WCS(header)
    (x, y), = wcs.wcs_world2pix([pos], 1)
    print(x, y)
"""
Explanation: Confident, but wrong classifications
End of explanation
"""
with h5py.File(CROWDASTRO_H5_PATH) as crowdastro_h5:
    with h5py.File(TRAINING_H5_PATH) as training_h5:
        classifier, astro_t, image_t = crowdastro.train.train(
                crowdastro_h5, training_h5, '../classifier.pkl',
                '../astro_transformer.pkl', '../image_transformer.pkl',
                classifier='rf')
        testing_indices = crowdastro_h5['/atlas/cdfs/testing_indices'].value
        all_astro_inputs = astro_t.transform(training_h5['astro'].value)
        all_cnn_inputs = image_t.transform(training_h5['cnn_outputs'].value)
        all_inputs = numpy.hstack([all_astro_inputs, all_cnn_inputs])
        all_labels = training_h5['labels'].value

inputs = all_inputs[testing_indices]
labels = all_labels[testing_indices]
probs = classifier.predict_proba(inputs)

precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, probs[:, 1])
plot.plot(recall, precision)
plot.xlabel('Recall')
plot.ylabel('Precision')
plot.show()

fpr, tpr, _ = sklearn.metrics.roc_curve(labels, probs[:, 1])
plot.plot(fpr, tpr)
plot.xlabel('False positive rate')
plot.ylabel('True positive rate')

print('Accuracy: {:.02%}'.format(classifier.score(inputs, labels)))
"""
Explanation: Random forests
End of explanation
"""
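The precision&ndash;recall curves computed above by sklearn can be demystified with a tiny hand-rolled version; the sketch below uses synthetic labels and scores (not the crowdastro data) and computes one (precision, recall) point per ranked prediction, which is conceptually what sklearn.metrics.precision_recall_curve does at each threshold:

```python
import numpy as np

def precision_recall_points(labels, scores):
    """Precision and recall after each prediction, ranked by decreasing score."""
    order = np.argsort(-np.asarray(scores))
    ranked = np.asarray(labels)[order]
    tp = np.cumsum(ranked)          # true positives among the top-k predictions
    fp = np.cumsum(1 - ranked)      # false positives among the top-k predictions
    precision = tp / (tp + fp)
    recall = tp / ranked.sum()
    return precision, recall

labels = np.array([1, 0, 1, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])
precision, recall = precision_recall_points(labels, scores)
```

Plotting recall against precision from this helper gives the same kind of staircase curve as the figures above, just without sklearn's extra threshold bookkeeping.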
remenska/iSDM
notebooks/Demo-Climate-DEM.ipynb
apache-2.0
import logging
root = logging.getLogger()
root.addHandler(logging.StreamHandler())

import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Reading and manipulating Climate data layers
Just some logging/plotting magic for output in this notebook, nothing to care about.
End of explanation
"""
from iSDM.environment import ClimateLayer
worldclim_max_june = ClimateLayer(file_path="../data/tmax1/tmax6.bil")
worldclim_max_june.load_data()
"""
Explanation: Load raster data (downloaded from WorldClim) on maximum temperature per month. Loading the data also prints some useful information on the number of raster bands, coordinate reference system, driver, affine transformation, height/width, and bounding box. The availability of this metadata depends on the content of the header file associated with the raster data.
End of explanation
"""
from iSDM.environment import Source
worldclim_max_june.set_source(Source.WORLDCLIM)
"""
Explanation: We can set the source of this data (as additional metadata).
End of explanation
"""
worldclim_max_june = ClimateLayer(file_path="../data/tmax1/tmax6.bil", source=Source.WORLDCLIM)
worldclim_max_june.load_data()
"""
Explanation: This can also be done at the same time while creating the ClimateLayer. Always load the data before reading it.
End of explanation
"""
worldclim_data = worldclim_max_june.read()  # read all bands
worldclim_data
worldclim_data.shape  # 3600 rows, 8640 columns, 1 band
"""
Explanation: To read the actual data, use .read() with an optional parameter specifying a particular band (or layer). To read all layers into one 3D array data structure:
End of explanation
"""
worldclim_data = worldclim_max_june.read(1)  # read band 1
worldclim_data.shape  # now it's a flat matrix data structure, for that particular band
worldclim_data
"""
Explanation: To read a particular band, pass its index.
End of explanation
"""
worldclim_max_june.metadata['nodata']  # this returns the "nodata" value used for cells with no data.
"""
Explanation: To access metadata values of the Climate layer we created above, use:
End of explanation
"""
worldclim_max_june.close_dataset()
"""
Explanation: Close the dataset to release memory, once you are done.
End of explanation
"""
worldclim_max_june.read(1)  # read band 1
"""
Explanation: Trying to read a closed dataset ...
End of explanation
"""
worldclim_max_june.load_data()
"""
Explanation: The worldclim_max_june Climate layer instance remembers the file from which it loaded data last time, so you can simply load it again.
End of explanation
"""
worldclim_max_june.load_data("../data/tmax1/tmax1.bil")  # let's take January instead of June
worldclim_max_june.resolution  # some more information...
"""
Explanation: ... unless of course you decide to change the file from which to load:
End of explanation
"""
worldclim_max_june.mynote = "Random metadata note"
worldclim_max_june.mynote
worldclim_data = worldclim_max_june.read(1)  # read the data again, since we loaded a new dataset (January)
"""
Explanation: Python even allows you to freely add properties to this layer object:
End of explanation
"""
worldclim_data.max()  # Units = deg C * 10
worldclim_data.min()
"""
Explanation: Going back to the band data we loaded, let's find the minimum and maximum values:
End of explanation
"""
worldclim_data[worldclim_data!=-9999].min()
"""
Explanation: Hm, the minimum is not a real value, but the "nodata" value.
We want to ignore it when computing a true minimum:
End of explanation
"""
worldclim_data[worldclim_data!=worldclim_max_june.metadata['nodata']].min()
"""
Explanation: ...which is equivalent to:
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 15))
value_min = worldclim_data[worldclim_data!=-9999].min()
value_max = worldclim_data.max()
ax.imshow(worldclim_data, cmap="coolwarm", vmin=value_min, vmax=value_max)
"""
Explanation: Plotting the raster data is just a bit more involved:
End of explanation
"""
type(worldclim_data)
"""
Explanation: Remember, we only stored the actual pixel values in worldclim_data. It is just a numpy array.
End of explanation
"""
worldclim_max_june.reproject(destination_file="../data/tmax1/tmax1_lower_res.bil", resolution=3.0)
"""
Explanation: Let's resample the data at a lower resolution.
End of explanation
"""
worldclim_max_june.load_data("../data/tmax1/tmax1_lower_res.bil")
worldclim_data_low_res = worldclim_max_june.read(1)
worldclim_data_low_res.shape  # height, width
"""
Explanation: The reprojected data is stored in a separate destination_file in order not to overwrite the original data (by the way, do we want that?). Let's load that data now. Notice the resolution and the height/width.
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 15))
value_min = worldclim_data_low_res[worldclim_data_low_res!=-9999].min()
value_max = worldclim_data_low_res.max()
ax.imshow(worldclim_data_low_res, cmap="coolwarm", interpolation="none", vmin=value_min, vmax=value_max)
"""
Explanation: Plot the data again. Notice it is "pixelized", i.e., not as smooth, because of the lower resolution.
End of explanation
"""
from rasterio.warp import RESAMPLING
worldclim_max_june.load_data("../data/tmax1/tmax1.bil")
worldclim_max_june.reproject(destination_file="../data/tmax1/tmax1_lower_res.bil", resolution=3.0, resampling=RESAMPLING.cubic_spline)
"""
Explanation: There are several algorithms for resampling the data at a different resolution.
These are based on the GDAL implementation: nearest (default), bilinear, cubic, cubicspline, lanczos, average, mode. To use a different resampling algorithm than the default:
End of explanation
"""
worldclim_max_june.load_data("../data/tmax1/tmax1_lower_res.bil")
worldclim_data_low_res = worldclim_max_june.read(1)
fig, ax = plt.subplots(figsize=(15, 15))
value_min = worldclim_data_low_res[worldclim_data_low_res!=-9999].min()
value_max = worldclim_data_low_res.max()
ax.imshow(worldclim_data_low_res, cmap="coolwarm", interpolation="none", vmin=value_min, vmax=value_max)
"""
Explanation: Load the data again, and visualize it.
End of explanation
"""
from iSDM.environment import DEMLayer
dem_layer = DEMLayer(file_path="../data/alt_5m_bil/alt.bil")  # altitude data from http://www.worldclim.org/current
dem_layer.load_data()
from iSDM.environment import Source
dem_layer.set_source(Source.WORLDCLIM)
dem_data = dem_layer.read()  # read all bands
dem_data = dem_layer.read(1)  # read band 1
dem_data.shape  # == height x width
dem_layer.metadata['nodata']
dem_data.max()  # Units = meters
dem_data[dem_data!=-9999].min()
"""
Explanation: The original coordinate reference system is 'epsg:4326'. The target coordinate system may be any of the usual GDAL/OGR forms: complete WKT, PROJ.4, EPSG:n. For more complex examples of reprojection based on new bounds, dimensions, and resolution, see the rasterio docs: https://github.com/mapbox/rasterio/blob/master/docs/reproject.rst and the gdal tutorial: http://www.geos.ed.ac.uk/~smudd/TopoTutorials/html/tutorial_raster_conversion.html (rasterio works on top of GDAL), as well as the command line interface: https://github.com/mapbox/rasterio/blob/master/docs/cli.rst#warp
DEM layers have the same functionality as climate (raster data) layers.
End of explanation
"""
fig, ax = plt.subplots(figsize=(15, 15))
value_min = dem_data[dem_data!=-9999].min()
value_max = dem_data.max()
ax.imshow(dem_data, cmap="seismic", vmin=value_min, vmax=value_max)  # or cmap="terrain"?
"""
Explanation: Plot the raster (always calculate the vmin/vmax).
End of explanation
"""
dem_layer.reproject(destination_file="../data/alt_5m_bil/downscaled.bil", resolution=2.0)
"""
Explanation: Reproject to a lower resolution (the value is the pixel size, in terms of the units - degrees in this case).
End of explanation
"""
dem_layer.load_data("../data/alt_5m_bil/downscaled.bil")
dem_low_res = dem_layer.read(1)
fig, ax = plt.subplots(figsize=(15, 15))
value_min = dem_low_res[dem_low_res!=-9999].min()
value_max = dem_low_res.max()
ax.imshow(dem_low_res, cmap="seismic", interpolation="none", vmin=value_min, vmax=value_max)
"""
Explanation: Load the projection (shall we make an "overwrite" option to avoid loading every time?) to visualize it.
End of explanation
"""
dem_layer.load_data("../data/alt_5m_bil/alt.bil")
dem_layer.reproject(destination_file="../data/alt_5m_bil/cropped.bil", left=-50, bottom=42, right=-76, top=54)
"""
Explanation: Load back the original data and crop it with a bounding box.
End of explanation
"""
dem_layer.load_data("../data/alt_5m_bil/cropped.bil")
dem_cropped = dem_layer.read(1)
fig, ax = plt.subplots(figsize=(15, 15))
value_min = dem_cropped[dem_cropped!=-9999].min()
value_max = dem_cropped.max()
ax.imshow(dem_cropped, cmap="seismic", interpolation="none", vmin=value_min, vmax=value_max)
"""
Explanation: Load the cropped data to visualize it (notice the BoundingBox values of the loaded data; they match the values we specified above).
End of explanation
"""
dem_layer.reproject(destination_file="../data/alt_5m_bil/cropped_downscaled.bil", resolution=1.0)
dem_layer.load_data("../data/alt_5m_bil/cropped_downscaled.bil")
dem_cropped_downscaled = dem_layer.read(1)
fig, ax = plt.subplots(figsize=(15, 15))
value_min = dem_cropped_downscaled[dem_cropped_downscaled!=-9999].min()
value_max = dem_cropped_downscaled.max()
ax.imshow(dem_cropped_downscaled, cmap="seismic", interpolation="none", vmin=value_min, vmax=value_max)
"""
Explanation: Lower the resolution of the cropped area.
End of explanation
"""
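The nodata handling done by hand above (`data[data != -9999]`) can also be expressed with a NumPy masked array, which skips nodata in statistics while keeping the 2-D shape intact for imshow; this is a generic NumPy sketch on an invented mini-raster, not part of the iSDM API:

```python
import numpy as np

NODATA = -9999

# A tiny synthetic raster band with two nodata cells
band = np.array([[NODATA, 120, 135],
                 [110, NODATA, 128]])

# Masked view: min/max/mean ignore the nodata cells, shape is preserved
masked = np.ma.masked_equal(band, NODATA)
value_min, value_max = masked.min(), masked.max()
```

Passing the masked array (and these vmin/vmax values) straight to ax.imshow avoids the flattening that `band[band != NODATA]` causes.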
bryanwweber/thermostate
docs/Plot-Tutorial.ipynb
bsd-3-clause
from thermostate import State, Q_, units
from thermostate.plotting import IdealGas, VaporDome
"""
Explanation: Plotting Tutorial
This tutorial acts as a guide to the plotting classes in ThermoState. It is designed to ease the creation of simple plots of thermodynamic states and processes for a variety of common substances. In order to use the plotting classes, a basic knowledge of ThermoState is required. Tutorial.py may be used as a reference if any usage of ThermoState in this tutorial is unfamiliar to you.
The evaluation of the states and properties will be handled by ThermoState, so we must start by importing the parts of ThermoState that enable this. Our second import brings in the two ThermoState plotting classes, IdealGas and VaporDome.
End of explanation
"""
substance = 'water'
Ideal = IdealGas(substance, ('s', 'T'))
Vapor = VaporDome(substance, ('s', 'T'))
"""
Explanation: Plot Creation
With the plotting classes imported, we can begin creating our plots. The syntax for instantiating a class is VaporDome(substance, (x_axis, y_axis)) or IdealGas(substance, (x_axis, y_axis)) respectively. The tuple of axes is optional, and more than one tuple may be entered if more than one plot is desired. The available axes to plot are T (temperature), p (pressure), v (mass-specific volume), u (mass-specific internal energy), h (mass-specific enthalpy), and s (mass-specific entropy), and they must be entered as strings.
The substance is required and may be any ThermoState accepted substance enumerated in the list below: water air R134a R22 propane ammonia isobutane carbondioxide oxygen nitrogen End of explanation """ Vapor = VaporDome(substance, ('s', 'T')) Vapor.plot('v','p') T_1 = Q_(560.0, 'degC') p_1 = Q_(16.0, 'MPa') T_2 = Q_(50.0, 'degC') p_2 = Q_(16.0, 'MPa') st_1 = State(substance, T=T_1, p=p_1, label = 1) st_2 = State(substance, T=T_2, p=p_2) Vapor.add_state(st_1) Vapor.add_state(st_2, label = "2") """ Explanation: The VaporDome and IdealGas classes are functionally the same with one major difference: the VaporDome class will overlay a vapor dome on whatever plot is created. For the remainder of the tutorial we will use VaporDome to illustrate the capabilities of both classes. A plot must be created before any states or processes can be added to the class instance. Plots may be created with the initialization of the class as shown above or may be added using the plot function. The syntax for the plot function is plot(x-axis, y-axis) where the axes follow the same rules as before. States may be added to the plot with the add_state function. The syntax for the add_state function is add_state(state, key = None, label = None) where state may be any ThermoState State object and both key and label default to None. The key input may be assigned a string which can be used to later remove the state from the graph if desired. The label input may be assigned a string or integer which is then assigned to the label variable of the ThermoState State object. The add_state function will retrieve the value of the appropriate property for each axis with the proper units and plot them as a point on the previously created graphs. When the label attribute of the State object is not None, this point on all previously created graphs is labeled with this variable.
On all future graphs with this state, the point representing the state will be labeled with this variable as well until the variable is changed. End of explanation """ Vapor = VaporDome(substance, ('s', 'T')) Vapor.plot('v','p') T_1 = Q_(560.0, 'degC') p_1 = Q_(16.0, 'MPa') T_2 = Q_(50.0, 'degC') p_2 = Q_(16.0, 'MPa') st_1 = State(substance, T=T_1, p=p_1) st_2 = State(substance, T=T_2, p=p_2, label = "2") Vapor.add_process(st_1, st_2, 'isobaric', label_1 = "1") """ Explanation: The add_process function can connect graphed states together and trace out every point between the states while holding one property constant. The syntax for the function is add_process(state_1, state_2, process_type, label_1 = None, label_2 = None). The states do not have to be added to the graph prior to invoking this function. New states will be automatically added to the plot. One, both, or no labels can be assigned. The accepted process types are as follows: isochoric isovolumetric isobaric isothermal isoenergetic isoenthalpic isentropic End of explanation """ Vapor = VaporDome(substance, ('s', 'T')) Vapor.plot('v','p') T_1 = Q_(560.0, 'degC') p_1 = Q_(14.0, 'MPa') T_2 = Q_(75.0, 'degC') p_2 = Q_(14.0, 'MPa') st_1 = State(substance, T=T_1, p=p_1) st_2 = State(substance, T=T_2, p=p_2) Vapor.add_state(st_1) Vapor.add_state(st_2) Vapor.remove_state(st_2) """ Explanation: Both states and processes can be removed using the remove_state and remove_process functions respectively. The syntax for remove_state is remove_state(state = None, key = None). Either the state that is to be removed or its associated key must be input to the function to mark it for removal.
End of explanation """ Vapor = VaporDome(substance, ('s', 'T')) Vapor.plot('v','p') T_1 = Q_(560.0, 'degC') p_1 = Q_(14.0, 'MPa') T_2 = Q_(75.0, 'degC') p_2 = Q_(14.0, 'MPa') T_3 = Q_(500.0, 'K') v_3 = Q_(.1, "m**3/kg") T_4 = Q_(300.0, 'K') v_4 = Q_(.1, "m**3/kg") st_1 = State(substance, T=T_1, p=p_1) st_2 = State(substance, T=T_2, p=p_2) st_3 = State(substance, T=T_3, v=v_3) st_4 = State(substance, T=T_4, v=v_4) Vapor.add_process(st_1, st_2, 'isobaric') Vapor.add_process(st_3, st_4, 'isochoric') Vapor.remove_process(st_1, st_2, True) Vapor.remove_process(st_3, st_4) """ Explanation: The remove_process function is meant to delete the specified process from the graphs. Its syntax is remove_process(state_1, state_2, remove_states=False). The states must be entered to specify which process is slated to be removed from the graphs. The remove_states input, if True, will remove both states from the graphs. If left as False, as is the default, then only the process line will be removed from the graphs. End of explanation """ Vapor = VaporDome(substance, ('s', 'T')) Vapor.plot('v','p') T_1 = Q_(560.0, 'degC') p_1 = Q_(14.0, 'MPa') T_2 = Q_(75.0, 'degC') p_2 = Q_(14.0, 'MPa') T_3 = Q_(500.0, 'K') v_3 = Q_(.1, "m**3/kg") T_4 = Q_(300.0, 'K') v_4 = Q_(.1, "m**3/kg") st_1 = State(substance, T=T_1, p=p_1) st_2 = State(substance, T=T_2, p=p_2) st_3 = State(substance, T=T_3, v=v_3) st_4 = State(substance, T=T_4, v=v_4) Vapor.add_process(st_1, st_2, 'isobaric') Vapor.add_process(st_3, st_4, 'isochoric') Vapor.set_yscale('s', 'T', 'log') Vapor.set_xscale('v', 'p', 'linear') """ Explanation: The scale of the axes for 'p' and 'v' is logarithmic by default, and for all other axis types it is linear by default. This can be overridden using the set_xscale and set_yscale functions. The syntax for these functions is set_xscale(self, x_axis, y_axis, scale="linear") and set_yscale(self, x_axis, y_axis, scale="linear") respectively.
The x_axis and y_axis inputs specify which graph you want to alter the axis scale for. End of explanation """
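The key/label bookkeeping that add_state and remove_state provide can be sketched with a minimal stdlib-only registry (the MiniPlot class below is a hypothetical stand-in, not the real VaporDome implementation): states are stored under an optional user key so they can be removed later.

```python
class MiniPlot:
    """Toy stand-in for the state bookkeeping in VaporDome/IdealGas."""
    def __init__(self):
        self._states = {}   # key -> {"point": (x, y), "label": ...}
        self._auto = 0      # fallback counter for unkeyed states

    def add_state(self, point, key=None, label=None):
        if key is None:
            key = "_auto%d" % self._auto
            self._auto += 1
        self._states[key] = {"point": point, "label": label}
        return key

    def remove_state(self, key):
        del self._states[key]

p = MiniPlot()
p.add_state((3.5, 560.0), key="st1", label="1")   # keyed, labeled state
k2 = p.add_state((0.7, 50.0))                      # unkeyed state
p.remove_state(k2)                                 # removed by its key
```

This mirrors the tutorial's rule that a key is only needed if you plan to remove the state later, while the label only affects how the point is annotated.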
anhaidgroup/py_entitymatching
notebooks/guides/step_wise_em_guides/Performing Blocking Using Built-In Blockers (Sorted Neighborhood Blocker).ipynb
bsd-3-clause
# Import py_entitymatching package import py_entitymatching as em import os import pandas as pd """ Explanation: Contents Introduction Block Using the Sorted Neighborhood Blocker Block Tables to Produce a Candidate Set of Tuple Pairs Handling Missing Values Window Size Stable Sort Order Sorted Neighborhood Blocker Limitations Introduction <font color='red'>WARNING: The sorted neighborhood blocker is still experimental and has not been fully tested yet. Use this blocker at your own risk.</font> Blocking is typically done to reduce the number of tuple pairs considered for matching. Several blocking methods have been proposed. The py_entitymatching package supports a subset of such blocking methods (#ref to what is supported). One such supported blocker is the sorted neighborhood blocker. This IPython notebook illustrates how to perform blocking using the sorted neighborhood blocker. Note that the sorted neighborhood technique is often used on a single table; in this case we have implemented sorted neighborhood blocking between two tables. We first tag each tuple with whether it comes from the left table or the right table. Then we merge the tables. At this point we perform sorted neighborhood blocking, which is to pass a sliding window of window_size (default 2) across the merged dataset. Within the sliding window, all tuple pairs that have one tuple from the left table and one tuple from the right table are returned.
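The tag-merge-sort-slide procedure just described can be sketched in a few lines of plain Python (a toy illustration with made-up record and field names, not the py_entitymatching implementation):

```python
def sorted_neighborhood(left, right, key, window_size=2):
    """Toy sorted-neighborhood blocker: return cross-table candidate pairs."""
    # Tag each tuple with its table of origin, then merge.
    merged = [("L", t["id"], t[key]) for t in left] + \
             [("R", t["id"], t[key]) for t in right]
    # Sort on (blocking key, id); the id tie-breaker keeps the order stable.
    merged.sort(key=lambda rec: (rec[2], rec[1]))
    pairs = set()
    # Slide a window across the sorted list and keep left-right pairs.
    for i in range(len(merged) - window_size + 1):
        window = merged[i:i + window_size]
        for side_a, id_a, _ in window:
            for side_b, id_b, _ in window:
                if side_a == "L" and side_b == "R":
                    pairs.add((id_a, id_b))
    return pairs

left = [{"id": "a1", "year": 1984}, {"id": "a2", "year": 1990}]
right = [{"id": "b1", "year": 1985}, {"id": "b2", "year": 1975}]
cand = sorted_neighborhood(left, right, "year", window_size=2)
```

With window_size=2 on these four records, the sorted order is b2, a1, b1, a2, so the candidate pairs are (a1, b2), (a1, b1), and (a2, b1); widening the window can only add pairs, never remove them.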
First, we need to import the py_entitymatching package and other libraries as follows: End of explanation """ # Get the datasets directory datasets_dir = em.get_install_path() + os.sep + 'datasets' # Get the paths of the input tables path_A = datasets_dir + os.sep + 'person_table_A.csv' path_B = datasets_dir + os.sep + 'person_table_B.csv' # Read the CSV files and set 'ID' as the key attribute A = em.read_csv_metadata(path_A, key='ID') B = em.read_csv_metadata(path_B, key='ID') A.head() B.head() """ Explanation: Then, read the input tables from the datasets directory End of explanation """ # Instantiate the sorted neighborhood blocker object sn = em.SortedNeighborhoodBlocker() """ Explanation: Block Using the Sorted Neighborhood Blocker Once the tables are read, we can do blocking using the sorted neighborhood blocker. With the sorted neighborhood blocker, you can only block between two tables to produce a candidate set of tuple pairs. Block Tables to Produce a Candidate Set of Tuple Pairs End of explanation """ # Use block_tables to apply blocking over two input tables. C1 = sn.block_tables(A, B, l_block_attr='birth_year', r_block_attr='birth_year', l_output_attrs=['name', 'birth_year', 'zipcode'], r_output_attrs=['name', 'birth_year', 'zipcode'], l_output_prefix='l_', r_output_prefix='r_', window_size=3) # Display the candidate set of tuple pairs C1.head() """ Explanation: For the given two tables, we sort the merged tuples on birth_year and apply sorted neighborhood blocking with a window size of 3. That is, only tuple pairs whose birth_year values place them within the same sliding window are included in the candidate set. End of explanation """ # Show the metadata of C1 em.show_properties(C1) id(A), id(B) """ Explanation: Note that the tuple pairs in the candidate set have nearby birth_year values (they co-occurred in at least one sliding window). The attributes included in the candidate set are based on l_output_attrs and r_output_attrs mentioned in the block_tables command (the key columns are included by default).
Specifically, the list of attributes mentioned in l_output_attrs are picked from table A and the list of attributes mentioned in r_output_attrs are picked from table B. The attributes in the candidate set are prefixed based on l_output_prefix and r_ouptut_prefix parameter values mentioned in block_tables command. End of explanation """ # Introduce some missing values A1 = em.read_csv_metadata(path_A, key='ID') A1.ix[0, 'zipcode'] = pd.np.NaN A1.ix[0, 'birth_year'] = pd.np.NaN A1 # Use block_tables to apply blocking over two input tables. C2 = sn.block_tables(A1, B, l_block_attr='zipcode', r_block_attr='zipcode', l_output_attrs=['name', 'birth_year', 'zipcode'], r_output_attrs=['name', 'birth_year', 'zipcode'], l_output_prefix='l_', r_output_prefix='r_', allow_missing=True) # setting allow_missing parameter to True len(C1), len(C2) C2 """ Explanation: Note that the metadata of C1 includes key, foreign key to the left and right tables (i.e A and B) and pointers to left and right tables. Handling Missing Values If the input tuples have missing values in the blocking attribute, then they are ignored by default. This is because, including all possible tuple pairs with missing values can significantly increase the size of the candidate set. But if you want to include them, then you can set allow_missing paramater to be True. End of explanation """ C3 = sn.block_tables(A, B, l_block_attr='birth_year', r_block_attr='birth_year', l_output_attrs=['name', 'birth_year', 'zipcode'], r_output_attrs=['name', 'birth_year', 'zipcode'], l_output_prefix='l_', r_output_prefix='r_', window_size=5) len(C1) len(C3) """ Explanation: The candidate set C2 includes all possible tuple pairs with missing values. Window Size A tunable parameter to the Sorted Neighborhood Blocker is the Window size. To perform the same result as above with a larger window size is via the window_size argument. Note that it has more results than C1. 
End of explanation """ A["birth_year_plus_id"]=A["birth_year"].map(str)+'-'+A["ID"].map(str) B["birth_year_plus_id"]=B["birth_year"].map(str)+'-'+A["ID"].map(str) C3 = sn.block_tables(A, B, l_block_attr='birth_year_plus_id', r_block_attr='birth_year_plus_id', l_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'], r_output_attrs=['name', 'birth_year_plus_id', 'birth_year', 'zipcode'], l_output_prefix='l_', r_output_prefix='r_', window_size=5) C3.head() """ Explanation: Stable Sort Order One final challenge for the Sorted Neighborhood Blocker is making the sort order stable. If the column being sorted on has multiple identical keys, and those keys are longer than the window size, then different results may occur between runs. To always guarantee the same results for every run, make sure to make the sorting column unique. One method to do so is to append the id of the tuple onto the end of the sorting column. Here is an example. End of explanation """
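The composite key shown above works because the id suffix breaks ties whenever the year part is identical, so the sort order (and hence the window contents) is the same on every run. A stdlib-only illustration with made-up ids (note that each table should build the key from its own ID column):

```python
records = [
    {"ID": "a3", "birth_year": 1984},
    {"ID": "a1", "birth_year": 1984},  # same year as a3: the ID breaks the tie
    {"ID": "a2", "birth_year": 1975},
]
# Composite "year-id" sort key, mirroring birth_year_plus_id above.
keyed = sorted(records, key=lambda r: "%d-%s" % (r["birth_year"], r["ID"]))
order = [r["ID"] for r in keyed]
```

Because the composite key is unique per record, the result no longer depends on the input order, which is exactly the reproducibility guarantee the notebook is after.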
Vvkmnn/books
ThinkBayes/07_Prediction.ipynb
gpl-3.0
def EvalPoissonPmf(k, lam): return (lam)**k * math.exp(-lam) / math.factorial(k) """ Explanation: Prediction The Boston Bruins problem In the 2010-11 National Hockey League (NHL) Finals, my beloved Boston Bruins played a best-of-seven championship series against the despised Vancouver Canucks. Boston lost the first two games 0-1 and 2-3, then won the next two games 8-1 and 4-0. At this point in the series, what is the probability that Boston will win the next game, and what is their probability of winning the championship? As always, to answer a question like this, we need to make some assumptions. First, it is reasonable to believe that goal scoring in hockey is at least approximately a Poisson process, which means that it is equally likely for a goal to be scored at any time during a game. Second, we can assume that against a particular opponent, each team has some long-term average goals per game, denoted $\lambda$. Given these assumptions, my strategy for answering this question is Use statistics from previous games to choose a prior distribution for $\lambda$. Use the score from the first four games to estimate $\lambda$ for each team. Use the posterior distributions of $\lambda$ to compute distribution of goals for each team, the distribution of the goal differential, and the probability that each team wins the next game. Compute the probability that each team wins the series. To choose a prior distribution, I got some statistics from http://www.nhl.com, specifically the average goals per game for each team in the 2010-11 season. The distribution is roughly Gaussian with mean 2.8 and standard deviation 0.3. The Gaussian distribution is continuous, but we’ll approximate it with a discrete Pmf. 
thinkbayes provides MakeGaussianPmf to do exactly that: ```python def MakeGaussianPmf(mu, sigma, num_sigmas, n=101): pmf = Pmf() low = mu - num_sigmas*sigma high = mu + num_sigmas*sigma for x in numpy.linspace(low, high, n): p = scipy.stats.norm.pdf(x, mu, sigma) pmf.Set(x, p) pmf.Normalize() return pmf ``` mu and sigma are the mean and standard deviation of the Gaussian distribution. num_sigmas is the number of standard deviations above and below the mean that the Pmf will span, and n is the number of values in the Pmf. Again we use numpy.linspace to make an array of n equally spaced values between low and high, including both. norm.pdf evaluates the Gaussian probability density function (PDF). Getting back to the hockey problem, here's the definition for a suite of hypotheses about the value of $\lambda$. ```python class Hockey(thinkbayes.Suite): def __init__(self): pmf = thinkbayes.MakeGaussianPmf(mu=2.7, sigma=0.3, num_sigmas=4) thinkbayes.Suite.__init__(self, pmf) ``` So the prior distribution is Gaussian with mean 2.7, standard deviation 0.3, and it spans 4 sigmas above and below the mean. As always, we have to decide how to represent each hypothesis; in this case I represent the hypothesis that $\lambda=x$ with the floating-point value x. Poisson processes In mathematical statistics, a process is a stochastic model of a physical system ("stochastic" means that the model has some kind of randomness in it). For example, a Bernoulli process is a model of a sequence of events, called trials, in which each trial has two possible outcomes, like success and failure. So a Bernoulli process is a natural model for a series of coin flips, or a series of shots on goal. A Poisson process is the continuous version of a Bernoulli process, where an event can occur at any point in time with equal probability. Poisson processes can be used to model customers arriving in a store, buses arriving at a bus stop, or goals scored in a hockey game.
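Before leaning on that model, the Poisson PMF defined at the top of the chapter can be sanity-checked with a few lines of stdlib-only Python (a standalone sketch, independent of thinkbayes): the probabilities should sum to 1 and the mean should come out to lam.

```python
import math

def eval_poisson_pmf(k, lam):
    # Same formula as EvalPoissonPmf above.
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 2.7  # roughly the league-average goals per game used for the prior
ks = range(50)  # 50 terms is plenty; the truncated tail is negligible
probs = [eval_poisson_pmf(k, lam) for k in ks]
total = sum(probs)
mean = sum(k * p for k, p in zip(ks, probs))
```

Both identities hold to floating-point precision, which is a quick way to catch an off-by-one or a sign error in the formula.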
In many real systems the probability of an event changes over time. Customers are more likely to go to a store at certain times of day, buses are supposed to arrive at fixed intervals, and goals are more or less likely at different times during a game. But all models are based on simplifications, and in this case modeling a hockey game with a Poisson process is a reasonable choice. Heuer, Müller and Rubner (2010) analyze scoring in a German soccer league and come to the same conclusion; see http://www.cimat.mx/Eventos/vpec10/img/poisson.pdf. The benefit of using this model is that we can compute the distribution of goals per game efficiently, as well as the distribution of time between goals. Specifically, if the average number of goals in a game is lam, the distribution of goals per game is given by the Poisson PMF: End of explanation """ def EvalExponentialPdf(x, lam): return lam * math.exp(-lam * x) """ Explanation: And the distribution of time between goals is given by the exponential PDF: End of explanation """ from hockey import * import thinkplot suite1 = Hockey('bruins') suite1.UpdateSet([0, 2, 8, 4]) suite2 = Hockey('canucks') suite2.UpdateSet([1, 3, 1, 0]) thinkplot.PrePlot(num=2) thinkplot.Pmf(suite1) thinkplot.Pmf(suite2) """ Explanation: I use the variable lam because lambda is a reserved keyword in Python. Both of these functions are in thinkbayes.py. The posteriors [fig.hockey1] Now we can compute the likelihood that a team with a hypothetical value of lam scores k goals in a game: ```python class Hockey def Likelihood(self, data, hypo): lam = hypo k = data like = thinkbayes.EvalPoissonPmf(k, lam) return like ``` Each hypothesis is a possible value of $\lambda$; data is the observed number of goals, k. With the likelihood function in place, we can make a suite for each team and update them with the scores from the first four games. 
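The head-to-head comparison this chapter builds toward can also be sketched with stdlib Python only (hypothetical helper names; the real computation mixes over the posterior distribution of lam rather than plugging in point estimates):

```python
import math

def poisson_pmf(lam, high=20):
    """Truncated Poisson PMF as a dict {k: probability}."""
    return {k: lam ** k * math.exp(-lam) / math.factorial(k)
            for k in range(high + 1)}

def prob_greater(pmf1, pmf2):
    """P(score1 > score2) for independent score distributions."""
    return sum(p1 * p2
               for k1, p1 in pmf1.items()
               for k2, p2 in pmf2.items()
               if k1 > k2)

bruins = poisson_pmf(2.9)    # point estimates near the posterior modes
canucks = poisson_pmf(2.6)
p_win = prob_greater(bruins, canucks)
p_loss = prob_greater(canucks, bruins)
p_tie = 1.0 - p_win - p_loss
```

Even with point estimates, the team with the larger scoring rate comes out ahead, and a sizable tie probability remains, which is why the chapter later handles overtime separately.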
End of explanation """ lam = 3.4 goal_dist = thinkbayes.MakePoissonPmf(lam, 10) """ Explanation: Figure 7.1: Posterior distribution of the number of goals per game. Figure 7.1 shows the resulting posterior distributions for lam. Based on the first four games, the most likely values for lam are 2.6 for the Canucks and 2.9 for the Bruins. The distribution of goals To compute the probability that each team wins the next game, we need to compute the distribution of goals for each team. If we knew the value of lam exactly, we could use the Poisson distribution again. thinkbayes provides a method that computes a truncated approximation of a Poisson distribution: python def MakePoissonPmf(lam, high): pmf = Pmf() for k in xrange(0, high+1): p = EvalPoissonPmf(k, lam) pmf.Set(k, p) pmf.Normalize() return pmf The range of values in the computed Pmf is from 0 to high. So if the value of lam were exactly 3.4, we would compute: End of explanation """ goal_dist1 = MakeGoalPmf(suite1) goal_dist2 = MakeGoalPmf(suite2) thinkplot.Clf() thinkplot.PrePlot(num=2) thinkplot.Pmf(goal_dist1) thinkplot.Pmf(goal_dist2) """ Explanation: I chose the upper bound, 10, because the probability of scoring more than 10 goals in a game is quite low. That’s simple enough so far; the problem is that we don’t know the value of lam exactly. Instead, we have a distribution of possible values for lam. For each value of lam, the distribution of goals is Poisson. So the overall distribution of goals is a mixture of these Poisson distributions, weighted according to the probabilities in the distribution of lam. Given the posterior distribution of lam, here’s the code that makes the distribution of goals: ```python def MakeGoalPmf(suite): metapmf = thinkbayes.Pmf() for lam, prob in suite.Items(): pmf = thinkbayes.MakePoissonPmf(lam, 10) metapmf.Set(pmf, prob) mix = thinkbayes.MakeMixture(metapmf) return mix ``` For each value of lam we make a Poisson Pmf and add it to the meta-Pmf. 
I call it a meta-Pmf because it is a Pmf that contains Pmfs as its values. Then we use MakeMixture to compute the mixture (we saw MakeMixture in Section [mixture]). End of explanation """ goal_dist1 = MakeGoalPmf(suite1) goal_dist2 = MakeGoalPmf(suite2) diff = goal_dist1 - goal_dist2 """ Explanation: Figure 7.2: Distribution of goals in a single game. Figure 7.2 shows the resulting distribution of goals for the Bruins and Canucks. The Bruins are less likely to score 3 goals or fewer in the next game, and more likely to score 4 or more. The probability of winning To get the probability of winning, first we compute the distribution of the goal differential: End of explanation """ p_win = diff.ProbGreater(0) p_loss = diff.ProbLess(0) p_tie = diff.Prob(0) print(p_win) print(p_loss) print(p_tie) """ Explanation: The subtraction operator invokes Pmf.__sub__, which enumerates pairs of values and computes the difference. Subtracting two distributions is almost the same as adding, which we saw in Section [addends]. If the goal differential is positive, the Bruins win; if negative, the Canucks win; if 0, it’s a tie: End of explanation """ lam = 3.4 time_dist = thinkbayes.MakeExponentialPmf(lam, high=2, n=101) """ Explanation: With the distributions from the previous section, p_win is 46%, p_loss is 37%, and p_tie is 17%. In the event of a tie at the end of “regulation play,” the teams play overtime periods until one team scores. Since the game ends immediately when the first goal is scored, this overtime format is known as “sudden death.” Sudden death To compute the probability of winning in a sudden death overtime, the important statistic is not goals per game, but time until the first goal. The assumption that goal-scoring is a Poisson process implies that the time between goals is exponentially distributed. 
Given lam, we can compute the time between goals like this: End of explanation """ def MakeGoalTimePmf(suite): metapmf = thinkbayes.Pmf() for lam, prob in suite.Items(): pmf = thinkbayes.MakeExponentialPmf(lam, high=2, n=2001) metapmf.Set(pmf, prob) mix = thinkbayes.MakeMixture(metapmf, name=suite.name) return mix """ Explanation: high is the upper bound of the distribution. In this case I chose 2, because the probability of going more than two games without scoring is small. n is the number of values in the Pmf. If we know lam exactly, that’s all there is to it. But we don’t; instead we have a posterior distribution of possible values. So as we did with the distribution of goals, we make a meta-Pmf and compute a mixture of Pmfs. End of explanation """ time_dist1 = MakeGoalTimePmf(suite1) time_dist2 = MakeGoalTimePmf(suite2) p_overtime = thinkbayes.PmfProbLess(time_dist1, time_dist2) """ Explanation: Figure 7.3 below shows the resulting distributions. For time values less than one period (one third of a game), the Bruins are more likely to score. The time until the Canucks score is more likely to be longer. I set the number of values, n, fairly high in order to minimize the number of ties, since it is not possible for both teams to score simultaneously. Now we compute the probability that the Bruins score first: End of explanation """ p_tie = diff.Prob(0) p_overtime = thinkbayes.PmfProbLess(time_dist1, time_dist2) p_win = diff.ProbGreater(0) + p_tie * p_overtime import matplotlib.pyplot as plt thinkplot.PrePlot(num=2) thinkplot.Pmf(time_dist1) thinkplot.Pmf(time_dist2) plt.legend(); """ Explanation: For the Bruins, the probability of winning in overtime is 52%. Finally, the total probability of winning is the chance of winning at the end of regulation play plus the probability of winning in overtime. 
End of explanation """ # win the next two p_series = p_win**2 # split the next two, win the third p_series += 2 * p_win * (1-p_win) * p_win p_series """ Explanation: Figure 7.3: Distribution of time between goals. For the Bruins, the overall chance of winning the next game is 55%. To win the series, the Bruins can either win the next two games or split the next two and win the third. Again, we can compute the total probability: End of explanation """
IBMDecisionOptimization/docplex-examples
examples/cp/jupyter/sudoku.ipynb
apache-2.0
import sys try: import docplex.cp except: if hasattr(sys, 'real_prefix'): #we are in a virtual env. !pip install docplex else: !pip install --user docplex """ Explanation: Sudoku This tutorial includes everything you need to set up decision optimization engines and build constraint programming models. When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics. This notebook is part of Prescriptive Analytics for Python It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account and you can start using IBM Cloud Pak for Data as a Service right away). CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>: - <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used: - <i>Python 3.x</i> runtime: Community edition - <i>Python 3.x + DO</i> runtime: full edition - <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO addon in Watson Studio Premium for the full edition Table of contents: Describe the business problem How decision optimization (prescriptive analytics) can help Use decision optimization Step 1: Download the library Step 2: Model the Data Step 3: Set up the prescriptive model Define the decision variables Express the business constraints Express the objective Solve with Decision Optimization solve service Step 4: Investigate the solution and run an example analysis Summary Describe the business problem Sudoku is a logic-based, combinatorial number-placement puzzle. The objective is to fill a 9x9 grid with digits so that each column, each row, and each of the nine 3x3 sub-grids that compose the grid contains all of the digits from 1 to 9. The puzzle setter provides a partially completed grid, which for a well-posed puzzle has a unique solution.
References See https://en.wikipedia.org/wiki/Sudoku for details How decision optimization can help Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. <br/> For example: Automate complex decisions and trade-offs to better manage limited resources. Take advantage of a future opportunity or mitigate a future risk. Proactively update recommendations based on changing events. Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Download the library Run the following code to install Decision Optimization CPLEX Modeling library. The DOcplex library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier. End of explanation """ from docplex.cp.model import * from sys import stdout """ Explanation: Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization. 
End of explanation """ GRNG = range(9) """ Explanation: Step 2: Model the data Grid range End of explanation """ SUDOKU_PROBLEM_1 = ( (0, 0, 0, 0, 9, 0, 1, 0, 0), (2, 8, 0, 0, 0, 5, 0, 0, 0), (7, 0, 0, 0, 0, 6, 4, 0, 0), (8, 0, 5, 0, 0, 3, 0, 0, 6), (0, 0, 1, 0, 0, 4, 0, 0, 0), (0, 7, 0, 2, 0, 0, 0, 0, 0), (3, 0, 0, 0, 0, 1, 0, 8, 0), (0, 0, 0, 0, 0, 0, 0, 5, 0), (0, 9, 0, 0, 0, 0, 0, 7, 0), ) SUDOKU_PROBLEM_2 = ( (0, 7, 0, 0, 0, 0, 0, 4, 9), (0, 0, 0, 4, 0, 0, 0, 0, 0), (4, 0, 3, 5, 0, 7, 0, 0, 8), (0, 0, 7, 2, 5, 0, 4, 0, 0), (0, 0, 0, 0, 0, 0, 8, 0, 0), (0, 0, 4, 0, 3, 0, 5, 9, 2), (6, 1, 8, 0, 0, 0, 0, 0, 5), (0, 9, 0, 1, 0, 0, 0, 3, 0), (0, 0, 5, 0, 0, 0, 0, 0, 7), ) SUDOKU_PROBLEM_3 = ( (0, 0, 0, 0, 0, 6, 0, 0, 0), (0, 5, 9, 0, 0, 0, 0, 0, 8), (2, 0, 0, 0, 0, 8, 0, 0, 0), (0, 4, 5, 0, 0, 0, 0, 0, 0), (0, 0, 3, 0, 0, 0, 0, 0, 0), (0, 0, 6, 0, 0, 3, 0, 5, 4), (0, 0, 0, 3, 2, 5, 0, 0, 6), (0, 0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0, 0) ) try: import numpy as np import matplotlib.pyplot as plt VISU_ENABLED = True except ImportError: VISU_ENABLED = False def print_grid(grid): """ Print Sudoku grid """ for l in GRNG: if (l > 0) and (l % 3 == 0): stdout.write('\n') for c in GRNG: v = grid[l][c] stdout.write(' ' if (c % 3 == 0) else ' ') stdout.write(str(v) if v > 0 else '.') stdout.write('\n') def draw_grid(values): %matplotlib inline fig, ax = plt.subplots(figsize =(4,4)) min_val, max_val = 0, 9 R = range(0,9) for l in R: for c in R: v = values[c][l] s = " " if v > 0: s = str(v) ax.text(l+0.5,8.5-c, s, va='center', ha='center') ax.set_xlim(min_val, max_val) ax.set_ylim(min_val, max_val) ax.set_xticks(np.arange(max_val)) ax.set_yticks(np.arange(max_val)) ax.grid() plt.show() def display_grid(grid, name): stdout.write(name) stdout.write(":\n") if VISU_ENABLED: draw_grid(grid) else: print_grid(grid) display_grid(SUDOKU_PROBLEM_1, "PROBLEM 1") display_grid(SUDOKU_PROBLEM_2, "PROBLEM 2") display_grid(SUDOKU_PROBLEM_3, "PROBLEM 3") """ Explanation: Different 
problems. A zero means a cell to be filled with an appropriate value. End of explanation """ problem = SUDOKU_PROBLEM_3 """ Explanation: Choose your preferred problem (SUDOKU_PROBLEM_1 or SUDOKU_PROBLEM_2 or SUDOKU_PROBLEM_3). If you change the problem, be sure to re-run all cells below this one. End of explanation """ mdl = CpoModel(name="Sudoku") """ Explanation: Step 3: Set up the prescriptive model End of explanation """ grid = [[integer_var(min=1, max=9, name="C" + str(l) + str(c)) for l in GRNG] for c in GRNG] """ Explanation: Define the decision variables End of explanation """ for l in GRNG: mdl.add(all_diff([grid[l][c] for c in GRNG])) """ Explanation: Express the business constraints Add alldiff constraints for lines End of explanation """ for c in GRNG: mdl.add(all_diff([grid[l][c] for l in GRNG])) """ Explanation: Add alldiff constraints for columns End of explanation """ ssrng = range(0, 9, 3) for sl in ssrng: for sc in ssrng: mdl.add(all_diff([grid[l][c] for l in range(sl, sl + 3) for c in range(sc, sc + 3)])) """ Explanation: Add alldiff constraints for sub-squares End of explanation """ for l in GRNG: for c in GRNG: v = problem[l][c] if v > 0: grid[l][c].set_domain((v, v)) """ Explanation: Initialize known cells End of explanation """ print("\nSolving model....") msol = mdl.solve(TimeLimit=10) """ Explanation: Solve with Decision Optimization solve service End of explanation """ display_grid(problem, "Initial problem") if msol: sol = [[msol[grid[l][c]] for c in GRNG] for l in GRNG] stdout.write("Solve time: " + str(msol.get_solve_time()) + "\n") display_grid(sol, "Solution") else: stdout.write("No solution found\n") """ Explanation: Step 4: Investigate the solution and then run an example analysis End of explanation """
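Independently of CP Optimizer, the all-different conditions the model enforces can be checked in plain Python (a toy verifier with an assumed helper name, useful for validating any returned solution by hand): every row, column, and 3x3 sub-square of a completed grid must contain 1 through 9 exactly once.

```python
def is_valid_sudoku(grid):
    """Return True if a completed 9x9 grid satisfies all sudoku constraints."""
    full = set(range(1, 10))
    rows = all(set(row) == full for row in grid)
    cols = all({grid[r][c] for r in range(9)} == full for c in range(9))
    boxes = all(
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)} == full
        for br in range(0, 9, 3) for bc in range(0, 9, 3)
    )
    return rows and cols and boxes

# A known-valid completed grid built from a cyclic-shift pattern.
valid = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
broken = [row[:] for row in valid]
broken[0][0] = broken[0][1]  # introduce a duplicate in row 0
```

The three checks mirror the three families of all_diff constraints added to the model above, one per line, column, and sub-square.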
JasonNK/udacity-dlnd
dcgan-svhn/DCGAN.ipynb
mit
%matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data """ Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in 2015 and has seen impressive results in generating new images; you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST, so we'll need a deeper and more powerful network. This is accomplished by using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator; otherwise the rest of the implementation is the same.
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) """ Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation """ trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') """ Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation """ idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) """ Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
End of explanation """ def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), y """ Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z """ Explanation: Network Inputs Here, just creating some placeholders like normal. 
End of explanation """ def generator(z, output_dim, reuse=False, alpha=0.2, training=True): with tf.variable_scope('generator', reuse=reuse): # First fully connected layer x1 = tf.layers.dense(z, 4*4*512) # Reshape it to start the convolutional stack x1 = tf.reshape(x1, (-1, 4, 4, 512)) x1 = tf.layers.batch_normalization(x1, training=training) x1 = tf.maximum(alpha * x1, x1) # 4x4x512 now x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same') x2 = tf.layers.batch_normalization(x2, training=training) x2 = tf.maximum(alpha * x2, x2) # 8x8x256 now x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same') x3 = tf.layers.batch_normalization(x3, training=training) x3 = tf.maximum(alpha * x3, x3) # 16x16x128 now # Output layer logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same') # 32x32x3 now out = tf.tanh(logits) return out """ Explanation: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. 
End of explanation """ def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same') relu1 = tf.maximum(alpha * x1, x1) # 16x16x64 x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same') bn2 = tf.layers.batch_normalization(x2, training=True) relu2 = tf.maximum(alpha * bn2, bn2) # 8x8x128 x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same') bn3 = tf.layers.batch_normalization(x3, training=True) relu3 = tf.maximum(alpha * bn3, bn3) # 4x4x256 # Flatten it flat = tf.reshape(relu3, (-1, 4*4*256)) logits = tf.layers.dense(flat, 1) out = tf.sigmoid(logits) return out, logits """ Explanation: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately. 
End of explanation """ def model_loss(input_real, input_z, output_dim, alpha=0.2): """ Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) """ g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss """ Explanation: Model Loss Calculating the loss like before, nothing new here. 
End of explanation """ def model_opt(d_loss, g_loss, learning_rate, beta1): """ Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) """ # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt """ Explanation: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. End of explanation """ class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=alpha) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1) """ Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. 
End of explanation """ def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img, aspect='equal') plt.subplots_adjust(wspace=0, hspace=0) return fig, axes """ Explanation: Here is a function for displaying generated images. End of explanation """ def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(72, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True, training=False), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 6, 12, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples """ Explanation: And another 
function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt. End of explanation """ real_size = (32,32,3) z_size = 100 learning_rate = 0.0002 batch_size = 128 epochs = 25 alpha = 0.2 beta1 = 0.5 # Create the network net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1) dataset = Dataset(trainset, testset) losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5)) fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator', alpha=0.5) plt.plot(losses.T[1], label='Generator', alpha=0.5) plt.title("Training Losses") plt.legend() _ = view_samples(-1, samples, 6, 12, figsize=(10,5)) """ Explanation: Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. End of explanation """
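The spatial sizes noted in the layer comments of the generator and discriminator (4x4 -> 8x8 -> 16x16 -> 32x32 on the way up, 32x32 -> 16x16 -> 8x8 -> 4x4 on the way down) follow directly from the 'same'-padding size rules. A TensorFlow-free sketch of that bookkeeping (the helper names here are illustrative additions, not part of the notebook):

```python
import math

def conv2d_same_out(size, stride):
    # 'same' padding: a strided conv gives output size ceil(input / stride)
    return math.ceil(size / stride)

def conv2d_transpose_same_out(size, stride):
    # 'same' padding: a transposed conv multiplies the spatial size by the stride
    return size * stride

# Generator path: three stride-2 transposed convolutions
g = 4
g_sizes = [g]
for _ in range(3):
    g = conv2d_transpose_same_out(g, 2)
    g_sizes.append(g)
print(g_sizes)  # [4, 8, 16, 32]

# Discriminator path: three stride-2 convolutions
d = 32
d_sizes = [d]
for _ in range(3):
    d = conv2d_same_out(d, 2)
    d_sizes.append(d)
print(d_sizes)  # [32, 16, 8, 4]
```

This is why the flatten step in the discriminator reshapes to `4*4*256`: three halvings take the 32x32 input down to 4x4.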
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/05_03/Final/Multiple.ipynb
bsd-3-clause
import pandas as pd import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') """ Explanation: <h1>Multiples Lines, Single Plot</h1> End of explanation """ data_set_size = 15 low_mu, low_sigma = 50, 4.3 low_data_set = low_mu + low_sigma * np.random.randn(data_set_size) high_mu, high_sigma = 57, 5.2 high_data_set = high_mu + high_sigma * np.random.randn(data_set_size) days = list(range(1, data_set_size + 1)) plt.plot(days, low_data_set) plt.show() plt.plot(days, low_data_set, days, high_data_set) plt.show() plt.plot(days, low_data_set, days, low_data_set, "vm", days, high_data_set, days, high_data_set, "^k") plt.show() plt.plot( days, high_data_set, "^k") plt.show() plt.plot(days, low_data_set, days, low_data_set, "vm", days, high_data_set, days, high_data_set, "^k") plt.xlabel('Day') plt.ylabel('Temperature: degrees Fahrenheit') plt.title('Randomized temperature data') plt.show() plt.plot(days, low_data_set, days, high_data_set ) plt.xlabel('Day') plt.ylabel('Temperature: degrees Fahrenheit') plt.title('Randomized temperature data') plt.show() plt.plot( days, high_data_set, "^k") plt.xlabel('Day') plt.ylabel('Temperature: degrees Fahrenheit') plt.title('Randomized temperature data') plt.show() """ Explanation: <h2>random low and high temperature data</h2> End of explanation """ t1 = np.arange(0.0, 2.0, 0.1) t2 = np.arange(0.0, 2.0, 0.01) # note that plot returns a list of lines. The "l1, = plot" usage # extracts the first element of the list into l1 using tuple # unpacking.
So l1 is a Line2D instance, not a sequence of lines l1, = plt.plot(t2, np.exp(-t2)) l2, l3 = plt.plot(t2, np.sin(2 * np.pi * t2), '--go', t1, np.log(1 + t1), '.') l4, = plt.plot(t2, np.exp(-t2) * np.sin(2 * np.pi * t2), 'rs-.') plt.legend((l2, l4), ('oscillatory', 'damped'), loc='upper right', shadow=True) plt.xlabel('time') plt.ylabel('volts') plt.title('Damped oscillation') plt.show() """ Explanation: Next example from: http://matplotlib.org/examples/pylab_examples/legend_demo2.html End of explanation """
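The `l1, = plt.plot(...)` idiom in the comments above is ordinary single-element tuple unpacking, which can be seen without matplotlib at all. A minimal sketch (the string stand-ins below are hypothetical placeholders for Line2D objects):

```python
# plt.plot always returns a LIST of Line2D objects, even when only one
# line is drawn. The trailing comma unpacks that one-element list.
lines = ['line2d-instance']          # stand-in for plt.plot's return value
l1, = lines                          # unpack the single element
print(l1)                            # the element itself, not the list

# With two lines plotted, the same idiom needs two targets:
l2, l3 = ['first-line', 'second-line']
print(l2, l3)

# Unpacking fails loudly if the number of targets doesn't match:
try:
    only_one, = ['a', 'b']
except ValueError as err:
    print('ValueError:', err)
```

That is why `plt.legend((l2, l4), ...)` above can pass the unpacked names directly: each name is a single artist, not a list.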
mne-tools/mne-tools.github.io
0.22/_downloads/81308ca6ca6807326a79661c989cfcba/plot_make_report.ipynb
bsd-3-clause
# Authors: Teon Brooks <teon.brooks@gmail.com> # Eric Larson <larson.eric.d@gmail.com> # # License: BSD (3-clause) from mne.report import Report from mne.datasets import sample from mne import read_evokeds from matplotlib import pyplot as plt data_path = sample.data_path() meg_path = data_path + '/MEG/sample' subjects_dir = data_path + '/subjects' evoked_fname = meg_path + '/sample_audvis-ave.fif' """ Explanation: Make an MNE-Report with a Slider In this example, MEG evoked data are plotted in an HTML slider. End of explanation """ report = Report(image_format='png', subjects_dir=subjects_dir, info_fname=evoked_fname, subject='sample', raw_psd=False) # use False for speed here report.parse_folder(meg_path, on_error='ignore', mri_decim=10) """ Explanation: Do standard folder parsing (this can take a couple of minutes): End of explanation """ # Load the evoked data evoked = read_evokeds(evoked_fname, condition='Left Auditory', baseline=(None, 0), verbose=False) evoked.crop(0, .2) times = evoked.times[::4] # Create a list of figs for the slider figs = list() for t in times: figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100, show=False)) plt.close(figs[-1]) report.add_slider_to_section(figs, times, 'Evoked Response', image_format='png') # can also use 'svg' # Save the report report.save('my_report.html', overwrite=True) """ Explanation: Add a custom section with an evoked slider: End of explanation """
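The `evoked.times[::4]` step above simply decimates the time axis so the slider holds a manageable number of topomap frames. The slicing itself is plain Python and can be sketched without MNE (the millisecond values below are stand-ins, not real MEG timings):

```python
# Stand-in for an evoked.times vector sampled once per millisecond
# over the cropped 0-200 ms window used above.
times_ms = list(range(0, 201))       # 201 samples
frames = times_ms[::4]               # keep every 4th sample for the slider
print(len(times_ms), len(frames))    # 201 51
print(frames[:3], frames[-1])        # [0, 4, 8] 200
```

Each retained time point corresponds to one figure appended to `figs` and one position on the HTML slider.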
obestwalter/pet
ipynb/containers.ipynb
mit
aString = "123456" aList = [1, 2.0, 1j, 'hello', [], {}, (1, 2)] aSet = {1, 2.0, 1j, 'hello', (1, 2)} aTuple = (1, 2.0, 1j, 'hello', [], {}, (1, 2)) aDict = { 1: 1, 2.0: 2.0, 1j: 1j, (1, 2): (1, 2), 'hello': 'hello', 'list': [], 'dict': {}, } iterables = [ aString, aList, aSet, aTuple, aDict, ] for iterable in iterables: print('iterating over %s' % (type(iterable))) for element in iterable: print(element, end=' ') if isinstance(iterable, dict): print('(value: %s)' % (str(iterable[element])), end=' ') print() """ Explanation: Containers Create containers with literals End of explanation """ aList = list((1, 2.0, 'hello', [])) aSet = set(([1, 2.0, 'hello'])) aTuple = tuple([1, 2.0, 'hello', []]) aString = str(['a', 'b', 'c']) aDict = dict(key1='value', key2=1j, key3=[1, 2, {}]) iterables = [ aList, aSet, aString, aDict, ] for iterable in iterables: print(iterable, (type(iterable))) """ Explanation: Create containers with constructor functions End of explanation """ import collections.abc def func(): pass class Cnt(object): """Container but not iterable""" def __contains__(self, _): return True for obj in [1, 1.0, 1j, 's', None, True, [], (1, 2), {}, {1}, object, collections, func, Cnt, Cnt()]: isContainer = isinstance(obj, collections.abc.Container) isIterable = isinstance(obj, collections.abc.Iterable) isHashable = isinstance(obj, collections.abc.Hashable) print("%s %s; container: %s; iterable: %s; hashable: %s" % (obj, type(obj), isContainer, isIterable, isHashable)) """ Explanation: Explore more containers End of explanation """ for element in [1, 's', [1, 3], 1j]: print(element, end=' ') for letter in "hello world": print(letter, end=' ') mydict = {'key1': 1, 'key2': 2} for key in mydict: print("%s: %s" % (key, mydict[key])) """ Explanation: Iterables End of explanation """ for key, value in {'k1': 1, 'k2': 2}.items(): print("%s: %s" % (key, value)) """ Explanation: NOTE: dict.items() returns a tuple with key and value of the current dictionary element with each iteration
which is then unpacked directly into the two names key and value. End of explanation """
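The `Cnt` experiment above generalizes: the single-method ABCs in `collections.abc` recognize any class that defines the right special method, with no explicit inheritance needed. A small sketch of that structural behavior (the `ContainsOnly` class is an illustrative addition in the spirit of `Cnt`):

```python
from collections.abc import Container, Hashable, Iterable

class ContainsOnly:
    """Defines __contains__ but not __iter__."""
    def __contains__(self, item):
        return True

c = ContainsOnly()
print(isinstance(c, Container))   # True: structural check via __subclasshook__
print(isinstance(c, Iterable))    # False: no __iter__ defined
print(isinstance([], Hashable))   # False: lists are mutable, hence unhashable
print(isinstance((), Hashable))   # True
print('anything' in c)            # __contains__ makes the `in` operator work
```

This is the same duck typing that let `Cnt` report as a container without iterating; defining `__contains__` alone is enough for both the `in` operator and the `Container` check.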
mattmcd/PyBayes
scripts/dc_manipulating_time_series.ipynb
apache-2.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import os import yfinance as yf %matplotlib inline def ddir(name=None): data_dir = 'dc_manipulating_time_series/stock_data/' if name is None: print(os.listdir(data_dir)) else: return os.path.join(data_dir, name) ddir() """ Explanation: Manipulating Time Series Notebook to follow along with final chapter of the DataCamp Manipulating Time Series course. End of explanation """ nyse = pd.read_excel(ddir('listings.xlsx'), sheet_name='nyse', na_values='n/a') nyse.info() nyse.head() nyse.loc[nyse['Stock Symbol'].isin(['CPRT', 'ILMN']), :] """ Explanation: Import the data End of explanation """ # NASDAQ listings from https://www.nasdaq.com/market-activity/stocks/screener # Looks like the listings data for the exercise is from Nasdaq rather than NYSE nasdaq = pd.read_csv(ddir('nasdaq_screener_1646066842549.csv')).rename( columns={'Symbol': 'Stock Symbol', 'Market Cap': 'Market Capitalization', 'Name': 'Company Name'} ) nasdaq['Last Sale'] = nasdaq['Last Sale'].replace({'\$': '', ',': ''}, regex=True).astype(float) nasdaq.head() nasdaq.set_index('Stock Symbol', inplace=True) # Make Stock Symbol the index nasdaq.dropna(subset=['Sector'], inplace=True) # Remove stocks without sector info nasdaq['Market Capitalization'] /= 1e6 # Scale to million $ # Exercise only considers stocks that IPOd before 2019, need to filter # by this as well otherwise reading stock time series data fails # due to missing tickers. Also fiddle with sector label to get things matching. 
nasdaq = nasdaq.loc[(nasdaq['IPO Year'] < 2016), :] nasdaq.loc['GS', 'Sector'] = 'Bank' nasdaq.loc['ILMN', 'Sector'] = 'Biotech' nasdaq.loc['CPRT', 'Sector'] = 'Finance' nasdaq.info() # nasdaq.loc[nasdaq.index.isin(data.columns), :] allowed_sectors = ['Technology', 'Health Care', 'Consumer Services', 'Miscellaneous', 'Consumer Non-Durables', 'Bank', 'Health Care', 'Biotech', 'Energy', 'Basic Industries', 'Public Utilities', 'Transportation'] disallowed_stocks = ['LIN', 'TSLA', 'GNRC', 'MPLX', 'HDB', 'ABBV', 'KMI', 'FANG', 'RSG', 'CLR', 'AWK', 'WES', 'PSXP'] components = nasdaq.loc[ nasdaq.Sector.isin(allowed_sectors) & ~nasdaq.index.isin(disallowed_stocks), : ].groupby('Sector')['Market Capitalization'].nlargest(1) components.sort_values(ascending=False) components.info() # Index is MultiIndex so use get_level_values to extract tickers tickers = components.index.get_level_values('Stock Symbol') tickers columns = ['Company Name', 'Market Capitalization', 'Last Sale'] component_info = nasdaq.loc[tickers, columns].sort_values('Market Capitalization', ascending=False) pd.options.display.float_format = '{:,.2f}'.format component_info # Read the price time series data data = pd.read_csv( ddir('stock_data.csv'), parse_dates=['Date'], index_col='Date' ).loc[:, tickers.tolist()].dropna() data.info() # Look at price returns over period price_return = data.iloc[-1].div(data.iloc[0]).sub(1).mul(100).sort_values(ascending=False) price_return price_return.sort_values().plot(title='Stock Price Returns', kind='barh') plt.show() """ Explanation: Problem: it looks like the listing data Excel file from the Datacamp data download for this course contains different stocks to the ones we want. Some investigation shows that the NASDAQ listings do have all the stocks of interest. Some filtering by sector and removing stocks more recent than the ones used in the course, e.g. 
TSLA, is then required to get the tickers of the biggest companies in each sector to match those expected in the course (and whose prices are available in the data files). End of explanation """ shares = component_info['Market Capitalization'].div(component_info['Last Sale']) shares.sort_values(ascending=False) market_cap_series = data.mul(shares) _ = market_cap_series.plot(logy=True) # market_cap_series.first('D').append(market_cap_series.last('D')) # .append is deprecated pd.concat([market_cap_series.first('D'), market_cap_series.last('D')], axis=0) agg_mcap = market_cap_series.sum(axis=1) _ = agg_mcap.plot(title='Aggregate Market Cap') mcap_index = agg_mcap.div(agg_mcap.iloc[0]).mul(100) _ = mcap_index.plot(title='Market-Cap Weighted Index') """ Explanation: Build a Market Cap Weighted Index End of explanation """ # Total performance of companies in index print(f'Companies in index added ${agg_mcap.iloc[-1] - agg_mcap.iloc[0]:,.2f}M in value') change = pd.concat([market_cap_series.first('D'), market_cap_series.last('D')], axis=0) change.diff().iloc[-1].sort_values() weights = component_info['Market Capitalization'].div(component_info['Market Capitalization'].sum()) weights index_return = (mcap_index.iloc[-1]/mcap_index.iloc[0] - 1)*100 index_return weighted_return = weights.mul(index_return) _ = weighted_return.sort_values().plot(kind='barh') df_index = mcap_index.to_frame('Index') df_index['SP500'] = pd.read_csv(ddir('sp500.csv'), parse_dates=['date'], index_col='date') df_index['SP500'] = df_index['SP500'].div(df_index['SP500'].iloc[0]).mul(100) _ = df_index.plot() # Multi period returns def multi_period_returns(r): return (np.prod(1 + r) - 1) * 100 _ = df_index.pct_change().rolling('30D').apply(multi_period_returns).plot() """ Explanation: Evaluate Index Performance End of explanation """ daily_returns = data.pct_change().dropna() correlations = daily_returns.corr() correlations _ = sns.heatmap(correlations, annot=True, cmap='RdBu', center=0) _ = 
plt.xticks(rotation=45) _ = plt.title('Daily Return Correlations') # correlations.to_excel(excel_writer=ddir('dc_correlation.xls'), sheet_name='correlations', startrow=1, startcol=1) with pd.ExcelWriter(ddir('dc_stock_data.xlsx')) as writer: correlations.to_excel(excel_writer=writer, sheet_name='correlations') data.to_excel(excel_writer=writer, sheet_name='prices') data.pct_change().to_excel(excel_writer=writer, sheet_name='returns') """ Explanation: Index Correlation and Exporting to Excel End of explanation """ from statsmodels.tsa.arima_process import ArmaProcess from statsmodels.graphics.tsaplots import plot_acf, plot_pacf from statsmodels.tsa.stattools import adfuller, coint from statsmodels.tsa.arima.model import ARIMA # replaces arima_model.ARMA, removed in statsmodels 0.13 daily_returns.head() len(daily_returns) _ = plot_acf(daily_returns.AMZN, lags=20, alpha=0.05) _ = plot_acf((1+daily_returns).resample('Q').prod().AMZN -1 , lags=20, alpha=0.05) quarterly_returns = data.resample('Q').last().pct_change().dropna() quarterly_returns.head() _ = plot_acf(quarterly_returns.AMZN, lags=20, alpha=0.05) ((1 + daily_returns).resample('Q').prod() - 1).head() # aapl = yf.Ticker('aapl') # aapl_historical = aapl.history(start="2022-03-01", end="2022-03-05", interval="1m") # aapl_historical.to_csv(ddir('aapl_data_20220301.csv')) aapl_historical = pd.read_csv(ddir('aapl_data_20220301.csv'), parse_dates=['Datetime'], index_col='Datetime') aapl_historical.head() _ = aapl_historical.loc['2022-03-01', 'Close'].plot() aapl_ret_20220301 = aapl_historical.loc['2022-03-01', 'Close'].pct_change().dropna() aapl_ret_20220301.head() _ = plot_acf(aapl_ret_20220301, lags=60) _ = aapl_historical.loc['2022-03-01 9:30':'2022-03-01 10:00', 'Close'].plot() help(coint) #(data.AAPL, data.AMZN) """ Explanation: Time Series Analysis Second course in Skill Track - look at AR, MA, ACF etc End of explanation """
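The `multi_period_returns` helper defined earlier compounds simple returns rather than summing them. That arithmetic can be checked in isolation, without pandas (the sample return series below are made-up numbers, and the standalone function is an illustrative re-implementation of the same formula):

```python
def compound_return_pct(returns):
    # (1 + r1) * (1 + r2) * ... - 1, expressed as a percentage,
    # mirroring multi_period_returns above without numpy.
    total = 1.0
    for r in returns:
        total *= (1.0 + r)
    return (total - 1.0) * 100.0

# Two consecutive 10% gains compound to 21%, not 20%:
print(compound_return_pct([0.10, 0.10]))   # ~21.0
# A 50% loss needs a 100% gain just to get back to flat:
print(compound_return_pct([-0.5, 1.0]))    # 0.0
```

This is why the rolling 30-day window above multiplies `(1 + r)` across the window instead of summing daily percentage changes.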
telecom-research/crtc-scraper
_code/notebooks/CRTC-Hearing-TextAnalysis.ipynb
mit
# importing code modules import json import ijson from ijson import items import pprint from tabulate import tabulate import matplotlib.pyplot as plt import re import csv import sys import codecs import nltk import nltk.collocations import collections import statistics from nltk.metrics.spearman import * from nltk.collocations import * from nltk.stem import WordNetLemmatizer # This is a function for reading the contents of files def read_file(filename): "Read the contents of FILENAME and return as a string." infile = codecs.open(filename, 'r', 'utf-8') contents = infile.read() infile.close() return contents """ Explanation: CRTC Hearing Text Analysis The purpose of this notebook is to illustrate the method of text analysis using a corpus created from digital content published by the CRTC. This is the second part in a two-part process, the first of which is a description of the code that 'scraped' the CRTC webpage to create the corpus. Setting Up The code below imports the modules that are required to process the text. End of explanation """ # loading the JSON file filename = "../scrapy/hearing_result6.json" # loading the stopwords file stopwords = read_file('cornellStopWords.txt') customStopwords = stopwords.split() # reads the file and assigns the keys and values to a Python dictionary structure with open(filename, 'r') as f: objects = ijson.items(f, 'item') file = list(objects) """ Explanation: Reading the File This code loads and then reads the necessary files: the json file with all the hearing text, and a txt file with a list of stopwords, taken from here: http://www.lextek.com/manuals/onix/stopwords2.html. I've also added a few custom words. End of explanation """ # checks to see how many records we have print(len(file)) """ Explanation: A bit of error checking here to confirm the number of records in the file. We should have 14. End of explanation """ # commenting this out to make the github notebook more readable. # prints all content in a single record. 
Changing the number shows a different record file[0] """ Explanation: Changing the number in the code below will print a different record from the file. Please remember that in coding, numbered lists begin at 0. End of explanation """ # iterates through each record in the file for row in file: # prints the title of each record and its url print(row['title'], ":", row['url']) """ Explanation: Here is a bit more error checking to confirm the record titles and their urls. End of explanation """ # appends all of the text items to a single string object (rather than a list) joined_text = [] for row in file: joined_text.append(' '.join(row['text'])) # shows the text. Changing the number displays a different record... # ...changing/removing the second number limits/expands the text shown. print(joined_text[5][:750]) """ Explanation: And a bit more processing to make the text more readable. It's printed below. End of explanation """ # splits the text string in each record into a list of separate words token_joined = [] for words in joined_text: # splits the text into a list of words text = words.split() # makes all words lowercase clean = [w.lower() for w in text if w.isalpha()] # applies stopword removal text = [w for w in clean if w not in customStopwords] token_joined.append(text) """ Explanation: Text Analysis Processing This is the begining of the first processing for the text analysis. Here we will split all the words apart, make them all lowercase, and remove the punctuation, numbers, and words on the stopword list. 
End of explanation """ #for title,word in zip(file,token_joined): # print(title['title'],"guarantee:", word.count('guarantee'), "guarantees:", \ # word.count('guarantees'), "guaranteed:", word.count('guaranteed')) for title,word in zip(file,token_joined): print(title['title'],"service:", word.count('service'),"services:", word.count('services')) """ Explanation: Since a word of interest is guarantee, here is a list of how many times that word (and its variations) appear in each record. End of explanation """ # splits the text from the record into a list of individual words words = joined_text[0].split() #assigns NLTK functionality to the text text = nltk.Text(words) # prints a concordance output for the selected word (shown in green) print(text.concordance('services', lines=25)) #creates a new file that can be written by the print queue fileconcord = codecs.open('April11_service_concord.txt', 'w', 'utf-8') #makes a copy of the empty print queue, so that we can return to it at the end of the function tmpout = sys.stdout #stores the text in the print queue sys.stdout = fileconcord #generates and prints the concordance, the number pertains to the total number of bytes per line text.concordance("service", 79, sys.maxsize) #closes the file fileconcord.close() #returns the print queue to an empty state sys.stdout = tmpout """ Explanation: Concordance It looks like record number 5 has the most occurences of the word guarantee. The code below isolates the record and creates a concordance based on the selected word. End of explanation """ # shows the text list for a given record. Changing the first number displays a... # ...different record, changing/removing the second number limits/expands the text shown print(token_joined[5][:50]) """ Explanation: Below is what the text looks like after the initial processing, without punctuation, numbers, or stopwords. 
End of explanation
"""
# creates a variable for the lemmatizing function
wnl = WordNetLemmatizer()

# lemmatizes all of the verbs
lemm = []
for record in token_joined:
    for word in record:
        lemm.append(wnl.lemmatize(word, 'v'))

'''
lemm = []
for word in token_joined[13]:
    lemm.append(wnl.lemmatize(word, 'v'))
'''

# lemmatizes all of the nouns
lems = []
for word in lemm:
    lems.append(wnl.lemmatize(word, 'n'))
"""
Explanation: Lemmatization
Some more preparation for the text processing. The code below works on all of the records, creating one master list of words which is then lemmatized.
End of explanation
"""
# just making sure the lemmatizer has worked
#print("guarantee:", lems.count('guarantee'), "guarantees:", \
#      lems.count('guarantees'), "guaranteed:", lems.count('guaranteed'))
print("service:", lems.count('service'), lems.count('services'))
"""
Explanation: Here we are checking to make sure the lemmatizer has worked. Now the word service only appears in one form.
End of explanation
"""
# counting the number of words in each record
for name, each in zip(file,token_joined):
    print(name['title'], ":",len(each), "words")
"""
Explanation: Word Frequency
Here is a count of the number of words in each record. While this data isn't terribly useful 'as is', we can make a few assumptions about the text here. Notably that some of the hearings were much longer than others.
End of explanation
"""
docfreq = []
for words in token_joined:
    docfreq.append(nltk.FreqDist(words))

for name, words in zip(file, docfreq):
    print(name['title'], ":", words.most_common(5))
"""
Explanation: Here we will count the five most common words in each record.
End of explanation
"""
# prints the 10 most common bigrams
colText = nltk.Text(lems)
colText.collocations(10)
"""
Explanation: These are the 10 most common word pairs in the text.
End of explanation
"""
# creates a list of bigrams (ngrams of 2), printing the first 5
colBigrams = list(nltk.ngrams(colText, 2))
colBigrams[:5]
"""
Explanation: Error checking to make sure the code is processing the text properly.
End of explanation
"""
# error checking. There should be one less bigram than total words
print("Number of words:", len(lems))
print("Number of bigrams:", len(colBigrams))
"""
Explanation: More error checking.
End of explanation
"""
# frequency plot with stopwords removed
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
fd = nltk.FreqDist(colText)
fd.plot(25)
"""
Explanation: Below is a frequency plot showing the occurrence of the 25 most frequent words.
End of explanation
"""
# loads bigram code from NLTK
bigram_measures = nltk.collocations.BigramAssocMeasures()

# bigrams with a window size of 2 words
finder = BigramCollocationFinder.from_words(lems, window_size = 2)

# ngrams with 'word of interest' as a member
word_filter = lambda *w: 'service' not in w
# only bigrams that contain the 'word of interest'
finder.apply_ngram_filter(word_filter)

# filter results based on statistical test
# calculates the raw frequency as an actual number and percentage of total words
act = finder.ngram_fd.items()
raw = finder.score_ngrams(bigram_measures.raw_freq)
# log-likelihood ratio
log = finder.score_ngrams(bigram_measures.likelihood_ratio)
"""
Explanation: Collocations
Here we are preparing the text to search for bigrams containing the word service. Because the window size is 2, this code only finds words appearing immediately before or immediately after service.
End of explanation
"""
# prints list of results.
print(tabulate(log, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", \
               numalign="left"))

# prints list of results.
print(tabulate(act, headers = ["Collocate", "Actual"], floatfmt=".3f", \
               numalign="left"))

with open('digital-literacy_collocate_Act.csv','w') as f:
    w = csv.writer(f)
    w.writerows(act)
"""
Explanation: Research suggests that this is among the most reliable statistical tests for sparse, low-frequency data.
Log-Likelihood Ratio
The Log-likelihood ratio calculates the size and significance of the difference between the observed and expected frequencies of bigrams and assigns a score based on the result, taking into account the overall size of the corpus. The larger the difference between the observed and the expected, the higher the score, and the more statistically significant the collocate is. The Log-likelihood ratio is my preferred test for collocates because it does not rely on a normal distribution, and for this reason it can account for sparse or low-frequency bigrams. It does not over-represent low-frequency bigrams with inflated scores, as the test only reports how much more likely it is that the frequencies are different than that they are the same. The drawback to the Log-likelihood ratio is that it cannot be used to compare scores across corpora.
An important note here is that words will appear twice in the following list. As the ngrams can appear both before and after the word, care must be taken to identify duplicate occurrences in the list below and then combine the totals.
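The arithmetic behind the score can be sketched by hand for a single bigram. The function below is a minimal, assumed reconstruction of the classic G² ("Dunning") log-likelihood for bigrams — NLTK's likelihood_ratio handles this internally, and its exact scaling may differ — built from the 2×2 contingency table of observed versus expected counts:

```python
import math

def log_likelihood(n_ii, n_ix, n_xi, n_xx):
    """G-squared score for a bigram (w1, w2).

    n_ii: count of the bigram (w1, w2)
    n_ix: total count of w1; n_xi: total count of w2
    n_xx: total number of bigrams in the corpus
    """
    # observed 2x2 contingency table: (w1,w2), (w1,~w2), (~w1,w2), (~w1,~w2)
    obs = [n_ii,
           n_ix - n_ii,
           n_xi - n_ii,
           n_xx - n_ix - n_xi + n_ii]
    # expected counts if w1 and w2 were independent
    exp = [n_ix * n_xi / n_xx,
           n_ix * (n_xx - n_xi) / n_xx,
           (n_xx - n_ix) * n_xi / n_xx,
           (n_xx - n_ix) * (n_xx - n_xi) / n_xx]
    # 2 * sum of O * ln(O / E); empty observed cells contribute nothing
    return 2 * sum(o * math.log(o / e) for o, e in zip(obs, exp) if o > 0)
```

When the observed bigram count equals the expected count, the score is 0; the more the pair co-occurs beyond chance, the larger the score grows — which is exactly why the larger differences described above rank higher.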
End of explanation
"""
##################################################################
############### sorts list of log-likelihood scores ##############
##################################################################

# group bigrams by first and second word in bigram
prefix_keys = collections.defaultdict(list)
for key, l in log:
    # first word
    prefix_keys[key[0]].append((key[1], l))
    # second word
    prefix_keys[key[1]].append((key[0], l))

# sort bigrams by strongest association
for key in prefix_keys:
    prefix_keys[key].sort(key = lambda x: -x[1])

# keeps the top 80 results
logkeys = prefix_keys['service'][:80]
"""
Explanation: Here is an example of words appearing twice. Below are both instances of the ngram 'quality'. The first instance appears before 'service' and the second occurs after.
A bit more processing to clean up the list.
End of explanation
"""
from tabulate import tabulate

print(tabulate(logkeys, headers = ["Collocate", "Log-Likelihood"], floatfmt=".3f", \
               numalign="left"))

with open('service_collocate_Log.csv','w') as f:
    w = csv.writer(f)
    w.writerows(logkeys)
"""
Explanation: Here is a list showing only the collocates for the word service. Again, watch for duplicate words below.
End of explanation
"""
# working on a regex to split the text by speaker
diced = []
for words in joined_text:
    diced.append(re.split('(\d+(\s)\w+[A-Z](\s|.\s)\w+[A-Z]:\s)', words))

print(diced[8])

init_names = []
for words in joined_text:
    init_names.append(set(re.findall('[A-Z]{3,}', words)))

print(init_names)

with open('initialNames.csv','w') as f:
    w = csv.writer(f)
    w.writerows(init_names)
"""
Explanation:
End of explanation
"""
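As a quick sanity check, the speaker-splitting pattern above can be exercised on an invented line (the sample text is an assumption, not data from the corpus). Here the inner groups are made non-capturing, so re.split returns only the full speaker tags and the speech segments:

```python
import re

# same shape as the pattern in the cell above, with inner groups non-capturing
speaker = re.compile(r'(\d+\s\w+[A-Z](?:\s|.\s)\w+[A-Z]:\s)')

# invented sample line for illustration only
sample = "1 MR SMITH: Thank you. 2 MS JONES: I agree."
parts = speaker.split(sample)
```

Splitting yields an alternating sequence of speaker tags and what each speaker said, which is the structure the diced list is aiming for.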
def pretty_print_review_and_label(i): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") g = open('reviews.txt','r') # What we know! reviews = list(map(lambda x:x[:-1],g.readlines())) g.close() g = open('labels.txt','r') # What we WANT to know! labels = list(map(lambda x:x[:-1].upper(),g.readlines())) g.close() """ Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network by Andrew Trask Twitter: @iamtrask Blog: http://iamtrask.github.io What You Should Already Know neural networks, forward and back-propagation stochastic gradient descent mean squared error and train/test splits Where to Get Help if You Need it Re-watch previous Udacity Lectures Leverage the recommended Course Reading Material - Grokking Deep Learning (Check inside your classroom for a discount code) Shoot me a tweet @iamtrask Tutorial Outline: Intro: The Importance of "Framing a Problem" (this lesson) Curate a Dataset Developing a "Predictive Theory" PROJECT 1: Quick Theory Validation Transforming Text to Numbers PROJECT 2: Creating the Input/Output Data Putting it all together in a Neural Network (video only - nothing in notebook) PROJECT 3: Building our Neural Network Understanding Neural Noise PROJECT 4: Making Learning Faster by Reducing Noise Analyzing Inefficiencies in our Network PROJECT 5: Making our Network Train and Run Faster Further Noise Reduction PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary Analysis: What's going on in the weights? Lesson: Curate a Dataset<a id='lesson_1'></a> The cells from here until Project 1 include code Andrew shows in the videos leading up to mini project 1. We've included them so you can run the code along with the videos without having to type in everything. End of explanation """ len(reviews) reviews[0] labels[0] """ Explanation: Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. 
If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
"""
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
"""
Explanation: Lesson: Develop a Predictive Theory<a id='lesson_2'></a>
End of explanation
"""
from collections import Counter
import numpy as np
"""
Explanation: Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library.
End of explanation
"""
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
"""
Explanation: We'll create three Counter objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
End of explanation
"""
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
"""
Explanation: TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words.
If you use split() instead, you'll get slightly different results than what the videos and solutions show.
End of explanation
"""
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()

# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
"""
Explanation: Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
End of explanation
"""
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()

# TODO: Calculate the ratios of positive and negative uses of the most common words
#       Consider words to be "common" if they've been used at least 100 times
"""
Explanation: As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
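One possible shape for that loop is sketched below. The counts here are invented stand-ins so the cell is self-contained — in the notebook the three counters would already be filled from the real Yelp reviews:

```python
from collections import Counter

# invented example counts, not the real Yelp numbers
positive_counts = Counter({'amazing': 500, 'the': 10000, 'terrible': 30})
negative_counts = Counter({'amazing': 80, 'the': 9500, 'terrible': 400})
total_counts = positive_counts + negative_counts

pos_neg_ratios = Counter()
for term, cnt in total_counts.most_common():
    if cnt >= 100:  # only consider "common" words
        pos_neg_ratios[term] = positive_counts[term] / float(negative_counts[term] + 1)
```

With these stand-in counts, "amazing" lands well above 1, "terrible" well below 1, and "the" close to 1 — the same pattern described below for the real data.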
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the ratios you've calculated for a few words:
End of explanation
"""
# TODO: Convert ratios to logs
"""
Explanation: Looking closely at the values you just calculated, we see the following:

Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.

Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:

Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words.
So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so that a word's absolute distance from neutral would indicate how much sentiment (positive or negative) that word conveys. When comparing absolute values it's easier to do that around zero than one.

To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
End of explanation
"""
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
"""
Explanation: Examine the new ratios you've calculated for the same words from before:
End of explanation
"""
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()

# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]

# Note: Above is the code Andrew uses in his solution video,
#       so we've included it here to avoid confusion.
#       If you explore the documentation for the Counter class,
#       you will see you could also find the 30 least common
#       words like this: pos_neg_ratios.most_common()[:-31:-1]
"""
Explanation: If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment.
And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments. Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.) The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).) You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. End of explanation """ from IPython.display import Image review = "This was a horrible, terrible movie." Image(filename='sentiment_network.png') review = "The movie was excellent" Image(filename='sentiment_network_pos.png') """ Explanation: End of Project 1. Watch the next video to see Andrew's solution, then continue on to the next lesson. Transforming Text into Numbers<a id='lesson_3'></a> The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. End of explanation """ # TODO: Create set named "vocab" containing all of the words from all of the reviews vocab = None """ Explanation: Project 2: Creating the Input/Output Data<a id='project_2'></a> TODO: Create a set named vocab that contains every word in the vocabulary. 
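A minimal sketch of one way to fill vocab is below. It uses a tiny invented stand-in for reviews so the cell runs on its own — in the notebook you would use the full reviews list loaded earlier:

```python
# tiny stand-in for the real reviews list (illustration only)
reviews = ["this movie was excellent", "this movie was terrible"]

vocab = set()
for review in reviews:
    for word in review.split(" "):
        vocab.add(word)

# equivalently, as a single comprehension:
# vocab = set(word for review in reviews for word in review.split(" "))
```

Because vocab is a set, each word is stored once no matter how many reviews it appears in.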
End of explanation """ vocab_size = len(vocab) print(vocab_size) """ Explanation: Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 End of explanation """ from IPython.display import Image Image(filename='sentiment_network_2.png') """ Explanation: Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. End of explanation """ # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros layer_0 = None """ Explanation: TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. End of explanation """ layer_0.shape from IPython.display import Image Image(filename='sentiment_network.png') """ Explanation: Run the following cell. It should display (1, 74074) End of explanation """ # Create a dictionary of words in the vocabulary mapped to index positions # (to be used in layer_0) word2index = {} for i,word in enumerate(vocab): word2index[word] = i # display the map of words to indices word2index """ Explanation: layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. End of explanation """ def update_input_layer(review): """ Modify the global layer_0 to represent the vector form of review. The element at a given index of layer_0 should represent how many times the given word occurs in the review. 
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    global layer_0
    # clear out previous state by resetting the layer to be all 0s
    layer_0 *= 0

    # TODO: count how many times each word is used in the given review and store the results in layer_0
"""
Explanation: TODO: Complete the implementation of update_input_layer. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside layer_0.
End of explanation
"""
update_input_layer(reviews[0])
layer_0
"""
Explanation: Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0.
End of explanation
"""
def get_target_for_label(label):
    """Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
    # TODO: Your code here
"""
Explanation: TODO: Complete the implementation of get_target_for_label. It should return 0 or 1, depending on whether the given label is NEGATIVE or POSITIVE, respectively.
End of explanation
"""
labels[0]
get_target_for_label(labels[0])
"""
Explanation: Run the following two cells. They should print out 'POSITIVE' and 1, respectively.
End of explanation
"""
labels[1]
get_target_for_label(labels[1])
"""
Explanation: Run the following two cells. They should print out 'NEGATIVE' and 0, respectively.
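One possible way the two TODOs above could be completed is sketched here. A tiny invented vocabulary keeps the cell self-contained — in the notebook you would keep the vocab, word2index, and layer_0 already defined:

```python
import numpy as np

vocab = ["the", "movie", "was", "great"]        # invented stand-in vocabulary
word2index = {w: i for i, w in enumerate(vocab)}
layer_0 = np.zeros((1, len(vocab)))

def update_input_layer(review):
    """Store the count of each known word from `review` in layer_0."""
    global layer_0
    # clear out previous state by resetting the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        if word in word2index:                  # skip out-of-vocabulary words
            layer_0[0][word2index[word]] += 1

def get_target_for_label(label):
    """POSITIVE -> 1, NEGATIVE -> 0."""
    return 1 if label == "POSITIVE" else 0

update_input_layer("the movie was great great")
```

After the call above, the slot for "great" holds 2 and each other seen word holds 1 — exactly the counting behavior described in the TODO.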
End of explanation
"""
import time
import sys
import numpy as np

# Encapsulate our neural network in a class
class SentimentNetwork:
    def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
        """Create a SentimentNetwork with the given settings
        Args:
            reviews(list) - List of reviews used for training
            labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
            hidden_nodes(int) - Number of nodes to create in the hidden layer
            learning_rate(float) - Learning rate to use while training
        """
        # Assign a seed to our random number generator to ensure we get
        # reproducible results during development
        np.random.seed(1)

        # process the reviews and their associated labels so that everything
        # is ready for training
        self.pre_process_data(reviews, labels)

        # Build the network to have the number of hidden nodes and the learning rate that
        # were passed into this initializer. Make the same number of input nodes as
        # there are vocabulary words and create a single output node.
        self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)

    def pre_process_data(self, reviews, labels):

        review_vocab = set()
        # TODO: populate review_vocab with all of the words in the given reviews
        #       Remember to split reviews into individual words
        #       using "split(' ')" instead of "split()".

        # Convert the vocabulary set to a list so we can access words via indices
        self.review_vocab = list(review_vocab)

        label_vocab = set()
        # TODO: populate label_vocab with all of the words in the given labels.
        #       There is no need to split the labels because each one is a single word.

        # Convert the label vocabulary set to a list so we can access labels via indices
        self.label_vocab = list(label_vocab)

        # Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab) self.label_vocab_size = len(self.label_vocab) # Create a dictionary of words in the vocabulary mapped to index positions self.word2index = {} # TODO: populate self.word2index with indices for all the words in self.review_vocab # like you saw earlier in the notebook # Create a dictionary of labels mapped to index positions self.label2index = {} # TODO: do the same thing you did for self.word2index and self.review_vocab, # but for self.label2index and self.label_vocab instead def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Store the number of nodes in input, hidden, and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Store the learning rate self.learning_rate = learning_rate # Initialize weights # TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between # the input layer and the hidden layer. self.weights_0_1 = None # TODO: initialize self.weights_1_2 as a matrix of random values. # These are the weights between the hidden layer and the output layer. self.weights_1_2 = None # TODO: Create the input layer, a two-dimensional matrix with shape # 1 x input_nodes, with all values initialized to zero self.layer_0 = np.zeros((1,input_nodes)) def update_input_layer(self,review): # TODO: You can copy most of the code you wrote for update_input_layer # earlier in this notebook. # # However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE # THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS. # For example, replace "layer_0 *= 0" with "self.layer_0 *= 0" pass def get_target_for_label(self,label): # TODO: Copy the code you wrote for get_target_for_label # earlier in this notebook. 
        pass

    def sigmoid(self,x):
        # TODO: Return the result of calculating the sigmoid activation function
        #       shown in the lectures
        pass

    def sigmoid_output_2_derivative(self,output):
        # TODO: Return the derivative of the sigmoid activation function,
        #       where "output" is the original output from the sigmoid function
        pass

    def train(self, training_reviews, training_labels):

        # make sure we have a matching number of reviews and labels
        assert(len(training_reviews) == len(training_labels))

        # Keep track of correct predictions to display accuracy during training
        correct_so_far = 0

        # Remember when we started for printing time statistics
        start = time.time()

        # loop through all the given reviews and run a forward and backward pass,
        # updating weights for every item
        for i in range(len(training_reviews)):

            # TODO: Get the next review and its correct label

            # TODO: Implement the forward pass through the network.
            #       That means use the given review to update the input layer,
            #       then calculate values for the hidden layer,
            #       and finally calculate the output layer.
            #
            #       Do not use an activation function for the hidden layer,
            #       but use the sigmoid activation function for the output layer.

            # TODO: Implement the back propagation pass here.
            #       That means calculate the error for the forward pass's prediction
            #       and update the weights in the network according to their
            #       contributions toward the error, as calculated via the
            #       gradient descent and back propagation algorithms you
            #       learned in class.

            # TODO: Keep track of correct predictions. To determine if the prediction was
            #       correct, check that the absolute value of the output error
            #       is less than 0.5. If so, add one to the correct_so_far count.

            # For debug purposes, print out our prediction accuracy and speed
            # throughout the training process.
elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \ + " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%") if(i % 2500 == 0): print("") def test(self, testing_reviews, testing_labels): """ Attempts to predict the labels for the given testing_reviews, and uses the test_labels to calculate the accuracy of those predictions. """ # keep track of how many correct predictions we make correct = 0 # we'll time how many predictions per second we make start = time.time() # Loop through each of the given reviews and call run to predict # its label. for i in range(len(testing_reviews)): pred = self.run(testing_reviews[i]) if(pred == testing_labels[i]): correct += 1 # For debug purposes, print out our prediction accuracy and speed # throughout the prediction process. elapsed_time = float(time.time() - start) reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0 sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \ + "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \ + " #Correct:" + str(correct) + " #Tested:" + str(i+1) \ + " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%") def run(self, review): """ Returns a POSITIVE or NEGATIVE prediction for the given review. """ # TODO: Run a forward pass through the network, like you did in the # "train" function. That means use the given review to # update the input layer, then calculate values for the hidden layer, # and finally calculate the output layer. # # Note: The review passed into this function for prediction # might come from anywhere, so you should convert it # to lower case prior to using it. # TODO: The output layer should now contain a prediction. 
        #       Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
        #       and `NEGATIVE` otherwise.
        pass
"""
Explanation: End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
"""
Explanation: Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.
End of explanation
"""
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to actually train the network.
During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network.
End of explanation
"""
from IPython.display import Image

review = "This was a horrible, terrible movie."

Image(filename='sentiment_network.png')

def update_input_layer(review):

    global layer_0

    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    for word in review.split(" "):
        layer_0[0][word2index[word]] += 1

update_input_layer(reviews[0])

layer_0

review_counter = Counter()
for word in reviews[0].split(" "):
    review_counter[word] += 1
review_counter.most_common()
"""
Explanation: With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything.
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 3 lesson
# -Modify it to reduce noise, like in the video
"""
Explanation: Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1.
End of explanation """ # TODO: -Copy the SentimentNetwork class from Project 4 lesson # -Modify it according to the above instructions """ Explanation: Project 5: Making our Network More Efficient<a id='project_5'></a> TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following: * Copy the SentimentNetwork class from the previous project into the following cell. * Remove the update_input_layer function - you will not need it in this version. * Modify init_network: You no longer need a separate input layer, so remove any mention of self.layer_0 You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero Modify train: Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step. At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review. Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review. When updating weights_0_1, only update the individual weights that were used in the forward pass. Modify run: Remove call to update_input_layer Use self's layer_1 instead of a local layer_1 object. 
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review.
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to recreate the network and train it once again.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
End of explanation
"""
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
# density=True already normalizes the histogram
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above",
           title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
    frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above",
           title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
"""
Explanation: End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
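The efficiency trick that Project 5 asks for boils down to a simple identity: multiplying a binary (0/1) input vector by the weight matrix gives exactly the same result as summing the rows of the matrix at the active word indices, so all of the multiplications by zero can be skipped. A minimal, self-contained numpy sketch of that equivalence (the sizes and indices below are invented purely for illustration):

```python
import numpy as np

rng = np.random.RandomState(42)
vocab_size, hidden_nodes = 20, 5
weights_0_1 = rng.randn(vocab_size, hidden_nodes)

# hypothetical indices of the words present in one review
indices = [3, 7, 12]

# inefficient version: full matrix product with a mostly-zero layer_0
layer_0 = np.zeros((1, vocab_size))
layer_0[0, indices] = 1
layer_1_full = layer_0.dot(weights_0_1)

# efficient version: just add up the rows for the active indices
layer_1_sparse = np.zeros((1, hidden_nodes))
for index in indices:
    layer_1_sparse += weights_0_1[index]

print(np.allclose(layer_1_full, layer_1_sparse))
```

The same identity is what justifies updating only the rows of weights_0_1 at those indices during backpropagation.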
Further Noise Reduction<a id='lesson_6'></a>
End of explanation
"""
# TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions
"""
Explanation: Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a small polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
It should be
End of explanation
"""
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
"""
Explanation: Run the following cell to train your network with a much larger polarity cutoff.
End of explanation
"""
mlp.test(reviews[-1000:],labels[-1000:])
"""
Explanation: And run the following cell to test its performance.
End of explanation
"""
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
    most_similar = Counter()
    for word in mlp_full.word2index.keys():
        most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
    return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
    if(word in mlp_full.word2index.keys()):
        words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
    if word in pos_neg_ratios.keys():
        vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
        if(pos_neg_ratios[word] > 0):
            pos+=1
            colors_list.append("#00ff00")
        else:
            neg+=1
            colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save", toolbar_location="above",
           title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
                                    x2=words_top_ted_tsne[:,1],
                                    names=words_to_visualize,
color=colors_list)) p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color") word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6, text_font_size="8pt", text_color="#555555", source=source, text_align='center') p.add_layout(word_labels) show(p) # green indicates positive words, black indicates negative words """ Explanation: End of Project 6. Watch the next video to see Andrew's solution, then continue on to the next lesson. Analysis: What's Going on in the Weights?<a id='lesson_7'></a> End of explanation """
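The dot-product trick used by get_most_similar_words above can be reproduced with a tiny hand-made weight matrix. The words and numbers below are invented purely for illustration; in the real network the rows of weights_0_1 come from training:

```python
import numpy as np
from collections import Counter

# toy stand-in for a trained weights_0_1 matrix (one row per vocabulary word)
word2index = {"great": 0, "excellent": 1, "terrible": 2, "awful": 3}
weights_0_1 = np.array([[ 1.0,  0.9],
                        [ 0.9,  1.0],
                        [-1.0, -0.8],
                        [-0.9, -1.0]])

def get_most_similar_words(focus):
    most_similar = Counter()
    for word, index in word2index.items():
        most_similar[word] = np.dot(weights_0_1[index],
                                    weights_0_1[word2index[focus]])
    return most_similar.most_common()

# words with similar sentiment end up with similar weight vectors,
# so "excellent" ranks right after "great" itself
print(get_most_similar_words("great"))
```

The ranking falls out of geometry alone: rows pointing in the same direction get a large positive dot product, opposite rows a large negative one.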
skaae/Recipes
examples/Using a Caffe Pretrained Network - CIFAR10.ipynb
mit
!wget https://www.dropbox.com/s/blrajqirr1p31v0/cifar10_nin.caffemodel !wget https://gist.githubusercontent.com/ebenolson/91e2cfa51fdb58782c26/raw/b015b7403d87b21c6d2e00b7ec4c0880bbeb1f7e/model.prototxt """ Explanation: Introduction This example demonstrates how to convert a network from Caffe's Model Zoo for use with Lasagne. We will be using the NIN model trained for CIFAR10. We will create a set of Lasagne layers corresponding to the Caffe model specification (prototxt), then copy the parameters from the caffemodel file into our model. Final product If you just want to try the final result, you can download the pickled weights here Converting from Caffe to Lasagne Download the required files First we download cifar10_nin.caffemodel and model.prototxt. The supplied train_val.prototxt was modified to replace the data layers with an input specification, and remove the unneeded loss/accuracy layers. End of explanation """ import caffe """ Explanation: Import Caffe To load the saved parameters, we'll need to have Caffe's Python bindings installed. 
End of explanation """ net_caffe = caffe.Net('model.prototxt', 'cifar10_nin.caffemodel', caffe.TEST) """ Explanation: Load the pretrained Caffe network End of explanation """ import lasagne from lasagne.layers import InputLayer, DropoutLayer, FlattenLayer from lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer from lasagne.layers import MaxPool2DLayer as PoolLayer from lasagne.utils import floatX """ Explanation: Import Lasagne End of explanation """ import theano from theano.tensor.signal import downsample # We need a recent theano version for this to work assert theano.__version__ >= '0.7.0.dev-512c2c16ac1c7b91d2db3849d8e7f384b524d23b' class AveragePool2DLayer(lasagne.layers.MaxPool2DLayer): def get_output_for(self, input, **kwargs): pooled = downsample.max_pool_2d(input, ds=self.pool_size, st=self.stride, ignore_border=self.ignore_border, padding=self.pad, mode='average_exc_pad', ) return pooled """ Explanation: Define a custom pooling layer To replicate Caffe's average pooling behavior, we need a custom layer derived from Lasagne's MaxPool2DLayer. 
End of explanation """ net = {} net['input'] = InputLayer((None, 3, 32, 32)) net['conv1'] = ConvLayer(net['input'], num_filters=192, filter_size=5, pad=2) net['cccp1'] = ConvLayer(net['conv1'], num_filters=160, filter_size=1) net['cccp2'] = ConvLayer(net['cccp1'], num_filters=96, filter_size=1) net['pool1'] = PoolLayer(net['cccp2'], pool_size=3, stride=2) net['drop3'] = DropoutLayer(net['pool1'], p=0.5) net['conv2'] = ConvLayer(net['drop3'], num_filters=192, filter_size=5, pad=2) net['cccp3'] = ConvLayer(net['conv2'], num_filters=192, filter_size=1) net['cccp4'] = ConvLayer(net['cccp3'], num_filters=192, filter_size=1) net['pool2'] = AveragePool2DLayer(net['cccp4'], pool_size=3, stride=2) net['drop6'] = DropoutLayer(net['pool2'], p=0.5) net['conv3'] = ConvLayer(net['drop6'], num_filters=192, filter_size=3, pad=1) net['cccp5'] = ConvLayer(net['conv3'], num_filters=192, filter_size=1) net['cccp6'] = ConvLayer(net['cccp5'], num_filters=10, filter_size=1) net['pool3'] = AveragePool2DLayer(net['cccp6'], pool_size=8) net['output'] = lasagne.layers.FlattenLayer(net['pool3']) """ Explanation: Create a Lasagne network Layer names match those in model.prototxt End of explanation """ layers_caffe = dict(zip(list(net_caffe._layer_names), net_caffe.layers)) for name, layer in net.items(): try: layer.W.set_value(layers_caffe[name].blobs[0].data) layer.b.set_value(layers_caffe[name].blobs[1].data) except AttributeError: continue """ Explanation: Copy the parameters from Caffe to Lasagne End of explanation """ import numpy as np import pickle import matplotlib.pyplot as plt %matplotlib inline """ Explanation: Trying it out Let's see if that worked. 
Import numpy and set up plotting End of explanation """ !wget https://s3.amazonaws.com/lasagne/recipes/pretrained/cifar10/cifar10.npz data = np.load('cifar10.npz') """ Explanation: Download some test data Since the network expects ZCA whitened and normalized input, we'll download a preprocessed portion (1000 examples) of the CIFAR10 test set. End of explanation """ prob = np.array(lasagne.layers.get_output(net['output'], floatX(data['whitened']), deterministic=True).eval()) predicted = np.argmax(prob, 1) """ Explanation: Make predictions on the test data End of explanation """ accuracy = np.mean(predicted == data['labels']) print(accuracy) """ Explanation: Check our accuracy We expect around 90% End of explanation """ net_caffe.blobs['data'].reshape(1000, 3, 32, 32) net_caffe.blobs['data'].data[:] = data['whitened'] prob_caffe = net_caffe.forward()['pool3'][:,:,0,0] np.allclose(prob, prob_caffe) """ Explanation: Double check Let's compare predictions against Caffe End of explanation """ def make_image(X): im = np.swapaxes(X.T, 0, 1) im = im - im.min() im = im * 1.0 / im.max() return im plt.figure(figsize=(16, 5)) for i in range(0, 10): plt.subplot(1, 10, i+1) plt.imshow(make_image(data['raw'][i]), interpolation='nearest') true = data['CLASSES'][data['labels'][i]] pred = data['CLASSES'][predicted[i]] color = 'green' if true == pred else 'red' plt.text(0, 0, true, color='black', bbox=dict(facecolor='white', alpha=1)) plt.text(0, 32, pred, color=color, bbox=dict(facecolor='white', alpha=1)) plt.axis('off') """ Explanation: Graph some images and predictions End of explanation """ import pickle values = lasagne.layers.get_all_param_values(net['output']) pickle.dump(values, open('model.pkl', 'w')) """ Explanation: Save our model Let's save the weights in pickle format, so we don't need Caffe next time End of explanation """
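For completeness, loading the weights back later is just the reverse pickle call; with Lasagne available you would rebuild the same net dictionary and hand the list of arrays to lasagne.layers.set_all_param_values. Below is a dependency-free sketch of the round trip, using an in-memory buffer and dummy arrays as stand-ins for the real parameter list:

```python
import io
import pickle

import numpy as np

# stand-in for lasagne.layers.get_all_param_values(net['output']):
# a plain list of numpy arrays, one per parameter tensor
values = [np.arange(6, dtype="float32").reshape(2, 3),
          np.zeros(3, dtype="float32")]

buf = io.BytesIO()
pickle.dump(values, buf)

buf.seek(0)
restored = pickle.load(buf)

# with the real network you would now call:
# lasagne.layers.set_all_param_values(net['output'], restored)
print(all(np.array_equal(a, b) for a, b in zip(values, restored)))
```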
EmuKit/emukit
notebooks/Emukit-tutorial-sensitivity-montecarlo.ipynb
apache-2.0
# General imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
from matplotlib import cm

## Figures config
colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
LEGEND_SIZE = 15
TITLE_SIZE = 25
AXIS_SIZE = 15
"""
Explanation: Introduction to global sensitivity analysis with Emukit
End of explanation
"""
from emukit.test_functions.sensitivity import Ishigami ### change this one for the one in the library

### --- Load the Ishigami function
ishigami = Ishigami(a=5, b=0.1)
target_simulator = ishigami.fidelity1

### --- Define the input space in which the simulator is defined
variable_domain = (-np.pi,np.pi)
x_grid = np.linspace(*variable_domain,100)
"""
Explanation: Overview
Sensitivity analysis is a statistical technique widely used to test the reliability of real systems. Imagine a simulator of taxis picking up customers in a city like the one shown in the <a href='https://github.com/amzn/emukit-playground'>Emukit playground</a>. The profit of the taxi company depends on factors like the number of taxis on the road and the price per trip. In this example, a global sensitivity analysis of the simulator could be useful to decompose the variance of the profit in a way that can be assigned to the input variables of the simulator.
There are different ways of doing a sensitivity analysis of the variables of a simulator. In this notebook we will start with an approach based on Monte Carlo sampling that is useful when evaluating the simulator is cheap. If evaluating the simulator is expensive, emulators can then be used to speed up computations. We will show this in the last part of the notebook. Next, we start with a few formal definitions and literature review so we can understand the basics of Sensitivity Analysis and how it can be performed with Emukit.
Navigation Global sensitivity analysis with Sobol indexes and decomposition of variance First order Sobol indices using Monte Carlo Total effects using Monte Carlo Computing the sensitivity coefficients using the output of a model Conclusions References 1. Global sensitivity analysis with Sobol indexes and decomposition of variance Any simulator can be viewed as a function $$y=f(\textbf{X})$$ where $\textbf{X}$ is a vector of $d$ uncertain model inputs $X_1,\dots,X_d$, and $Y$ is some univariate model output. We assume that $f$ is a square integrable function and that the inputs are statistically independent and uniformly distributed within the hypercube $X_i \in [0,1]$ for $i=1,2,...,d$, although the bounds can be generalized. The so-called Sobol decomposition of $f$ allows us to write it as $$Y = f_0 + \sum_{i=1}^d f_i(X_i) + \sum_{i<j}^{d} f_{ij}(X_i,X_j) + \cdots + f_{1,2,\dots,d}(X_1,X_2,\dots,X_d)$$ where $f_0$ is a constant term, $f_i$ is a function of $X_i$, $f_{ij}$ a function of $X_i$ and $X_j$, etc. A condition of this decomposition is that, $$ \int_0^1 f_{i_1 i_2 \dots i_d}(X_{i_1},X_{i_2},\dots,X_{i_d}) dX_{k}=0, \text{ for } k = i_1,...,i_d. $$ This means that all the terms in the decomposition are orthogonal, which can be written in terms of conditional expected values as $$ f_0 = E(Y) $$ $$ f_i(X_i) = E(Y|X_i) - f_0 $$ $$ f_{ij}(X_i,X_j) = E(Y|X_i,X_j) - f_0 - f_i - f_j $$ with all the expectations computed over Y. Each component $f_i$ (main effects) can be seen as the effect on $Y$ of varying $X_i$ alone. The same interpretation follows for $f_{ij}$ which accounts for the (extra) variation of changing $X_i$ and $X_j$ simultaneously (second-order interaction). Higher-order terms have analogous definitions. 
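The definitions above are easy to check numerically. The following is a brute-force Monte Carlo sketch (not how Emukit computes anything internally) for the Ishigami function loaded earlier with $a=5$ and $b=0.1$, for which integrating out the uniform inputs gives $f_0 = a/2$ and $f_1(c) = \sin(c)\,(1 + b\pi^4/5)$:

```python
import numpy as np

rng = np.random.RandomState(0)
a, b = 5.0, 0.1

def ishigami_f(x1, x2, x3):
    return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

n = 200000
x = rng.uniform(-np.pi, np.pi, size=(n, 3))

# f0 = E(Y); analytically a / 2 for this function
f0 = ishigami_f(x[:, 0], x[:, 1], x[:, 2]).mean()

# f1(c) = E(Y | X1 = c) - f0, estimated by freezing x1 and sampling the rest;
# analytically this equals sin(c) * (1 + b * pi**4 / 5)
c = np.pi / 2
f1_mc = ishigami_f(c, x[:, 1], x[:, 2]).mean() - f0
f1_true = np.sin(c) * (1 + b * np.pi ** 4 / 5)

print(f0, f1_mc, f1_true)
```

Both estimates land within Monte Carlo error of the analytic values, which is exactly the orthogonality property being exploited.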
The key step to decompose the variation of $Y$ is to notice that
$$Var(Y) = E(Y^2) - E(Y)^2 = \int_0^1 f^2(\mathbf{X}) d\mathbf{X} - f_0^2$$
and that this variance can be decomposed as
$$ Var(Y) = \sum_{i=1}^d \int_0^1 f_i^2(X_i)\, dX_i + \sum_{i<j}^{d} \int_0^1 f_{ij}^2(X_i,X_j)\, dX_i\, dX_j + \cdots + \int_0^1 f_{1,2,\dots,d}^2(X_1,X_2,\dots,X_d)\, d\mathbf{X},$$
where each integrand is squared because the components are mutually orthogonal. This expression leads to the decomposition of the variance of $Y$ as
$$ \operatorname{Var}(Y) = \sum_{i=1}^d V_i + \sum_{i<j}^{d} V_{ij} + \cdots + V_{12 \dots d}$$
where
$$ V_{i} = \operatorname{Var}_{X_i} \left( E_{\textbf{X}_{-i}} (Y \mid X_{i}) \right),$$
$$ V_{ij} = \operatorname{Var}_{X_{ij}} \left( E_{\textbf{X}_{-ij}} \left( Y \mid X_i, X_j\right)\right) - V_{i} - V_{j}$$
and so on. The $X_{-i}$ notation is used to indicate the set of all variables but the $i$-th.
Note: The previous decomposition is important because it shows how the variance in the output $Y$ can be attributed to each input or interaction separately.
Example: the Ishigami function
We illustrate the exact calculation of the Sobol indexes with the three-dimensional Ishigami function of Ishigami & Homma, 1990. This is a well-known example for uncertainty and sensitivity analysis methods because of its strong nonlinearity and peculiar dependence on $x_3$. More details of this function can be found in Sobol' & Levitan (1999).
Mathematically, the form of the Ishigami function is
$$f(\textbf{x}) = \sin(x_1) + a \sin^2(x_2) + bx_3^4 \sin(x_1). $$
In this notebook we will set the parameters to be $a = 5$ and $b=0.1$. The input variables are sampled randomly, $x_i \sim Uniform(-\pi,\pi)$.
Next we create the function object and visualize its shape marginally for each one of its three inputs.
End of explanation
"""
fig = plt.figure(figsize=(20, 5))
plt.clf()
plt.subplot(1,2,1)
plt.plot(x_grid,ishigami.f1(x_grid),'-r')
plt.xlabel('X1', fontsize=AXIS_SIZE)
plt.ylabel('f1', fontsize=AXIS_SIZE)
plt.subplot(1,2,2)
plt.plot(x_grid,ishigami.f2(x_grid),'-r')
plt.xlabel('X2', fontsize=AXIS_SIZE)
plt.ylabel('f2', fontsize=AXIS_SIZE)
plt.suptitle('Non-zero Sobol components of the Ishigami function', fontsize=TITLE_SIZE)
X, Y = np.meshgrid(x_grid, x_grid)
Z = ishigami.f13(np.array([x_grid,x_grid]).T)[:,None]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(20, 5))
ax = fig.add_subplot(111, projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0, antialiased=False)
ax.set_xlabel('X1', fontsize=AXIS_SIZE)
ax.set_ylabel('X3', fontsize=AXIS_SIZE)
ax.set_zlabel('f13', fontsize=AXIS_SIZE);
"""
Explanation: Before moving to any further analysis, we first plot the non-zero components of $f(\textbf{X})$. These components are
$$f_1(x_1) = \sin(x_1)$$
$$f_2(x_2) = a \sin^2 (x_2)$$
$$f_{13}(x_1,x_3) = bx_3^4 \sin(x_1) $$
End of explanation
"""
ishigami.variance_total
"""
Explanation: The total variance $Var(Y)$ in this example is
End of explanation
"""
ishigami.variance_x1, ishigami.variance_x2, ishigami.variance_x13
"""
Explanation: which is the sum of the variances $V_1$, $V_2$ and $V_{13}$
End of explanation
"""
ishigami.variance_x1 + ishigami.variance_x2 + ishigami.variance_x13
"""
Explanation: as we can easily show:
End of explanation
"""
ishigami.main_effects
"""
Explanation: 2. First order Sobol indices using Monte Carlo
The first order Sobol indexes are a measure of "first order sensitivity" of each input variable. They account for the proportion of variance of $Y$ explained by changing each variable alone while marginalizing over the rest.
The Sobol index of the $i$-th variable is simply computed as
$$S_i = \frac{V_i}{\operatorname{Var}(Y)}.$$
This value is standardized using the total variance, so it is possible to account for the fractional contribution of each variable to the total variance of the output. The Sobol indexes for higher order interactions $S_{ij}$ are computed similarly. Note that the sum of all Sobol indexes equals one. In most cases we are interested in the first order indexes. In the Ishigami function their values are:
End of explanation
"""
from emukit.core import ContinuousParameter, ParameterSpace

target_simulator = ishigami.fidelity1
variable_domain = (-np.pi,np.pi)

space = ParameterSpace([ContinuousParameter('x1', variable_domain[0], variable_domain[1]),
                        ContinuousParameter('x2', variable_domain[0], variable_domain[1]),
                        ContinuousParameter('x3', variable_domain[0], variable_domain[1])])
"""
Explanation: The most standard way of computing the Sobol indexes is using Monte Carlo. For the interested reader, the paper Sobol, 2001 contains the details of how they can be computed. With Emukit, the first-order Sobol indexes can be easily computed. We first need to define the space in which the target simulator is analyzed.
End of explanation
"""
from emukit.sensitivity.monte_carlo import ModelFreeMonteCarloSensitivity

np.random.seed(10)  # for reproducibility

num_mc = 10000  # Number of MC samples

sensitivity_ishigami = ModelFreeMonteCarloSensitivity(target_simulator, space)
main_effects, total_effects, _ = sensitivity_ishigami.compute_effects(num_monte_carlo_points = num_mc)
main_effects
"""
Explanation: Computing the indexes is as easy as doing:
End of explanation
"""
import pandas as pd

d = {'Sobol True': ishigami.main_effects,
     'Monte Carlo': main_effects}

pd.DataFrame(d).plot(kind='bar',figsize=(12, 5))
plt.title('First-order Sobol indexes - Ishigami', fontsize=TITLE_SIZE)
plt.ylabel('% of explained output variance',fontsize=AXIS_SIZE);
"""
Explanation: We compare the true effects with the Monte Carlo effects in a bar-plot. The total effects are discussed later.
End of explanation
"""
ishigami.total_effects
"""
Explanation: 3. Total Effects using Monte Carlo
Computing high-order sensitivity indexes can be computationally very demanding in high-dimensional scenarios, which makes measuring the total influence of each variable on the variance of the output infeasible. To solve this issue, the total indexes are used, which account for the contribution to the output variance of $X_i$ including all variance caused by the variable alone and all its interactions of any order. The total effect for $X_i$ is given by:
$$ S_{Ti} = \frac{E_{\textbf{X}_{-i}} \left(\operatorname{Var}_{X_i} (Y \mid \textbf{X}_{-i}) \right)}{\operatorname{Var}(Y)} = 1 - \frac{\operatorname{Var}_{\textbf{X}_{-i}} \left(E_{X_i} (Y \mid \textbf{X}_{-i}) \right)}{\operatorname{Var}(Y)}$$
Note that the sum of $S_{Ti}$ is not necessarily one in this case unless the model is additive.
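One standard way to estimate the total effects by plain Monte Carlo is the pick-freeze scheme with Jansen's estimator, where a second sample matrix $B$ provides the "frozen" columns: $S_{Ti} \approx \frac{1}{2N \operatorname{Var}(Y)}\sum_j \left(f(A)_j - f(A_B^{(i)})_j\right)^2$, with $A_B^{(i)}$ being $A$ with its $i$-th column swapped for that of $B$. The sketch below is a self-contained illustration on the Ishigami function, not necessarily the exact estimator that Emukit uses internally:

```python
import numpy as np

rng = np.random.RandomState(0)
a, b = 5.0, 0.1

def ishigami_f(x):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

n, d = 100000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))
f_A = ishigami_f(A)
var_y = np.concatenate([f_A, ishigami_f(B)]).var()

total_effects = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # A with the i-th column taken from B
    total_effects[i] = 0.5 * np.mean((f_A - ishigami_f(AB)) ** 2) / var_y

# analytically roughly (0.71, 0.29, 0.31) for a=5, b=0.1
print(total_effects)
```

Note that the three estimates sum to more than one here, which is the non-additivity just discussed.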
In the Ishigami example the value of the total effects is
End of explanation
"""
d = {'Sobol True': ishigami.total_effects,
     'Monte Carlo': total_effects}

pd.DataFrame(d).plot(kind='bar',figsize=(12, 5))
plt.title('Total effects - Ishigami', fontsize=TITLE_SIZE)
plt.ylabel('Effects value',fontsize=AXIS_SIZE)
"""
Explanation: As in the previous example, the total effects can be computed with Monte Carlo. In the next plot we show the comparison with the true total effects.
End of explanation
"""
from emukit.core.initial_designs import RandomDesign

design = RandomDesign(space)
X = design.get_samples(500)
Y = ishigami.fidelity1(X)[:,None]
"""
Explanation: 4. Computing the sensitivity coefficients using the output of a model
In the example used above the Ishigami function is very cheap to evaluate. However, in most real scenarios the functions of interest are expensive and we need to limit ourselves to a small number of evaluations. Using Monte Carlo methods is infeasible in these scenarios as a large number of samples are typically required to provide good estimates of the Sobol coefficients.
An alternative in these cases is to use a Gaussian process emulator of the function of interest trained on a few inputs and outputs. If the model is properly trained, its mean prediction, which is cheap to evaluate, can be used to compute the Monte Carlo estimates of the Sobol coefficients.
Let's see how we can do this in Emukit. We start by generating 500 samples in the input domain. Note that this is just 5% of the number of samples that we used to compute the Sobol coefficients using Monte Carlo.
End of explanation
"""
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.sensitivity.monte_carlo import MonteCarloSensitivity

model_gpy = GPRegression(X,Y)
model_emukit = GPyModelWrapper(model_gpy)
model_emukit.optimize()
"""
Explanation: Now, we fit a standard Gaussian process to the samples and we wrap it as an Emukit model.
End of explanation
"""
sensitivity_ishigami_gpbased = MonteCarloSensitivity(model = model_emukit, input_domain = space)
main_effects_gp, total_effects_gp, _ = sensitivity_ishigami_gpbased.compute_effects(num_monte_carlo_points = num_mc)

main_effects_gp = {ivar: main_effects_gp[ivar][0] for ivar in main_effects_gp}

d = {'Sobol True': ishigami.main_effects,
     'Monte Carlo': main_effects,
     'GP Monte Carlo':main_effects_gp}

pd.DataFrame(d).plot(kind='bar',figsize=(12, 5))
plt.title('First-order Sobol indexes - Ishigami', fontsize=TITLE_SIZE)
plt.ylabel('% of explained output variance',fontsize=AXIS_SIZE);

total_effects_gp = {ivar: total_effects_gp[ivar][0] for ivar in total_effects_gp}

d = {'Sobol True': ishigami.total_effects,
     'Monte Carlo': total_effects,
     'GP Monte Carlo':total_effects_gp}

pd.DataFrame(d).plot(kind='bar',figsize=(12, 5))
plt.title('Total effects - Ishigami', fontsize=TITLE_SIZE)
plt.ylabel('% of explained output variance',fontsize=AXIS_SIZE);
"""
Explanation: The final step is to compute the coefficients using the class MonteCarloSensitivity, which directly calls the model and uses its predictive mean to compute the Monte Carlo estimates of the Sobol indices. We plot the true estimates, those computed using 10000 direct evaluations of the objective using Monte Carlo and those computed using a Gaussian process model trained on 500 evaluations.
End of explanation
"""
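The workflow just described (a handful of expensive simulator runs, a cheap emulator fitted to them, then Monte Carlo pushed through the emulator's mean prediction) can be sketched without GPy at all. In the sketch below an ordinary least-squares fit on hand-picked basis functions plays the role of the GP predictive mean; that basis is an assumption made purely to keep the example dependency-free, and a real GP would not need it:

```python
import numpy as np

rng = np.random.RandomState(0)
a, b = 5.0, 0.1

def simulator(x):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

def features(x):
    # hand-picked basis, used only so the sketch needs nothing beyond numpy;
    # in the notebook a GP's predictive mean plays this role instead
    return np.column_stack([np.ones(len(x)), np.sin(x[:, 0]),
                            np.sin(x[:, 1]) ** 2,
                            x[:, 2] ** 4 * np.sin(x[:, 0])])

# 1) a few "expensive" simulator runs to train the surrogate
X_train = rng.uniform(-np.pi, np.pi, size=(100, 3))
coef, *_ = np.linalg.lstsq(features(X_train), simulator(X_train), rcond=None)

def predict(x):
    # the cheap surrogate mean
    return features(x).dot(coef)

# 2) plenty of cheap Monte Carlo samples, evaluated through the surrogate
n = 50000
A = rng.uniform(-np.pi, np.pi, size=(n, 3))
B = rng.uniform(-np.pi, np.pi, size=(n, 3))
f_A, f_B = predict(A), predict(B)
var_y = np.concatenate([f_A, f_B]).var()

main_effects = []
for i in range(3):
    AB = A.copy()
    AB[:, i] = B[:, i]
    # Saltelli-style first-order estimator
    main_effects.append(np.mean(f_B * (predict(AB) - f_A)) / var_y)

print(main_effects)
```

With 100 training points the surrogate here is essentially exact, so the Monte Carlo estimates pushed through it land close to the analytic first-order indexes.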
zerothi/sisl
docs/visualization/viz_module/showcase/GridPlot.ipynb
mpl-2.0
import sisl
import sisl.viz
import numpy as np

# This is just for convenience to retrieve files
siesta_files = sisl._environ.get_environ_variable("SISL_FILES_TESTS") / "sisl" / "io" / "siesta"
"""
Explanation: GridPlot
The GridPlot class will help you very easily display any Grid.
<div class="alert alert-info">
Note
Dedicated software like VESTA might be faster in rendering big 3D grids, but the strength of this plot class lies in its great **flexibility**, **tunability** (change settings as you wish to customize the display from python) **and convenience** (no need to open another software, you can see the grid directly in your notebook). Also, you can **combine your grid plots** however you want **with the other plot classes** in the framework.
</div>
End of explanation
"""
rho_file = siesta_files / "SrTiO3.RHO"

# From a sisl grid, you can do grid.plot()
grid = sisl.get_sile(rho_file).read_grid()
plot = grid.plot()

# All siles that implement read_grid can also be directly plotted
plot = sisl.get_sile(rho_file).plot()
"""
Explanation: The first thing that we need to do to plot a grid is to have a grid, so let's get one! In this case, we are going to use an electronic density grid from a .RHO file in SIESTA. There are multiple ways of getting the plot:
End of explanation
"""
plot
"""
Explanation: Anyway, you will end up having the grid under plot.grid. Let's see what we've got:
End of explanation
"""
plot.update_settings(axes="xy")
"""
Explanation: Well, this doesn't look much like a grid, does it? By default, GridPlot only shows the third axis of the grid, while reducing along the others. This is because 3D representations can be quite heavy depending on the computer you are on, and it's better to be safe :)
Plotting in 3D, 2D and 1D
Much like GeometryPlot, you can select the axes of the grid that you want to display. In this case, the other axes will be reduced.
For example, if we want to see the xy plane of the electronic density, we can do:
End of explanation
"""
plot.update_settings(zsmooth="best")
"""
Explanation: Note that you can make 2d representations look smoother without having to make the grid finer by using the zsmooth setting, which is part of plotly's go.Heatmap trace options.
End of explanation
"""
plot.update_settings(axes="xyz")
"""
Explanation: 3d representations of a grid will display isosurfaces:
End of explanation
"""
plot.update_settings(axes="xy")
"""
Explanation: Specifying the axes
You may want to see your grid in different coordinate systems. The most common choice is to display the cartesian coordinates. You indicate that you want cartesian coordinates by passing {"x", "y", "z"}. You can pass them as a list or as a multicharacter string:
End of explanation
"""
plot.update_settings(axes="yx")
"""
Explanation: The order of the axes is relevant, although in this case we will not see much difference:
End of explanation
"""
plot.update_settings(axes="ab")
"""
Explanation: Sometimes, to inspect the grid you may want to see the values in fractional coordinates. In that case, you should pass {"a", "b", "c", "0", "1", "2", 0, 1, 2}:
End of explanation
"""
plot.get_param("reduce_method").options
"""
Explanation: To summarize the different possibilities:
{"x", "y", "z"}: The cartesian coordinates are displayed.
{"a", "b", "c"}: The fractional coordinates are displayed. Same for {0,1,2}.
<div class="alert alert-warning">
Some non-obvious behavior
**You can not mix cartesian axes with lattice vectors**. Also, for now, the **3D representation only displays cartesian coordinates**.
</div>
Dimensionality reducing method
As we mentioned, the dimensions that are not displayed in the plot are reduced. The setting that controls how this process is done is reduce_method.
Let's see what the options are:
End of explanation
"""
plot.get_param("reduce_method").options
"""
Explanation: We can test different reducing methods in a 1D representation to see the effects:
End of explanation
"""
plot.update_settings(axes="z", reduce_method="average")

plot.update_settings(reduce_method="sum")
"""
Explanation: You can see that the values obtained are very different. Your analysis might be dependent on the method used to reduce the dimensionality, be careful!
End of explanation
"""
plot = plot.update_settings(axes="xyz")
"""
Explanation: Isosurfaces and contours
There's one parameter that controls both the display of isosurfaces (in 3d) and contours (in 2d): isos.
isos is a list of dicts where each dict asks for an isovalue. See the help message:
End of explanation
"""
print(plot.get_param("isos").help)
"""
Explanation: If no isos is provided, 3d representations plot the 0.3 and 0.7 (frac) isosurfaces. This is what you can see in the 3d plot that we displayed above.
Let's play a bit with isos. The first thing I will do is change the opacity of the outer isosurface, since there's no way to see the inner one right now (although you can toggle it by clicking on the legend, courtesy of plotly :)).
End of explanation
"""
plot.update_settings(isos=[{"frac": 0.3, "opacity": 0.4}, {"frac": 0.7}])
"""
Explanation: Now we can see all the very interesting features of the inner isosurface! :)
Let's now see how contours look in 2d:
End of explanation
"""
plot.update_settings(axes="xy")
"""
Explanation: Not bad.
Volumetric display
Using isosurfaces in the 3D representation, one can achieve a sense of volumetric data. We can do so by asking for multiple isosurfaces and setting the opacity of each one properly. You can play with it to achieve exactly the display you wish.
For example, we can represent isosurfaces at increasing values, with the higher values being more opaque:
End of explanation
"""
plot.update_settings(
    axes="xyz",
    isos=[{"frac":frac, "opacity": frac/2, "color": "green"} for frac in np.linspace(0.1, 0.8, 20)],
)
"""
Explanation: The more surfaces you add, the more sense of depth you'll achieve. But of course, it will be more expensive to render.
<div class="alert alert-info">
Note
Playing with colors (e.g. setting a colorscale) and not just opacities might give you even a better sense of depth. A way of automatically handling this for you might be introduced in the future.
</div>
End of explanation
"""
plot = plot.update_settings(axes="xy", isos=[])
"""
Explanation: Colorscales
You might have already seen that 2d representations use a colorscale. You can change it with the colorscale setting.
End of explanation
"""
plot.update_settings(colorscale="temps")
"""
Explanation: And you can control its range using crange (min and max bounds of the colorscale) and cmid (middle value of the colorscale, will take preference over crange). In this way, you are able to saturate the display as you wish.
For example, in this case, if we bring the lower bound up enough, we will be able to hide the Sr atoms that are in the corners. But be careful not to make it so high that you hide the oxygens as well!
End of explanation
"""
plot.update_settings(crange=[50, 200])
"""
Explanation: Using only part of the grid
As we can see in the 3d representation of the electronic density, there are two atoms contributing to the electronic density in the center, that's why we get such a big difference in the 2d heatmap. We can use the x_range, y_range and z_range settings to take into account only a certain part of the grid. For example, in this case we want only the grid where z is in the [1,3] interval so that we remove the influence of the oxygen atom that is on top.
End of explanation
"""
plot.update_settings(z_range=[1,3])
"""
Explanation: Now we are seeing the real difference between the Ti atom (in the center) and the O atoms (at the edges).
Applying transformations
One can apply as many transformations as one wishes to the grid by using the transforms setting. It should be a list where each item can be a string (the name of the function, e.g. "numpy.sin") or a function. Each transform is passed to Grid.apply, see the method for more info.
End of explanation
"""
plot.update_settings(transforms=[abs, "numpy.sin"], crange=None)
"""
Explanation: Notice that the order of the transformations matters:
End of explanation
"""
plot.update_settings(transforms=["sin", abs], crange=None)
# If a string is provided with no module, it will be interpreted as a numpy function
# Therefore "sin" == "numpy.sin" and abs != "abs" == "numpy.abs"
"""
Explanation: Visualizing supercells
Visualizing grid supercells is as easy as using the nsc setting. If we want to repeat our grid visualization along the second axis 3 times, we just do:
End of explanation
"""
plot.update_settings(nsc=[1,3,1])
"""
Explanation: Performing scans
We can use the scan method to create a scan of the grid along a given direction.
End of explanation
"""
plot.scan("z", num=5)
"""
Explanation: Notice how the scan respected our z_range from 1 to 3. If we want the rest of the grid, we can set z_range back to None before creating the scan, or we can indicate the bounds of the scan.
End of explanation """ scan = plot.scan("z", mode="as_is", num=15) scan """ Explanation: This is the "moving_slice" scan, but we can also display the scan as an animation: End of explanation """ scan.update_layout(xaxis_scaleanchor="y", xaxis_scaleratio=1) """ Explanation: If we are using the plotly backend, the axes of the animation will not be correctly scaled, but we can easily solve this: End of explanation """ plot.update_settings(axes="z").scan("y",mode="as_is", num=15) """ Explanation: This mode is called "as_is" because it creates an animation of the current representation. That is, it can scan through 1d, 2d and 3d representations and it keeps displaying the supercell. Here's a scan of 1d data: End of explanation """ thumbnail_plot = plot.update_settings(axes="yx", z_range=[1.7, 1.9]) if thumbnail_plot: thumbnail_plot.show("png") """ Explanation: We hope you enjoyed what you learned! This next cell is just to create the thumbnail for the notebook in the docs End of explanation """
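As a small appendix to the reduce_method discussion above, the gap between the "average" and "sum" reductions can be reproduced with plain numpy (a minimal sketch on a synthetic array; sisl itself is not involved here):

```python
import numpy as np

# A synthetic 3D array standing in for the values of a grid
values = np.arange(24, dtype=float).reshape(2, 3, 4)

# Keep only the third axis, reducing the first two,
# as GridPlot does for a 1D representation with axes="z"
averaged = values.mean(axis=(0, 1))  # reduce_method="average"
summed = values.sum(axis=(0, 1))     # reduce_method="sum"

# The two profiles differ by a constant factor: the number of reduced
# points (2 * 3 = 6 here), which explains the very different scales
# obtained with the two methods
print(summed / averaged)
```

This is why the choice of reduce_method changes the scale, but not the shape, of the reduced profile.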
arogozhnikov/einops
docs/1-einops-basics.ipynb
mit
# Examples are given for numpy. This code also setups ipython/jupyter # so that numpy arrays in the output are displayed as images import numpy from utils import display_np_arrays_as_images display_np_arrays_as_images() """ Explanation: Einops tutorial, part 1: basics <!-- <img src='http://arogozhnikov.github.io/images/einops/einops_logo_350x350.png' height="80" /> --> Welcome to einops-land! We don't write python y = x.transpose(0, 2, 3, 1) We write comprehensible code python y = rearrange(x, 'b c h w -&gt; b h w c') einops supports widely used tensor packages (such as numpy, pytorch, chainer, gluon, tensorflow), and extends them. What's in this tutorial? fundamentals: reordering, composition and decomposition of axes operations: rearrange, reduce, repeat how much you can do with a single operation! Preparations End of explanation """ ims = numpy.load('./resources/test_images.npy', allow_pickle=False) # There are 6 images of shape 96x96 with 3 color channels packed into tensor print(ims.shape, ims.dtype) # display the first image (whole 4d tensor can't be rendered) ims[0] # second image in a batch ims[1] # we'll use three operations from einops import rearrange, reduce, repeat # rearrange, as its name suggests, rearranges elements # below we swapped height and width. # In other words, transposed first two axes (dimensions) rearrange(ims[0], 'h w c -> w h c') """ Explanation: Load a batch of images to play with End of explanation """ # einops allows seamlessly composing batch and height to a new height dimension # We just rendered all images by collapsing to 3d tensor! rearrange(ims, 'b h w c -> (b h) w c') # or compose a new dimension of batch and width rearrange(ims, 'b h w c -> h (b w) c') # resulting dimensions are computed very simply # length of newly composed axis is a product of components # [6, 96, 96, 3] -> [96, (6 * 96), 3] rearrange(ims, 'b h w c -> h (b w) c').shape # we can compose more than two axes. 
# let's flatten 4d array into 1d, resulting array has as many elements as the original rearrange(ims, 'b h w c -> (b h w c)').shape """ Explanation: Composition of axes transposition is very common and useful, but let's move to other capabilities provided by einops End of explanation """ # decomposition is the inverse process - represent an axis as a combination of new axes # several decompositions possible, so b1=2 is to decompose 6 to b1=2 and b2=3 rearrange(ims, '(b1 b2) h w c -> b1 b2 h w c ', b1=2).shape # finally, combine composition and decomposition: rearrange(ims, '(b1 b2) h w c -> (b1 h) (b2 w) c ', b1=2) # slightly different composition: b1 is merged with width, b2 with height # ... so letters are ordered by w then by h rearrange(ims, '(b1 b2) h w c -> (b2 h) (b1 w) c ', b1=2) # move part of width dimension to height. # we should call this width-to-height as image width shrunk by 2 and height doubled. # but all pixels are the same! # Can you write reverse operation (height-to-width)? rearrange(ims, 'b h (w w2) c -> (h w2) (b w) c', w2=2) """ Explanation: Decomposition of axis End of explanation """ # compare with the next example rearrange(ims, 'b h w c -> h (b w) c') # order of axes in composition is different # rule is just as for digits in the number: leftmost digit is the most significant, # while neighboring numbers differ in the rightmost axis. # you can also think of this as lexicographic sort rearrange(ims, 'b h w c -> h (w b) c') # what if b1 and b2 are reordered before composing to width? 
rearrange(ims, '(b1 b2) h w c -> h (b1 b2 w) c ', b1=2) # produces 'einops'

rearrange(ims, '(b1 b2) h w c -> h (b2 b1 w) c ', b1=2) # produces 'eoipns'
"""
Explanation: Order of axes matters
End of explanation
"""
# average over batch
reduce(ims, 'b h w c -> h w c', 'mean')

# the previous is identical to familiar:
ims.mean(axis=0)
# but is so much more readable

# Example of reducing of several axes
# besides mean, there are also min, max, sum, prod
reduce(ims, 'b h w c -> h w', 'min')

# this is mean-pooling with 2x2 kernel
# image is split into 2x2 patches, each patch is averaged
reduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'mean', h2=2, w2=2)

# max-pooling is similar
# result is not as smooth as for mean-pooling
reduce(ims, 'b (h h2) (w w2) c -> h (b w) c', 'max', h2=2, w2=2)

# yet another example. Can you compute result shape?
reduce(ims, '(b1 b2) h w c -> (b2 h) (b1 w)', 'mean', b1=2)
"""
Explanation: Meet einops.reduce
In einops-land you don't need to guess what happened
python
x.mean(-1)
Because you write what the operation does
python
reduce(x, 'b h w c -> b h w', 'mean')
if axis is not present in the output — you guessed it — axis was reduced.
End of explanation
"""
# rearrange can also take care of lists of arrays with the same shape
x = list(ims)
print(type(x), 'with', len(x), 'tensors of shape', x[0].shape)

# that's how we can stack inputs
# "list axis" becomes first ("b" in this case), and we left it there
rearrange(x, 'b h w c -> b h w c').shape

# but new axis can appear in the other place:
rearrange(x, 'b h w c -> h w c b').shape

# that's equivalent to numpy stacking, but written more explicitly
numpy.array_equal(rearrange(x, 'b h w c -> h w c b'), numpy.stack(x, axis=3))

# ...
or we can concatenate along axes
rearrange(x, 'b h w c -> h (b w) c').shape

# which is equivalent to concatenation
numpy.array_equal(rearrange(x, 'b h w c -> h (b w) c'), numpy.concatenate(x, axis=1))
"""
Explanation: Stack and concatenate
End of explanation
"""
x = rearrange(ims, 'b h w c -> b 1 h w 1 c') # functionality of numpy.expand_dims
print(x.shape)
print(rearrange(x, 'b 1 h w 1 c -> b h w c').shape) # functionality of numpy.squeeze

# compute max in each image individually, then show a difference
x = reduce(ims, 'b h w c -> b () () c', 'max') - ims
rearrange(x, 'b h w c -> h (b w) c')
"""
Explanation: Addition or removal of axes
You can write 1 to create a new axis of length 1. Similarly, you can remove such an axis.
There is also a synonym () that you can use. That's a composition of zero axes and it also has a unit length.
End of explanation
"""
# repeat along a new axis. New axis can be placed anywhere
repeat(ims[0], 'h w c -> h new_axis w c', new_axis=5).shape

# shortcut
repeat(ims[0], 'h w c -> h 5 w c').shape

# repeat along w (existing axis)
repeat(ims[0], 'h w c -> h (repeat w) c', repeat=3)

# repeat along two existing axes
repeat(ims[0], 'h w c -> (2 h) (2 w) c')

# order of axes matters as usual - you can repeat each element (pixel) 3 times
# by changing order in parenthesis
repeat(ims[0], 'h w c -> h (w repeat) c', repeat=3)
"""
Explanation: Repeating elements
The third operation we introduce is repeat
End of explanation
"""
repeated = repeat(ims, 'b h w c -> b h new_axis w c', new_axis=2)
reduced = reduce(repeated, 'b h new_axis w c -> b h w c', 'min')
assert numpy.array_equal(ims, reduced)
"""
Explanation: Note: the repeat operation covers functionality identical to numpy.repeat and numpy.tile, and actually more than that.
Reduce ⇆ repeat
reduce and repeat are like opposites of each other: the first one reduces the number of elements, the second one increases it.
In the following example each image is repeated first, then we reduce over the new axis to get back the original tensor.
Notice that the operation patterns are the "reverse" of each other
End of explanation
"""
# interweaving pixels of different pictures
# all letters are observable
rearrange(ims, '(b1 b2) h w c -> (h b1) (w b2) c ', b1=2)

# interweaving along vertical for couples of images
rearrange(ims, '(b1 b2) h w c -> (h b1) (b2 w) c', b1=2)

# interweaving lines for couples of images
# exercise: achieve the same result without einops in your favourite framework
reduce(ims, '(b1 b2) h w c -> h (b2 w) c', 'max', b1=2)

# color can also be composed into a dimension
# ... while image is downsampled
reduce(ims, 'b (h 2) (w 2) c -> (c h) (b w)', 'mean')

# disproportionate resize
reduce(ims, 'b (h 4) (w 3) c -> (h) (b w)', 'mean')

# split each image in two halves, compute mean of the two
reduce(ims, 'b (h1 h2) w c -> h2 (b w)', 'mean', h1=2)

# split in small patches and transpose each patch
rearrange(ims, 'b (h1 h2) (w1 w2) c -> (h1 w2) (b w1 h2) c', h2=8, w2=8)

# stop me someone!
rearrange(ims, 'b (h1 h2 h3) (w1 w2 w3) c -> (h1 w2 h3) (b w1 h2 w3) c', h2=2, w2=2, w3=2, h3=2)

rearrange(ims, '(b1 b2) (h1 h2) (w1 w2) c -> (h1 b1 h2) (w1 b2 w2) c', h1=3, w1=3, b2=3)

# patterns can be arbitrarily complicated
reduce(ims, '(b1 b2) (h1 h2 h3) (w1 w2 w3) c -> (h1 w1 h3) (b1 w2 h2 w3 b2) c', 'mean', h2=2, w1=2, w3=2, h3=2, b2=2)

# subtract background in each image individually and normalize
# pay attention to () - this is a composition of 0 axes, a dummy axis with 1 element.
im2 = reduce(ims, 'b h w c -> b () () c', 'max') - ims im2 /= reduce(im2, 'b h w c -> b () () c', 'max') rearrange(im2, 'b h w c -> h (b w) c') # pixelate: first downscale by averaging, then upscale back using the same pattern averaged = reduce(ims, 'b (h h2) (w w2) c -> b h w c', 'mean', h2=6, w2=8) repeat(averaged, 'b h w c -> (h h2) (b w w2) c', h2=6, w2=8) rearrange(ims, 'b h w c -> w (b h) c') # let's bring color dimension as part of horizontal axis # at the same time horizontal axis is downsampled by 2x reduce(ims, 'b (h h2) (w w2) c -> (h w2) (b w c)', 'mean', h2=3, w2=3) """ Explanation: Fancy examples in random order (a.k.a. mad designer gallery) End of explanation """
sdpython/pyquickhelper
_unittests/ut_helpgen/data_gallery/notebooks/notebook_eleves/2014_2015/2015_page_rank.ipynb
mit
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: PageRank with PIG
authors: M. Amestoy, A. Auffret
The PageRank algorithm provides a measure of a website's relevance. It was invented by the founders of Google.
The implementation proposed here follows the one presented in Data-Intensive Text Processing with MapReduce, page 106.
The algorithm is first applied to a test set (smaller, and allowing fast development), then to a more substantial one: the Google web graph.
End of explanation
"""
import pyquickhelper
params={"blob_storage":"", "password1":"", "hadoop_server":"", "password2":"", "username":""}
pyquickhelper.ipythonhelper.open_html_form(params=params,title="server + hadoop + credentials", key_save="blobhp")

import pyensae
%load_ext pyensae
blobstorage = blobhp["blob_storage"]
blobpassword = blobhp["password1"]
hadoop_server = blobhp["hadoop_server"]
hadoop_password = blobhp["password2"]
username = blobhp["username"]
client, bs = %hd_open
client, bs
"""
Explanation: Connecting to the cluster
End of explanation
"""
with open("DataTEST.txt", "w") as f:
    f.write("1"+"\t"+"2"+"\n"+"1"+"\t"+"4"+"\n"+"2"+"\t"+"3"+"\n"+"2"+"\t"+"5"+"\n"+"3"+"\t"+"4"+"\n"+"4"+"\t"+"5"+"\n"+"5"+"\t"+"3"+"\n"+"5"+"\t"+"1"+"\n"+"5"+"\t"+"2")

import pandas
df = pandas.read_csv("DataTEST.txt", sep="\t",names=["Frm","To"])
df
"""
Explanation: Creating a small dataset
We create a dataset to test the algorithm.
(reusing the one presented in the article)
End of explanation
"""
%blob_up DataTEST.txt /$PSEUDO/Data/DataTEST.txt
"""
Explanation: We upload this graph:
End of explanation
"""
%blob_ls /$PSEUDO/Data/
"""
Explanation: We check that the data has been loaded correctly:
End of explanation
"""
pyensae.download_data("web-Google.txt.gz", url="http://snap.stanford.edu/data/")

%head web-Google.txt
"""
Explanation: Retrieving real data
We do the same with real data: the Google web graph
End of explanation
"""
with open("web-Google.txt", "r") as f:
    with open("DataGoogle.txt", "w") as g:
        for line in f:
            if not line.startswith("#"):
                g.write(line)

%head DataGoogle.txt

%blob_up DataGoogle.txt /$PSEUDO/Data/DataGoogle.txt

%blob_ls /$PSEUDO/Data/
"""
Explanation: We filter out the first (comment) lines.
End of explanation
"""
%%PIG Creation_Graph.pig

Arcs = LOAD '$CONTAINER/$PSEUDO/Data/$path' USING PigStorage('\t') AS (frm:int,to:int);

GrSort = GROUP Arcs BY frm;
deg_sort = FOREACH GrSort GENERATE COUNT(Arcs) AS degs, Arcs , group AS ID;

GrEntr = GROUP Arcs BY to;

GrFin= JOIN deg_sort BY ID, GrEntr BY group;

N = FOREACH (group GrSort ALL) GENERATE COUNT(GrSort);

Pr = FOREACH GrFin GENERATE deg_sort::ID AS ID , (float) 1 / (float)N.$0 AS PageRank;

PageRank = JOIN GrFin BY deg_sort::ID, Pr BY ID;

STORE PageRank INTO '$CONTAINER/$PSEUDO/Projet/SortTest.txt' USING PigStorage('\t') ;

client.pig_submit(bs, client.account_name, "Creation_Graph.pig",
                  params=dict(path="DataTEST.txt"),
                  stop_on_failure=True)

st = %hd_job_status job_1435385350894_0001
st["id"],st["percentComplete"],st["status"]["jobComplete"]

%tail_stderr job_1435385350894_0001 10
"""
Explanation: PageRank algorithm
Initializing the table
End of explanation
"""
%%PIG iteration.pig

gr = LOAD '$CONTAINER/$PSEUDO/Projet/SortTest.txt' USING PigStorage('\t') AS (DegS:long,Asort:{(frm: int,to: int)},Noeud:int,Noeud2:int,Aent:{(frm: int,to: int)},ID: int,PageRank: float);
Arcs = LOAD 
'$CONTAINER/$PSEUDO/Data/DataTEST.txt' USING PigStorage('\t') AS (frm:int,to:int);

Graph = FOREACH gr GENERATE Noeud , DegS, PageRank AS Pinit, PageRank, PageRank/ (float) DegS AS Ratio;

DEFINE my_macro(G,A,ALP) RETURNS S {
    Gi= FOREACH $G GENERATE Noeud , Ratio;
    GrEntr = JOIN $A BY frm , Gi BY Noeud ;
    Te = GROUP GrEntr BY to;
    so = FOREACH Te GENERATE SUM(GrEntr.Ratio) AS Pr, group AS ID;
    tu = JOIN $G BY Noeud, so BY ID;
    sort = FOREACH tu GENERATE Noeud , DegS, Pinit, $ALP*Pinit+(1-$ALP)*Pr AS PageRank;
    $S = FOREACH sort GENERATE Noeud , DegS, Pinit, PageRank, PageRank/ (float) DegS AS Ratio;
}

Ite1 = my_macro(Graph,Arcs,$alpha);
Ite2 = my_macro(Ite1,Arcs,$alpha);
Ite3 = my_macro(Ite2,Arcs,$alpha);
Ite4 = my_macro(Ite3,Arcs,$alpha);
Ite5 = my_macro(Ite4,Arcs,$alpha);
Ite6 = my_macro(Ite5,Arcs,$alpha);
Ite7 = my_macro(Ite6,Arcs,$alpha);
Ite8 = my_macro(Ite7,Arcs,$alpha);

Dump Ite1;
dump Ite8;

jid = client.pig_submit(bs, client.account_name, "iteration.pig",
                        params=dict(alpha="0"),
                        stop_on_failure=True )
jid

st = %hd_job_status job_1435385350894_0006
st["id"],st["percentComplete"],st["status"]["jobComplete"]

%tail_stderr job_1435385350894_0006 20
"""
Explanation: Iterations
We create a macro to repeat the iterations.
End of explanation """ %%PIG Creation_Graph2.pig Arcs = LOAD '$CONTAINER/$PSEUDO/Data/$path' USING PigStorage('\t') AS (frm:int,to:int); GrSort = GROUP Arcs BY frm; deg_sort = FOREACH GrSort GENERATE COUNT(Arcs) AS degs, Arcs , group AS ID; GrEntr = GROUP Arcs BY to; GrFin = JOIN deg_sort BY ID, GrEntr BY group; N = FOREACH (GROUP GrSort ALL) GENERATE COUNT(GrSort); Pr = FOREACH GrFin GENERATE deg_sort::ID AS ID , (float) 1 / (float)N.$0 AS PageRank; PageRank = JOIN GrFin BY deg_sort::ID, Pr BY ID; STORE PageRank INTO '$CONTAINER/$PSEUDO/Projet/SortGoogle.txt' USING PigStorage('\t') ; client.pig_submit(bs, client.account_name, "Creation_Graph2.pig", params=dict(path="DataGoogle.txt"), stop_on_failure=True ) st = %hd_job_status job_1435385350894_0037 st["id"],st["percentComplete"],st["status"]["jobComplete"] %tail_stderr job_1435385350894_0037 20 %%PIG iteration2.pig gr = LOAD '$CONTAINER/$PSEUDO/Projet/SortGoogle.txt' USING PigStorage('\t') AS (DegS:long,Asort:{(frm: int,to: int)},Noeud:int,Noeud2:int,Aent:{(frm: int,to: int)},ID: int,PageRank: float); Arcs = LOAD '$CONTAINER/$PSEUDO/Data/DataGoogle.txt' USING PigStorage('\t') AS (frm:int,to:int); Graph = FOREACH gr GENERATE Noeud , DegS, PageRank AS Pinit, PageRank, PageRank/ (float) DegS AS Ratio; DEFINE my_macro(G,A,ALP) RETURNS S { Gi= FOREACH $G GENERATE Noeud , Ratio; GrEntr = JOIN $A by frm , Gi by Noeud ; Te = GROUP GrEntr by to; so = FOREACH Te generate SUM(GrEntr.Ratio) AS Pr, group AS ID; tu = JOIN $G by Noeud, so by ID; sort = FOREACH tu GENERATE Noeud , DegS, Pinit, $ALP*Pinit+(1-$ALP)*Pr AS PageRank; $S = FOREACH sort GENERATE Noeud , DegS, Pinit, PageRank, PageRank/ (float) DegS AS Ratio; } Ite1 = my_macro(Graph,Arcs,$alpha); Ite2 = my_macro(Ite1,Arcs,$alpha); Ite3 = my_macro(Ite2,Arcs,$alpha); Ite4 = my_macro(Ite3,Arcs,$alpha); Ite5 = my_macro(Ite4,Arcs,$alpha); Ite6 = my_macro(Ite5,Arcs,$alpha); Ite7 = my_macro(Ite6,Arcs,$alpha); Ite8 = my_macro(Ite7,Arcs,$alpha); Dump Ite1; dump Ite8; STORE Ite8 
INTO '$CONTAINER/$PSEUDO/Projet/PageRank.txt' USING PigStorage('\t') ;

client.pig_submit(bs, client.account_name, "iteration2.pig",
                  params=dict(alpha="0.5"),
                  stop_on_failure=True )

st = %hd_job_status job_1435385350894_0042
st["id"],st["percentComplete"],st["status"]["jobComplete"]

%tail_stderr job_1435385350894_0042 20

%blob_downmerge /$PSEUDO/Projet/PageRank.txt PageRank.txt

import pandas
import matplotlib as plt
plt.style.use('ggplot')
df = pandas.read_csv("PageRank.txt", sep="\t",names=["Node","OutDeg","Pinit", "PageRank", "k"])
df

df['PageRank'].hist(bins=100, range=(0,0.000005))

df.sort_values("PageRank",ascending=False).head()

%blob_close
"""
Explanation: We can then turn to the real data!
With the Google data
Initialization
End of explanation
"""
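To make the update rule of the Pig macro concrete, here is a plain-Python sketch of the same iteration on the small test graph (the arcs are those of DataTEST.txt; this is only an illustrative check, not part of the cluster workflow):

```python
# Arcs of the test graph (same as DataTEST.txt)
arcs = [(1, 2), (1, 4), (2, 3), (2, 5), (3, 4),
        (4, 5), (5, 3), (5, 1), (5, 2)]
nodes = sorted({n for arc in arcs for n in arc})
N = len(nodes)
out_deg = {n: sum(1 for f, _ in arcs if f == n) for n in nodes}

alpha = 0.5
p_init = {n: 1.0 / N for n in nodes}  # uniform initialization, as in Creation_Graph.pig
pr = dict(p_init)

# Same update as my_macro: PR = alpha * Pinit + (1 - alpha) * incoming mass
for _ in range(8):  # eight iterations, like Ite1..Ite8
    incoming = {n: sum(pr[f] / out_deg[f] for f, t in arcs if t == n)
                for n in nodes}
    pr = {n: alpha * p_init[n] + (1 - alpha) * incoming[n] for n in nodes}

print({n: round(p, 4) for n, p in pr.items()})
```

Since every node of the test graph has at least one outgoing arc, the total PageRank mass stays equal to 1 across iterations, which is a handy sanity check for the distributed run.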
GoogleCloudPlatform/asl-ml-immersion
notebooks/kubeflow_pipelines/walkthrough/labs/kfp_walkthrough_vertex.ipynb
apache-2.0
import os
import time

import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
"""
Explanation: Using custom containers with Vertex AI Training
Learning Objectives:
1. Learn how to create a train and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI
1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it
In this lab, you develop a training application, package it as a Docker image, and run it on Vertex AI Training. The application trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on the Covertype Data Set from UCI Machine Learning Repository.
The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with Vertex AI hyperparameter tuning.
End of explanation
"""
REGION = "us-central1"

PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]

ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"

os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
"""
Explanation: Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update REGION and ARTIFACT_STORE with the settings reflecting your lab environment.

REGION - the compute region for Vertex AI Training and Prediction
ARTIFACT_STORE - A GCS bucket created in the same region.
End of explanation
"""
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
"""
Explanation: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you already have a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
End of explanation
"""
%%bash

DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER

bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID

bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
"""
Explanation: Importing the dataset into BigQuery
End of explanation
"""
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
"""
Explanation: Explore the Covertype dataset
End of explanation
"""
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'

!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
"""
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage.
Create a training split
End of explanation
"""
# TODO: Your code to create the BQ table validation split

# TODO: Your code to export the validation table to GCS

df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
"""
Explanation: Create a validation split
Exercise
End of explanation
"""
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)

preprocessor = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), numeric_feature_indexes),
        ("cat", OneHotEncoder(), categorical_feature_indexes),
    ]
)

pipeline = Pipeline(
    [
        ("preprocessor", preprocessor),
        ("classifier", SGDClassifier(loss="log", tol=1e-3)),
    ]
)
"""
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses a stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
"""
num_features_type_map = {
    feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}

df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
"""
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler, all numeric features are converted to float64.
End of explanation
"""
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]

pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
"""
Explanation: Run the pipeline locally.
End of explanation
"""
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
"""
Explanation: Calculate the trained model's accuracy.
End of explanation
"""
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
"""
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
End of explanation """ %%writefile {TRAINING_APP_FOLDER}/train.py import os import subprocess import sys import fire import hypertune import numpy as np import pandas as pd import pickle from sklearn.compose import ColumnTransformer from sklearn.linear_model import SGDClassifier from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler, OneHotEncoder def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune): df_train = pd.read_csv(training_dataset_path) df_validation = pd.read_csv(validation_dataset_path) if not hptune: df_train = pd.concat([df_train, df_validation]) numeric_feature_indexes = slice(0, 10) categorical_feature_indexes = slice(10, 12) preprocessor = ColumnTransformer( transformers=[ ('num', StandardScaler(), numeric_feature_indexes), ('cat', OneHotEncoder(), categorical_feature_indexes) ]) pipeline = Pipeline([ ('preprocessor', preprocessor), ('classifier', SGDClassifier(loss='log',tol=1e-3)) ]) num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]} df_train = df_train.astype(num_features_type_map) df_validation = df_validation.astype(num_features_type_map) print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter)) X_train = df_train.drop('Cover_Type', axis=1) y_train = df_train['Cover_Type'] pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter) pipeline.fit(X_train, y_train) if hptune: X_validation = df_validation.drop('Cover_Type', axis=1) y_validation = df_validation['Cover_Type'] accuracy = pipeline.score(X_validation, y_validation) print('Model accuracy: {}'.format(accuracy)) # Log it with hypertune hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='accuracy', metric_value=accuracy ) # Save the model if not hptune: model_filename = 'model.pkl' with open(model_filename, 'wb') as model_file: pickle.dump(pipeline, model_file) gcs_model_path = 
"{}/{}".format(job_dir, model_filename) subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout) print("Saved model in: {}".format(gcs_model_path)) if __name__ == "__main__": fire.Fire(train_evaluate) """ Explanation: Write the tuning script. Notice the use of the hypertune package to report the accuracy optimization metric to Vertex AI hyperparameter tuning service. End of explanation """ %%writefile {TRAINING_APP_FOLDER}/Dockerfile FROM gcr.io/deeplearning-platform-release/base-cpu RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2 # TODO """ Explanation: Package the script into a docker image. Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's Container Registry. Exercise Complete the Dockerfile below so that it copies the 'train.py' file into the container at /app and runs it when the container is started. End of explanation """ IMAGE_NAME = "trainer_image" IMAGE_TAG = "latest" IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}" os.environ["IMAGE_URI"] = IMAGE_URI !gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER """ Explanation: Build the docker image. You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker. End of explanation """ TIMESTAMP = time.strftime("%Y%m%d_%H%M%S") JOB_NAME = f"forestcover_tuning_{TIMESTAMP}" JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}" os.environ["JOB_NAME"] = JOB_NAME os.environ["JOB_DIR"] = JOB_DIR """ Explanation: Submit an Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses SGDClassifier. 
The training application has been designed to accept two hyperparameters that control SGDClassifier: - Max iterations - Alpha The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of max_iter and the linear range between 1.0e-4 and 1.0e-1 for alpha. End of explanation """ %%bash MACHINE_TYPE="n1-standard-4" REPLICA_COUNT=1 CONFIG_YAML=config.yaml cat <<EOF > $CONFIG_YAML studySpec: metrics: - metricId: accuracy goal: MAXIMIZE parameters: # TODO algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization trialJobSpec: workerPoolSpecs: - machineSpec: machineType: $MACHINE_TYPE replicaCount: $REPLICA_COUNT containerSpec: imageUri: $IMAGE_URI args: - --job_dir=$JOB_DIR - --training_dataset_path=$TRAINING_FILE_PATH - --validation_dataset_path=$VALIDATION_FILE_PATH - --hptune EOF gcloud ai hp-tuning-jobs create \ --region=# TODO \ --display-name=# TODO \ --config=# TODO \ --max-trial-count=# TODO \ --parallel-trial-count=# TODO echo "JOB_NAME: $JOB_NAME" """ Explanation: Exercise Complete the config.yaml file generated below so that the hyperparameter tunning engine try for parameter values * max_iter the two values 10 and 20 * alpha a linear range of values between 1.0e-4 and 1.0e-1 Also complete the gcloud command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials both of 5 each. End of explanation """ def retrieve_best_trial_from_job_name(jobname): # TODO return best_trial """ Explanation: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. 
After the job completes you can review the results using GCP Console or programmatically using the following functions (note that this code supposes that the metrics that the hyperparameter tuning engine optimizes is maximized): Exercise Complete the body of the function below to retrieve the best trial from the JOBNAME: End of explanation """ best_trial = retrieve_best_trial_from_job_name(JOB_NAME) """ Explanation: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below. End of explanation """ alpha = best_trial.parameters[0].value max_iter = best_trial.parameters[1].value TIMESTAMP = time.strftime("%Y%m%d_%H%M%S") JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}" JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}" MACHINE_TYPE="n1-standard-4" REPLICA_COUNT=1 WORKER_POOL_SPEC = f"""\ machine-type={MACHINE_TYPE},\ replica-count={REPLICA_COUNT},\ container-image-uri={IMAGE_URI}\ """ ARGS = f"""\ --job_dir={JOB_DIR},\ --training_dataset_path={TRAINING_FILE_PATH},\ --validation_dataset_path={VALIDATION_FILE_PATH},\ --alpha={alpha},\ --max_iter={max_iter},\ --nohptune\ """ !gcloud ai custom-jobs create \ --region={REGION} \ --display-name={JOB_NAME} \ --worker-pool-spec={WORKER_POOL_SPEC} \ --args={ARGS} print("The model will be exported at:", JOB_DIR) """ Explanation: Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset. Configure and run the training job End of explanation """ !gsutil ls $JOB_DIR """ Explanation: Examine the training output The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS. Note: We need to wait for job triggered by the cell above to complete before running the cells below. 
End of explanation """ MODEL_NAME = "forest_cover_classifier_2" SERVING_CONTAINER_IMAGE_URI = ( "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest" ) SERVING_MACHINE_TYPE = "n1-standard-2" """ Explanation: Deploy the model to Vertex AI Prediction End of explanation """ uploaded_model = # TODO """ Explanation: Uploading the trained model Exercise Upload the trained model using aiplatform.Model.upload: End of explanation """ endpoint = # TODO """ Explanation: Deploying the uploaded model Exercise Deploy the model using uploaded_model: End of explanation """ instance = [ 2841.0, 45.0, 0.0, 644.0, 282.0, 1376.0, 218.0, 237.0, 156.0, 1003.0, "Commanche", "C4758", ] # TODO """ Explanation: Serve predictions Prepare the input file with JSON formated instances. Exercise Query the deployed model using endpoint: End of explanation """
ecervera/ga-nb
First Example.ipynb
mit
from pyevolve import G1DList
""" Explanation: First Example This notebook is adapted from a tutorial on the Pyevolve website. To make the API easy to use, there are default values for almost every parameter in Pyevolve; for example, when you use the <tt>G1DList.G1DList</tt> genome without specifying the Mutator, Crossover and Initializator, the default ones are used: Swap Mutator, One Point Crossover and the Integer Initializator. All those default parameters are specified in the <tt>Consts</tt> module (and you are highly encouraged to take a look at the source code, hosted on GitHub). Let's begin with the first simple example. First of all, you must know your problem; in this case, our problem is to find a simple 1D list of integers of size n with zeros in all positions. At first look, we know by intuition that the representation needed for this problem is a 1D list, which you can find in Pyevolve under the name <tt>G1DList.G1DList</tt>, which means Genome 1D List. This representation is based on a Python list, as you will see, and is very easy to manipulate. End of explanation """
# This function is the evaluation function, we want
# to give high score to more zero'ed chromosomes
def eval_func(chromosome):
    score = 0.0
    # iterate over the chromosome elements (items)
    for value in chromosome:
        if value == 0:
            score += 1.0
    return score
""" Explanation: The next step is to define the evaluation function for our Genetic Algorithm. We want all $n$ list positions to hold the value '0', so we can propose the evaluation function: $$ f(x) = \sum^n_{i=0}(x[i]==0) \, ? \, 1 \, : \, 0 $$ End of explanation """
x = [1, 2, 3, 8, 0, 2, 0, 4, 1, 0]
""" Explanation: As you can see in the above equation, with the $x$ variable representing our genome list of integers, $f(x)$ is our evaluation function, which is the sum of '0' values in the list.
For example, if we have a list with 10 elements like this: End of explanation """ eval_func(x) """ Explanation: we will got the raw score value of 3, or $f(x)$ = 3. End of explanation """ # Genome instance genome = G1DList.G1DList(20) # The evaluator function (objective function) genome.evaluator.set(eval_func) """ Explanation: It is important to note that in Pyevolve, we have raw score and fitness score, the raw score is the return of the evaluation function and the fitness score is the scaled score. The next step is the creation of one sample genome for the Genetic Algorithm. We can define our genome as this: End of explanation """ from pyevolve import Consts Consts.CDefRangeMin, Consts.CDefRangeMax """ Explanation: This will create an instance of the G1DList.G1DList class (which resides in the G1DList module) with the list $n$-size of 20 and sets the evaluation function of the genome to the evaluation function “eval_func” that we have created before. But wait, where is the range of integers that will be used in the list? Where is the mutator, crossover and initialization functions? They are all in the default parameters, as you see, this parameters keep things simple. By default (and you have the documentation to find this defaults), the range of the integers in the G1DList.G1DList is between the inverval [ Consts.CDefRangeMin, Consts.CDefRangeMax] inclusive, and genetic operators is the same I have cited before: Swap Mutator Mutators.G1DListMutatorSwap(), One Point Crossover Crossovers.G1DListCrossoverSinglePoint() and the Integer Initializator Initializators.G1DListInitializatorInteger(). 
End of explanation """ genome.setParams(rangemin=0, rangemax=10) """ Explanation: You can change everything with the API, for example, you can pass the ranges to the genome, like this: End of explanation """ from pyevolve import GSimpleGA ga = GSimpleGA.GSimpleGA(genome) """ Explanation: Right, now we have our evaluation function and our first genome ready, the next step is to create our Genetic Algorithm Engine, the GA Core which will do the evolution, control statistics, etc... The GA Engine which we will use is the GSimpleGA.GSimpleGA which resides in the GSimpleGA module, this GA Engine is the genetic algorithm described by Goldberg. So, let’s create the engine: End of explanation """ %matplotlib inline import matplotlib.pyplot as plt import pyevolve.Interaction as it """ Explanation: Ready! Simple, isn't it? We simple create our GA Engine with the created genome. You can ask: “Where is the selector method? The number of generations? Mutation rate?“. Again: we have defaults. By default, the GA will evolve for 100 generations with a population size of 80 individuals, it will use the mutation rate of 2% and a crossover rate of 80%, the default selector is the Ranking Selection (Selectors.GRankSelector()) method. Those default parameters was not random picked, they are all based on the commom used properties. We need to import the Interaction module, which includes some plot functions. We will plot the score of all the individuals of the population after each generation. End of explanation """ ga.setGenerations(1) ga.evolve() print("Generation: %d" % ga.currentGeneration) population = ga.getPopulation() it.plotPopScore(population) """ Explanation: Now, all we need to do is to evolve! For didactic purposes, we are setting evolution steps of only one generation. In real problems, you are likely to increase this number. For the first call we use the evolve function. The plot represents the score of each individual. 
WARNING: run the following cell ONLY ONCE with Shift+Enter. End of explanation """ ga.step() print("Generation: %d" % ga.currentGeneration) best = ga.bestIndividual() print('Best individual: %s' % str(best.genomeList)) print('Best score: %.0f' % best.score) population = ga.getPopulation() it.plotPopScore(population) it.plotHistPopScore(population) """ Explanation: From the second generation on, we use the step function. We will display the best individual and its score, and plot the scores of the population, for each individual, and the histogram. You may repeat the cell below many times by pressing Ctrl+Enter. End of explanation """
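Pyevolve hides the evolutionary loop, but what ga.evolve and ga.step run each generation is a plain select–crossover–mutate cycle. The following is a dependency-free sketch of that loop for the same all-zeros problem (truncation selection, one-point crossover, swap mutation); the parameter values are illustrative, not Pyevolve's internal defaults.

```python
import random

def eval_func(chromosome):
    # Same fitness as above: count the zeros.
    return sum(1.0 for v in chromosome if v == 0)

def one_point_crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def swap_mutation(c, rate=0.02):
    c = c[:]
    for i in range(len(c)):
        if random.random() < rate:
            j = random.randrange(len(c))
            c[i], c[j] = c[j], c[i]
    return c

def evolve(size=20, pop_size=80, generations=50, seed=42):
    random.seed(seed)
    pop = [[random.randint(0, 10) for _ in range(size)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=eval_func, reverse=True)
        parents = pop[:pop_size // 2]   # truncation selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            children.append(swap_mutation(one_point_crossover(a, b)))
        pop = parents + children
    return max(pop, key=eval_func)

best = evolve()
print(eval_func(best), best)
```

Because the better half of the population is carried over unchanged, the best fitness can never decrease from one generation to the next, which is the same elitism-like behavior you see in the plots above.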
ellamil/bubblepopper
bubblepopper_3articleclusters.ipynb
mit
from sklearn import cluster import pandas as pd import numpy as np import pickle %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns num_topics = 20 doc_data = pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb')) lda_topics = ['topic'+str(i) for i in range(0,num_topics)] cluster_dims = ['source','trust'] + lda_topics cluster_data = doc_data[cluster_dims].values # Inertia (within-cluster sum of squares criterion) is a measure of how internally coherent clusters are MAX_K = 10 ks = range(1,MAX_K+1) inertias = np.zeros(MAX_K) for k in ks: kmeans = cluster.KMeans(k).fit(cluster_data) inertias[k-1] = kmeans.inertia_ with sns.axes_style("whitegrid"): plt.plot(ks, inertias) plt.ylabel("Average inertia") plt.xlabel("Number of clusters") plt.show() """ Explanation: ARTICLE CLUSTERS Cluster Inertia Cluster article data and compute inertia as a function of cluster number End of explanation """ from sklearn.metrics import silhouette_score import random num_topics = 20 doc_data = pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb')) lda_topics = ['topic'+str(i) for i in range(0,num_topics)] cluster_dims = ['source','trust'] + lda_topics cluster_data = doc_data[cluster_dims].values # The silhouette score is a measure of the density and separation of the formed clusters seed = 42 MAX_K = 10 ks = range(1,MAX_K+1) silhouette_avg = [] for i,k in enumerate(ks[1:]): kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(cluster_data) kmeans_clusters = kmeans.predict(cluster_data) silhouette_avg.append(silhouette_score(cluster_data,kmeans_clusters)) with sns.axes_style("whitegrid"): plt.plot(ks[1:], silhouette_avg) plt.ylabel("Average silhouette score") plt.xlabel("Number of clusters") plt.ylim([0.0,1.0]) plt.show() """ Explanation: Silhouette Score Cluster article data and compute the silhouette score as a function of cluster number End of explanation """ num_topics = 20 doc_data = 
pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb')) lda_topics = ['topic'+str(i) for i in range(0,num_topics)] cluster_dims = ['source','trust'] + lda_topics cluster_data = doc_data[cluster_dims].values num_folds = 5 seed = 42 np.random.seed(seed) np.random.shuffle(cluster_data) # Shuffles in-place cluster_data = np.split(cluster_data[0:-1,:],num_folds) # Make divisible by 10 train_data,test_data= [],[] for hold in range(num_folds): keep = [i for i in list(range(num_folds)) if i != hold] train = [cluster_data[i] for i in keep] test = cluster_data[hold] train_data.append(np.vstack(train)) test_data.append(test) full = [cluster_data[i] for i in list(range(num_folds))] full_data = np.vstack(full) """ Explanation: Cross-Validation Split data set for cross-validation End of explanation """ MAX_K = 10 ks = range(1,MAX_K+1) kmeans_accuracy = [] for k in ks: full_kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(full_data) accuracy = [] for fold in range(num_folds): train_kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(train_data[fold]) test_labels = train_kmeans.predict(test_data[fold]) full_labels = np.split(full_kmeans.labels_,num_folds)[fold] accuracy.append(1.0 * np.sum(np.equal(full_labels,test_labels)) / len(test_labels)) kmeans_accuracy.append(np.mean(accuracy)) with sns.axes_style("whitegrid"): plt.plot(ks, kmeans_accuracy) plt.ylabel("Average accuracy") plt.xlabel("Number of clusters") plt.ylim([0.0,1.0]) plt.show() """ Explanation: Clustering consistency between the full and partial data sets End of explanation """ num_clusters = 4 kmeans = cluster.KMeans(n_clusters=num_clusters,random_state=seed).fit(full_data) kmeans_labels = kmeans.labels_ kmeans_centroids = kmeans.cluster_centers_ # 0 = mostly liberal, 1 = mostly conservative, 2 = mixed liberal, 3 = mixed conservative kmeans_distances = kmeans.transform(full_data) pickle.dump([kmeans,kmeans_labels,kmeans_centroids,kmeans_distances], 
open('pub_kmeans_clean_cluster'+str(num_clusters)+'.pkl','wb')) """ Explanation: Number of Clusters = 4 End of explanation """
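The elbow criterion above relies on inertia being the within-cluster sum of squared distances, which can only shrink as k grows large enough to cover the real groups. To make that concrete without scikit-learn, here is a deliberately naive NumPy sketch of Lloyd's algorithm (first-k-points initialization is a simplification for determinism, not what sklearn's KMeans does):

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    centroids = X[:k].copy()  # naive deterministic init
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(1), centroids, d.min(1).sum()

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 5])  # two synthetic blobs
inertias = [kmeans(X, k)[2] for k in (1, 2)]
print(inertias)
```

With k=1 the inertia equals the total sum of squares around the grand mean; once each blob gets its own centroid the inertia drops sharply, which is the "elbow" the first plot in this notebook is looking for.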
musketeer191/job_analytics
extract_feat.ipynb
gpl-3.0
HOME_DIR = 'd:/larc_projects/job_analytics/'; DATA_DIR = HOME_DIR + 'data/clean/' RES_DIR = HOME_DIR + 'results/' skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv') """ Explanation: Load data End of explanation """ doc_skill = buildDocSkillMat(jd_docs, skill_df, folder=DATA_DIR) with(open(DATA_DIR + 'doc_skill.mtx', 'w')) as f: mmwrite(f, doc_skill) """ Explanation: Build feature matrix The matrix is a JD-Skill matrix where each entry $e(d, s)$ is the number of times skill $s$ occurs in job description $d$. End of explanation """ extracted_skill_df = getSkills4Docs(docs=doc_index['doc'], doc_term=doc_skill, skills=skills) df = pd.merge(doc_index, extracted_skill_df, left_index=True, right_index=True) print(df.shape) df.head() df.to_csv(DATA_DIR + 'doc_index.csv') # later no need to extract skill again """ Explanation: Get skills in each JD Using the matrix, we can retrieve skills in each JD. End of explanation """ reload(ja_helpers) from ja_helpers import * # load frameworks of SF as docs pst_docs = pd.read_csv(DATA_DIR + 'SF/pst.csv') pst_docs pst_skill = buildDocSkillMat(pst_docs, skill_df, folder=None) with(open(DATA_DIR + 'pst_skill.mtx', 'w')) as f: mmwrite(f, pst_skill) """ Explanation: Extract features of new documents End of explanation """
mdiaz236/DeepLearningFoundations
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
mit
# Import Numpy, TensorFlow, TFLearn, and MNIST data import numpy as np import tensorflow as tf import tflearn import tflearn.datasets.mnist as mnist """ Explanation: Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9. We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network. End of explanation """ # Retrieve the training and test data trainX, trainY, testX, testY = mnist.load_data(one_hot=True) """ Explanation: Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll call the images, which will be the input to our neural network, X and their corresponding labels Y. We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened data For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. 
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. End of explanation """ # Visualizing the data import matplotlib.pyplot as plt %matplotlib inline # Function for displaying a training image by it's index in the MNIST set def show_digit(index): label = trainY[index].argmax(axis=0) # Reshape 784 array into 28x28 image image = trainX[index].reshape([28,28]) plt.title('Training data, index: %d, Label: %d' % (index, label)) plt.imshow(image, cmap='gray_r') plt.show() # Display the first (index 0) training image show_digit(0) """ Explanation: Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title. End of explanation """ # Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model net = tflearn.input_data([None, 784]) net = tflearn.fully_connected(net, 500, activation='ReLU') net = tflearn.fully_connected(net, 250, activation='ReLU') net = tflearn.fully_connected(net, 10, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') # This model assumes that your network is named "net" model = tflearn.DNN(net) return model # Build the model model = build_model() """ Explanation: Building the network TFLearn lets you build the network by defining the layers in that network. 
For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the output layer, and The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, net = tflearn.input_data([None, 100]) would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units. Adding layers To add new hidden layers, you use net = tflearn.fully_connected(net, n_units, activation='ReLU') This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). Then, to set how you train the network, use: net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') Again, this is passing in the network you've been building. The keywords: optimizer sets the training method, here stochastic gradient descent learning_rate is the learning rate loss determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with tflearn.DNN(net). Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. 
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer. End of explanation """
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
""" Explanation: Training the network Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! End of explanation """
# Compare the labels that our model predicts with the actual labels

# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)

# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)

# Print out the result
print("Test accuracy: ", test_accuracy)
""" Explanation: Testing After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! End of explanation """
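The argmax bookkeeping in the testing cell is easy to get wrong, so here is the same computation isolated on tiny made-up data: network outputs are rows of class scores, labels are one-hot rows, and argmax collapses both to digit indices before comparing.

```python
import numpy as np

# Fake "network outputs": 4 samples, 3 classes.
scores = np.array([[0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1],
                   [0.2, 0.3, 0.5],
                   [0.9, 0.05, 0.05]])
# One-hot ground truth for the same 4 samples.
one_hot = np.array([[0, 1, 0],
                    [1, 0, 0],
                    [0, 0, 1],
                    [0, 1, 0]])
predictions = scores.argmax(axis=1)  # most confident class per sample
actual = one_hot.argmax(axis=1)      # true class per sample
accuracy = np.mean(predictions == actual)
print(accuracy)  # 0.75
```

Three of the four fake samples match, so the accuracy comes out to 0.75 — the same np.mean-over-booleans trick used on the MNIST test set above.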
cosmolejo/Fisica-Experimental-3
Fourier/Tarea_Fourier/FT-2D.ipynb
gpl-3.0
import numpy as np
import matplotlib
import pylab as plt
import scipy.misc as pim
from scipy import stats
%matplotlib inline
""" Explanation: 2D Fourier Analysis Last updated: Edgar Rueda, March 2016. End of explanation """
tam = 256 # matrix size
dx = 0.01 # resolution (m/pixel)
x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates
X , Y = np.meshgrid(x,x) # two-dimensional space
A1 = 1. # amplitude in arbitrary units
f1 = 1.
# spatial frequency (1/m)
g1 = A1*np.sin(2*np.pi*f1*X) # image in the "spatial" domain
ftg1 = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g1)))*dx**2 # Fourier transform, frequency domain

plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.imshow(abs(g1), cmap='gray')
plt.title('Spatial domain')
plt.subplot(1,2,2)
plt.imshow(abs(ftg1), cmap='gray')
plt.title('Amplitude, frequency domain')
""" Explanation: Interpreting the FT of an image An image can be understood as the superposition of two-dimensional harmonic functions (sines and cosines) of different frequencies and directions. The FT gives us information about the sines and cosines needed (in terms of their frequency, direction and amplitude) to form the image. Let us first form an image corresponding to a sine in the horizontal direction End of explanation """
tam = 256 # matrix size
dx = 0.01 # resolution (m/pixel)
x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates
X , Y = np.meshgrid(x,x) # two-dimensional space
A1 = 1. # amplitude in arbitrary units
f1 = 1.
# spatial frequency (1/m)
gx = A1*np.sin(2*np.pi*f1*X) # sine in the horizontal direction
gy = A1*np.sin(2*np.pi*f1*Y) # sine in the vertical direction
g = gx + gy # superposition
ftg = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g)))*dx**2 # Fourier transform, frequency domain

plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.imshow(g, cmap='gray')
plt.title('Spatial domain')
plt.subplot(1,2,2)
plt.imshow(abs(ftg), cmap='gray')
plt.title('Amplitude, frequency domain')
""" Explanation: Note that only (approximately) two Dirac deltas appear in the frequency domain. In analogy with the one-dimensional case, those two points correspond to the frequency of the sine in the spatial domain. Note also that the points lie along the horizontal direction, indicating that the direction of the two-dimensional sine is horizontal. Finally, observe that the origin lies at the center of the image (for square matrices this is not entirely exact), whereas position (0,0) of the matrix is at the top-left corner. Now, let us plot the superposition of a horizontal sine and a vertical sine of the same frequency End of explanation """
tam = 256 # matrix size
dx = 0.01 # resolution (m/pixel)
x = np.arange(-dx*tam/2,dx*tam/2,dx) # spatial coordinates
X , Y = np.meshgrid(x,x) # two-dimensional space
A1 = 1. # amplitude in arbitrary units
f1 = 1.
# spatial frequency (1/m)
gx = A1*np.sin(2*np.pi*2*f1*X) # sine in the horizontal direction
gy = A1*np.sin(2*np.pi*4*f1*Y) # sine in the vertical direction
gd = 2*A1*np.cos(2*np.pi*f1*(X + Y)) # cosine in the diagonal direction
g = gx + gy + gd # superposition
ftg = np.fft.fftshift(np.fft.fft2(np.fft.fftshift(g)))*dx**2 # Fourier transform, frequency domain

plt.figure(figsize=(15,15))
plt.subplot(1,2,1)
plt.imshow(g, cmap='gray')
plt.title('Spatial domain')
plt.subplot(1,2,2)
plt.imshow(abs(ftg), cmap='gray')
plt.title('Amplitude, frequency domain')
""" Explanation: Observe that deltas now also appear along the vertical direction, accounting for the sine in the vertical direction. Let us make one last example, adding a cosine in the diagonal direction whose amplitude is twice that of the other sines. In addition, the sine in the horizontal direction has a frequency twice that of the cosine, and the sine in the vertical direction four times greater. End of explanation """
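The delta-pair picture can also be verified numerically. On a unitless grid where the sine completes an integer number of cycles, all the spectral energy of np.fft.fft2 lands in exactly two conjugate bins; this small sketch (separate from the physical-units examples above) checks where the peaks land:

```python
import numpy as np

N, k = 64, 5                         # grid size, cycles across the grid
x = np.arange(N)
X, Y = np.meshgrid(x, x)
g = np.sin(2 * np.pi * k * X / N)    # horizontal sine, integer cycle count
F = np.abs(np.fft.fft2(g))
peaks = np.argwhere(F > F.max() / 2) # the only significant bins
print(peaks.tolist())                # [[0, 5], [0, 59]]
```

The peaks sit at columns k and N-k of row 0 — the two conjugate deltas of the horizontal sine — and their magnitude is N²/2, exactly the discrete analogue of the delta pair seen in the fftshift-ed plots above.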
kwinkunks/axb
NumPy_reflectivity.ipynb
apache-2.0
import numpy as np import numpy.linalg as la import matplotlib.pyplot as plt from utils import plot_all %matplotlib inline from scipy import linalg as spla def convmtx(h, n): """ Equivalent of MATLAB's convmtx function, http://www.mathworks.com/help/signal/ref/convmtx.html. Makes the convolution matrix, C. The product C.x is the convolution of h and x. Args h (ndarray): a 1D array, the kernel. n (int): the number of rows to make. Returns ndarray. Size m+n-1 """ col_1 = np.r_[h[0], np.zeros(n-1)] row_1 = np.r_[h, np.zeros(n-1)] return spla.toeplitz(col_1, row_1) """ Explanation: Impedance or reflectivity Trying to see how to combine G with a derivative operator to get from the impedance model to the data with one forward operator. End of explanation """ # Impedance, imp VP RHO imp = np.ones(50) * 2550 * 2650 imp[10:15] = 2700 * 2750 imp[15:27] = 2400 * 2450 imp[27:35] = 2800 * 3000 plt.plot(imp) """ Explanation: Construct the model m End of explanation """ D = convmtx([-1, 1], imp.size)[:, :-1] D r = D @ imp plt.plot(r[:-1]) """ Explanation: But I really want to use the reflectivity, so let's compute that: End of explanation """ m = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1]) plt.plot(m) """ Explanation: I don't know how best to control the magnitude of the coefficients or how to combine this matrix with G, so for now we'll stick to the model m being the reflectivity, calculated the normal way. End of explanation """ from scipy.signal import ricker wavelet = ricker(40, 2) plt.plot(wavelet) # Downsampling: set to 1 to use every sample. s = 2 # Make G. 
G = convmtx(wavelet, m.size)[::s, 20:70] plt.imshow(G, cmap='viridis', interpolation='none') # Or we can use bruges (pip install bruges) # from bruges.filters import ricker # wavelet = ricker(duration=0.04, dt=0.001, f=100) # G = convmtx(wavelet, m.size)[::s, 21:71] # f, (ax0, ax1) = plt.subplots(1, 2) # ax0.plot(wavelet) # ax1.imshow(G, cmap='viridis', interpolation='none', aspect='auto') """ Explanation: Forward operator: convolution with wavelet Now we make the kernel matrix G, which represents convolution. End of explanation """ d = G @ m """ Explanation: Forward model the data d Now we can perform the forward problem: computing the data. End of explanation """ def add_subplot_axes(ax, rect, axisbg='w'): """ Facilitates the addition of a small subplot within another plot. From: http://stackoverflow.com/questions/17458580/ embedding-small-plots-inside-subplots-in-matplotlib License: CC-BY-SA Args: ax (axis): A matplotlib axis. rect (list): A rect specifying [left pos, bot pos, width, height] Returns: axis: The sub-axis in the specified position. """ def axis_to_fig(axis): fig = axis.figure def transform(coord): a = axis.transAxes.transform(coord) return fig.transFigure.inverted().transform(a) return transform fig = plt.gcf() left, bottom, width, height = rect trans = axis_to_fig(ax) x1, y1 = trans((left, bottom)) x2, y2 = trans((left + width, bottom + height)) subax = fig.add_axes([x1, y1, x2 - x1, y2 - y1]) x_labelsize = subax.get_xticklabels()[0].get_size() y_labelsize = subax.get_yticklabels()[0].get_size() x_labelsize *= rect[2] ** 0.5 y_labelsize *= rect[3] ** 0.5 subax.xaxis.set_tick_params(labelsize=x_labelsize) subax.yaxis.set_tick_params(labelsize=y_labelsize) return subax from matplotlib import gridspec, spines fig = plt.figure(figsize=(12, 6)) gs = gridspec.GridSpec(5, 8) # Set up axes. axw = plt.subplot(gs[0, :5]) # Wavelet. 
axg = plt.subplot(gs[1:4, :5]) # G axm = plt.subplot(gs[:, 5]) # m axe = plt.subplot(gs[:, 6]) # = axd = plt.subplot(gs[1:4, 7]) # d cax = add_subplot_axes(axg, [-0.08, 0.05, 0.03, 0.5]) params = {'ha': 'center', 'va': 'bottom', 'size': 40, 'weight': 'bold', } axw.plot(G[5], 'o', c='r', mew=0) axw.plot(G[5], 'r', alpha=0.4) axw.locator_params(axis='y', nbins=3) axw.text(1, 0.6, "wavelet", color='k') im = axg.imshow(G, cmap='viridis', aspect='1', interpolation='none') axg.text(45, G.shape[0]//2, "G", color='w', **params) axg.axhline(5, color='r') plt.colorbar(im, cax=cax) y = np.arange(m.size) axm.plot(m, y, 'o', c='r', mew=0) axm.plot(m, y, c='r', alpha=0.4) axm.text(0, m.size//2, "m", color='k', **params) axm.invert_yaxis() axm.locator_params(axis='x', nbins=3) axe.set_frame_on(False) axe.set_xticks([]) axe.set_yticks([]) axe.text(0.5, 0.5, "=", color='k', **params) y = np.arange(d.size) axd.plot(d, y, 'o', c='b', mew=0) axd.plot(d, y, c='b', alpha=0.4) axd.plot(d[5], y[5], 'o', c='r', mew=0, ms=10) axd.text(0, d.size//2, "d", color='k', **params) axd.invert_yaxis() axd.locator_params(axis='x', nbins=3) for ax in fig.axes: ax.xaxis.label.set_color('#888888') ax.tick_params(axis='y', colors='#888888') ax.tick_params(axis='x', colors='#888888') for child in ax.get_children(): if isinstance(child, spines.Spine): child.set_color('#aaaaaa') # For some reason this doesn't work... for _, sp in cax.spines.items(): sp.set_color('w') # But this does... cax.xaxis.label.set_color('#ffffff') cax.tick_params(axis='y', colors='#ffffff') cax.tick_params(axis='x', colors='#ffffff') fig.tight_layout() plt.show() """ Explanation: Let's visualize these components for fun... End of explanation """ plt.plot(np.convolve(wavelet, m, mode='same')[::s], 'blue', lw=3) plt.plot(G @ m, 'red') """ Explanation: Note that G * m gives us exactly the same result as np.convolve(w_, m). 
This is just another way of implementing convolution that lets us use linear algebra to perform the operation, and its inverse. End of explanation """
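This equivalence is easy to check directly: a convolution matrix is just a Toeplitz matrix built from the wavelet. Here is a small self-contained sketch using `scipy.linalg.toeplitz`; the `convmtx` helper used above (defined in an earlier cell, not shown) is assumed to build the same full-convolution matrix before slicing.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(w, n):
    """Return G of shape (n + len(w) - 1, n) such that G @ m == np.convolve(w, m)."""
    first_col = np.r_[w, np.zeros(n - 1)]   # wavelet slides down the first column
    first_row = np.r_[w[0], np.zeros(n - 1)]
    return toeplitz(first_col, first_row)

w = np.array([0.25, 0.5, 0.25])   # a toy 3-sample "wavelet"
m = np.random.randn(10)           # a toy reflectivity model
G = conv_matrix(w, m.size)

# Matrix multiplication reproduces the linear convolution exactly.
assert np.allclose(G @ m, np.convolve(w, m))
```

Row slicing (as in `[::s]` above) then models decimation of the output; the exact column slice used with `convmtx` depends on that helper's (not shown) output layout.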
Source notebook: fifabsas/talleresfifabsas — python/Extras/Labo3/Adquisicion_programada.ipynb (MIT license)
import time
import numpy as np
import visa

rm = visa.ResourceManager()  # Create the Resource Manager
rm.list_resources()  # This lets you see what PyVISA recognizes as connected to the PC
resource_name = 'USB0::0x0699::0x0346::C033250::INSTR'  # An example name under which PyVISA identifies the instrument
fungen = rm.open_resource(resource_name)  # Open communication with the device, calling it by its name
fungen.write('*IDN?')  # Among other things, we can ask the function generator who it is, i.e. what it calls itself
print(fungen.read())
"""
Explanation: Programmed acquisition in Python
Another advantage of converting the analog world into the digital one is the possibility of programming our instruments, given that they are essentially computers. Programming means giving orders so that the computer executes certain actions, in a given order and a given number of times. But to communicate from the computer we operate with the one that will carry out those actions, the two of them need a "common language". That is called a protocol. In our case we will use Tektronix instruments, and we are lucky that this company has developed a protocol for communicating with PCs, called VISA (Virtual Instrument Software Architecture). According to the company: "is an industry-standard communication protocol. VISA is a Test & Measurement industry standard communication API (Application Programming Interface) for use with test and measurement devices. Some times called a communication driver, VISA allows for the development of programs to be bus independent. Using VISA libraries enables communication for many interfaces such as GPIB, USB, and Ethernet."
Once communication between the two computers is established, we have to figure out how to request things from the instrument in some language. We have already done part of this: we have a piece of software that takes a screenshot of the oscilloscope, grabs the data on screen and saves it to a txt file.
What we are going to do now is go beyond that "manual" software: we will program the instrument to acquire N times, save only one parameter of each acquisition, and then plot that parameter as a function of whatever we choose. With Python we can do all of that.
Function generator
The library we will use to establish the PC–instrument communication is called pyvisa. With it, we create a variable of a type called Resource Manager, through which we will request information from the instrument. We will try this out with a Tektronix function generator.
End of explanation
"""
print(fungen.query('*IDN?'))
"""
Explanation: In the steps we just took, it is important to recognize when PyVISA detects our instrument and when it does not. If we connect over USB, the resource name should indicate it (it would be different if we connected through a GPIB port or similar), and if it is an instrument it should say "INSTR" or similar. All the other numbers in the resource name identify the brand, model and serial number of each particular device. Note that the communication — sending and receiving information — is done through a write and a read. In the first, we define which string we are going to send to the device. We know these strings if we have access to the instrument's programming manual, and they are specific to each brand and sometimes to each model. In particular, the string *IDN? asks the device for its "identity"; the function generator then emits that information and stores it in a buffer, available to us if we issue a read. The equivalent of sending a string and then reading the buffer is the query function.
End of explanation
"""
# Logarithmic frequency ramp
# The first two numbers (1 and 3) are the exponents of the limits (10^1 and 10^3)
# The next one is the number of steps
for freq in np.logspace(1, 3, 20):
    fungen.write('FREQ %f' % freq)
    time.sleep(0.1)

# Linear amplitude ramp
# The first two numbers (0 and 1) are the limits.
# The next one is the number of steps
for amplitude in np.linspace(0, 1, 10):
    fungen.write('VOLT %f' % amplitude)
    time.sleep(0.1)

# Linear offset ramp
# The first two numbers (0 and 1) are the limits.
# The next one is the number of steps
for offset in np.linspace(0, 1, 10):
    fungen.write('VOLT:OFFS %f' % offset)
    time.sleep(0.1)
"""
Explanation: Besides asking it things, I can also set conditions on the function generator, such as the voltage, the frequency or the offset of the signal I want it to generate:
End of explanation
"""
fungen.close()
"""
Explanation: Note that the write function expects a string as the argument to send to the generator, which is consistent with what we said before. Which words to use, and how to write them, will always depend on the instrument we use and on its manual. It is important that, whenever we finish a measurement, we close the communication with the device.
End of explanation
"""
rm = visa.ResourceManager()
rm.list_resources()
resource_name = 'USB0::0x0699::0x0363::C065089::INSTR'
osci = rm.open_resource(resource_name)
osci.query('*IDN?')
"""
Explanation: Oscilloscope
With the oscilloscope things will be different, because it is more common to read data from it than to ask it to do things. As before, we start a Resource Manager and open communication with the oscilloscope.
End of explanation
"""
osci.write('DAT:ENC RPB')  # Remember that this may depend on the instrument used and its syntax
osci.write('DAT:WID 1')
"""
Explanation: Now, for what follows it is important to recognize that the data the oscilloscope can offer us may be written in ASCII or in binary. ASCII is a way of enumerating all known digits and keys; it is a standard for numeric representation. Binary is what it is: a number written in binary. Generally, ASCII is easier (if somewhat laborious) for a human to read, but that makes it harder to manipulate.
Binary information, on the other hand, is much harder for a human to read, but better for computations. We will ask the oscilloscope to write to us in binary.
End of explanation
"""
xze, xin, yze, ymu, yoff = osci.query_ascii_values('WFMPRE:XZE?;XIN?;YZE?;YMU?;YOFF?;', separator=';')
"""
Explanation: Next, we will need to calibrate the oscilloscope data. For that we need certain parameters:
End of explanation
"""
import matplotlib.pyplot as plt  # needed for plotting; it was not imported earlier in this notebook

data = osci.query_binary_values('CURV?', datatype='B', container=np.array)
tiempo = xze + np.arange(len(data)) * xin
plt.plot(tiempo, data)
"""
Explanation: Then we ask it for the curve on screen, and the magic happens when we plot it.
End of explanation
"""
osci.close()
"""
Explanation: And let us not forget to close the communication with the device.
End of explanation
"""
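In the snippet above only the time axis is calibrated; the raw byte values are plotted directly. The vertical scale factors from `WFMPRE` can be applied in the same way. Here is a sketch with made-up numbers — the conversion formula is the usual Tektronix-style `(data - yoff) * ymu + yze`, but check the programming manual of your particular model.

```python
import numpy as np

# Fake 8-bit samples standing in for osci.query_binary_values('CURV?', ...)
data = np.array([120, 128, 136, 144, 136, 128], dtype=float)

# Made-up scale factors standing in for the WFMPRE:XZE?;XIN?;YZE?;YMU?;YOFF? reply
xze, xin = -1e-3, 4e-6              # time of the first sample and sample interval, in seconds
yze, ymu, yoff = 0.0, 0.01, 128.0   # vertical zero, volts per digitizer level, level offset

tiempo = xze + np.arange(len(data)) * xin  # time axis, as in the notebook
volts = (data - yoff) * ymu + yze          # vertical axis converted to volts
```

Plotting `tiempo` against `volts` then gives a trace with physically meaningful axes instead of raw digitizer levels.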
Source notebook: robertclf/FAFT — FAFT_64-points_R2C/nbFAFT128_offset_xyz_3D.ipynb (BSD-3-Clause license)
import numpy as np import ctypes from ctypes import * import pycuda.gpuarray as gpuarray import pycuda.driver as cuda import pycuda.autoinit from pycuda.compiler import SourceModule import matplotlib.pyplot as plt import matplotlib.mlab as mlab import math import time %matplotlib inline """ Explanation: 3D Fast Accurate Fourier Transform with an extra gpu array for the 33th complex values End of explanation """ gridDIM = 64 size = gridDIM*gridDIM*gridDIM axes0 = 0 axes1 = 1 axes2 = 2 makeC2C = 0 makeR2C = 1 makeC2R = 1 axesSplit_0 = 0 axesSplit_1 = 1 axesSplit_2 = 2 segment_axes0 = 0 segment_axes1 = 0 segment_axes2 = 0 DIR_BASE = "/home/robert/Documents/new1/FFT/code/" # FAFT _faft128_3D = ctypes.cdll.LoadLibrary( DIR_BASE+'FAFT128_3D_R2C.so' ) _faft128_3D.FAFT128_3D_R2C.restype = int _faft128_3D.FAFT128_3D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_float, ctypes.c_float, ctypes.c_int, ctypes.c_int, ctypes.c_int, ctypes.c_int] cuda_faft = _faft128_3D.FAFT128_3D_R2C # Inv FAFT _ifaft128_3D = ctypes.cdll.LoadLibrary(DIR_BASE+'IFAFT128_3D_C2R.so') _ifaft128_3D.IFAFT128_3D_C2R.restype = int _ifaft128_3D.IFAFT128_3D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p, ctypes.c_float, ctypes.c_float, ctypes.c_int, ctypes.c_int, ctypes.c_int, ctypes.c_int] cuda_ifaft = _ifaft128_3D.IFAFT128_3D_C2R """ Explanation: Loading FFT routines End of explanation """ def Gaussian(x,mu,sigma): return np.exp( - (x-mu)**2/sigma**2/2. )/(sigma*np.sqrt( 2*np.pi )) def fftGaussian(p,mu,sigma): return np.exp(-1j*mu*p)*np.exp( - p**2*sigma**2/2. ) # Gaussian parameters mu_x = 1.5 sigma_x = 1. mu_y = 1.5 sigma_y = 1. mu_z = 1.5 sigma_z = 1. # Grid parameters x_amplitude = 5. p_amplitude = 6. 
# With the traditional method the p amplitude is fixed to: 2 * np.pi /( 2*x_amplitude )
dx = 2*x_amplitude/float(gridDIM)  # This is dx in Bailey's paper
dp = 2*p_amplitude/float(gridDIM)  # This is gamma in Bailey's paper
delta = dx*dp/(2*np.pi)

x_range = np.linspace( -x_amplitude, x_amplitude-dx, gridDIM)
p = np.linspace( -p_amplitude, p_amplitude-dp, gridDIM)

x = x_range[ np.newaxis, np.newaxis, : ]
y = x_range[ np.newaxis, :, np.newaxis ]
z = x_range[ :, np.newaxis, np.newaxis ]

f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y)*Gaussian(z,mu_z,sigma_z)

plt.imshow( f[:, :, 0], extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] )
axis_font = {'size':'24'}
plt.text( 0., 5.1, '$W$' , **axis_font)
plt.colorbar()
#plt.ylim(0,0.44)

print ' Amplitude x = ',x_amplitude
print ' Amplitude p = ',p_amplitude
print ' '
print 'mu_x = ', mu_x
print 'mu_y = ', mu_y
print 'mu_z = ', mu_z
print 'sigma_x = ', sigma_x
print 'sigma_y = ', sigma_y
print 'sigma_z = ', sigma_z
print ' '
print 'n = ', x.size
print 'dx = ', dx
print 'dp = ', dp
print ' standard fft dp = ',2 * np.pi /( 2*x_amplitude ) , ' '
print ' '
print 'delta = ', delta
print ' '
print 'The Gaussian extends to the numerical error in single precision:'
print ' min = ', np.min(f)
"""
Explanation: Initializing Data Gaussian
End of explanation
"""
# Matrix for the 33rd complex values
f33 = np.zeros( [64, 1 ,64], dtype = np.complex64 )

# Copy to GPU
if 'f_gpu' in globals():
    f_gpu.gpudata.free()
if 'f33_gpu' in globals():
    f33_gpu.gpudata.free()

f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) )
f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) )
"""
Explanation: $W$ TRANSFORM FROM AXES-0
After the transform, f_gpu[:, :32, :] contains real values and f_gpu[:, 32:, :] contains imaginary values. f33_gpu contains the 33rd
complex values End of explanation """ # Executing FFT t_init = time.time() cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeC2C, axesSplit_0 ) t_end = time.time() print 'computation time = ', t_end - t_init plt.imshow( np.append( f_gpu.get()[:, :32, :], f33_gpu.get().real, axis=1 )[32,:,:] /float(np.sqrt(size)), extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 5.2, '$Re \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(0 , x_amplitude) plt.imshow( np.append( f_gpu.get()[:, 32:, :], f33_gpu.get().imag, axis=1 )[32,:,:] /float(np.sqrt(size)), extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 5.2, '$Im \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(0 , x_amplitude) """ Explanation: Forward Transform End of explanation """ # Executing iFFT cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2C, axesSplit_0 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 ) plt.imshow( f_gpu.get()[32,:,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{xy}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,32,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, 
'$W_{xz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,:,32]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{yz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) """ Explanation: Inverse Transform End of explanation """ # Matrix for the 33th. complex values f33 = np.zeros( [64, 64, 1], dtype = np.complex64 ) # One gpu array. if 'f_gpu' in globals(): f_gpu.gpudata.free() if 'f33_gpu' in globals(): f33_gpu.gpudata.free() f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) ) f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) ) """ Explanation: $W$ TRANSFORM FROM AXES-1 After the transfom, f_gpu[:, :, :64] contains real values and f_gpu[:, :, 64:] contains imaginary values. f33_gpu contains the 33th. complex values End of explanation """ # Executing FFT t_init = time.time() cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeC2C, axesSplit_1 ) t_end = time.time() print 'computation time = ', t_end - t_init plt.imshow( np.append( f_gpu.get()[:, :, :32], f33_gpu.get().real, axis=2 )[32,:,:] /float(np.sqrt(size)), extent=[-p_amplitude , 0, -p_amplitude , p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 5.2, '$Re \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , 0) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( np.append( f_gpu.get()[:, :, 32:], f33_gpu.get().imag, axis=2 )[32,:,:] /float(np.sqrt(size)), extent=[-p_amplitude , 0, -p_amplitude , p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 
5.2, '$Im \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , 0) plt.ylim(-x_amplitude , x_amplitude-dx) """ Explanation: Forward Transform End of explanation """ # Executing iFFT cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2C, axesSplit_1 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 ) plt.imshow( f_gpu.get()[32,:,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{xy}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,32,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{xz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,:,32]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{yz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) """ Explanation: Inverse Transform End of explanation """ # Matrix for the 33th. complex values f33 = np.zeros( [1, 64, 64], dtype = np.complex64 ) # One gpu array. if 'f_gpu' in globals(): f_gpu.gpudata.free() if 'f33_gpu' in globals(): f33_gpu.gpudata.free() f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) ) f33_gpu = gpuarray.to_gpu( np.ascontiguousarray( f33 , dtype = np.complex64 ) ) """ Explanation: $W$ TRANSFORM FROM AXES-2 After the transfom, f_gpu[:64, :, :] contains real values and f_gpu[64:, :, :] contains imaginary values. f33_gpu contains the 33th. 
complex values End of explanation """ # Executing FFT t_init = time.time() cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes2, axes2, makeR2C, axesSplit_2 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_2 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_2 ) t_end = time.time() print 'computation time = ', t_end - t_init plt.imshow( np.append( f_gpu.get()[:32, :, :], f33_gpu.get().real, axis=0 )[:,:,32] /float(np.sqrt(size)), extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 5.2, '$Re \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(0 , x_amplitude-dx) plt.imshow( np.append( f_gpu.get()[32:, :, :], f33_gpu.get().imag, axis=0 )[:,:,32] /float(np.sqrt(size)), extent=[-p_amplitude , p_amplitude-dp, 0, p_amplitude-dp] ) plt.colorbar() axis_font = {'size':'24'} plt.text( 0., 5.2, '$Im \\mathcal{F}(W)$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(0 , x_amplitude-dx) """ Explanation: Forward Transform End of explanation """ # Executing iFFT cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_2 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_2 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes2, axes2, makeC2R, axesSplit_2 ) plt.imshow( f_gpu.get()[32,:,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{xy}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,32,:]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 
5.2, '$W_{xz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) plt.imshow( f_gpu.get()[:,:,32]/float(size) , extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] ) plt.colorbar() axis_font = {'size':'24'} plt.text( -1, 5.2, '$W_{yz}$', **axis_font ) plt.xlim(-x_amplitude , x_amplitude-dx) plt.ylim(-x_amplitude , x_amplitude-dx) """ Explanation: Inverse Transform End of explanation """
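The `cuda_faft` kernels wrap Bailey's fractional FFT, which evaluates $F_k = \sum_n f_n\,e^{-2\pi i\,\delta\,nk}$ for the arbitrary step product `delta = dx*dp/(2*np.pi)` printed earlier. As an illustration of the underlying algorithm (a plain-NumPy sketch, not the CUDA implementation loaded here), the sum can be turned into a convolution with Bluestein's identity $nk = \tfrac{1}{2}\left(n^2 + k^2 - (k-n)^2\right)$:

```python
import numpy as np

def frft(f, delta):
    """Fractional DFT: F[k] = sum_n f[n] * exp(-2j*pi*delta*n*k), k = 0..N-1.

    Uses Bluestein's chirp trick; the linear convolution is done naively here,
    whereas a fast implementation (like the FAFT kernels) uses zero-padded FFTs.
    """
    N = f.size
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * delta * n**2)
    a = f * chirp                          # pre-chirped input
    m = np.arange(-(N - 1), N)
    b = np.exp(1j * np.pi * delta * m**2)  # conjugate chirp kernel
    conv = np.convolve(a, b)               # full linear convolution, length 3N-2
    return chirp * conv[N - 1:2*N - 1]     # post-chirp and keep N output samples

N = 16
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# For delta = 1/N the fractional transform reduces to the ordinary DFT.
assert np.allclose(frft(f, 1.0 / N), np.fft.fft(f))
```

Choosing `delta` away from `1/N` is what decouples the momentum-grid spacing `dp` from the "standard fft dp" of `2*np.pi/(2*x_amplitude)` — which is the whole point of the FAFT approach.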
Source notebook: stevetjoa/stanford-mir — mfcc.ipynb (MIT license)
url = 'http://audio.musicinformationretrieval.com/simple_loop.wav' urllib.urlretrieve(url, filename='simple_loop.wav') """ Explanation: &larr; Back to Index Mel Frequency Cepstral Coefficients (MFCCs) The mel frequency cepstral coefficients (MFCCs) of a signal are a small set of features (usually about 10-20) which concisely describe the overall shape of a spectral envelope. In MIR, it is often used to describe timbre. Download an audio file: End of explanation """ x, fs = librosa.load('simple_loop.wav') librosa.display.waveplot(x, sr=fs) """ Explanation: Plot the audio signal: End of explanation """ IPython.display.Audio(x, rate=fs) """ Explanation: Play the audio: End of explanation """ mfccs = librosa.feature.mfcc(x, sr=fs) print(mfccs.shape) """ Explanation: librosa.feature.mfcc librosa.feature.mfcc computes MFCCs across an audio signal: End of explanation """ librosa.display.specshow(mfccs, sr=fs, x_axis='time') """ Explanation: In this case, mfcc computed 20 MFCCs over 130 frames. The very first MFCC, the 0th coefficient, does not convey information relevant to the overall shape of the spectrum. It only conveys a constant offset, i.e. adding a constant value to the entire spectrum. Therefore, many practitioners will discard the first MFCC when performing classification. For now, we will use the MFCCs as is. 
Display the MFCCs: End of explanation """ mfccs = sklearn.preprocessing.scale(mfccs, axis=1) print(mfccs.mean(axis=1)) print(mfccs.var(axis=1)) """ Explanation: Feature Scaling Let's scale the MFCCs such that each coefficient dimension has zero mean and unit variance: End of explanation """ librosa.display.specshow(mfccs, sr=fs, x_axis='time') """ Explanation: Display the scaled MFCCs: End of explanation """ hamming_window = ess.Windowing(type='hamming') spectrum = ess.Spectrum() # we just want the magnitude spectrum mfcc = ess.MFCC(numberCoefficients=13) frame_sz = 1024 hop_sz = 500 mfccs = numpy.array([mfcc(spectrum(hamming_window(frame)))[1] for frame in ess.FrameGenerator(x, frameSize=frame_sz, hopSize=hop_sz)]) print(mfccs.shape) """ Explanation: essentia.standard.MFCC We can also use essentia.standard.MFCC to compute MFCCs across a signal, and we will display them as a "MFCC-gram": End of explanation """ mfccs = sklearn.preprocessing.scale(mfccs) """ Explanation: Scale the MFCCs: End of explanation """ plt.imshow(mfccs.T, origin='lower', aspect='auto', interpolation='nearest') plt.ylabel('MFCC Coefficient Index') plt.xlabel('Frame Index') """ Explanation: Plot the MFCCs: End of explanation """
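Both librosa and Essentia build their MFCCs on top of a mel-spaced triangular filter bank. A minimal sketch of the mel↔Hz mapping and of the filter center frequencies follows — it uses the common HTK-style formula; the exact variant each library uses differs slightly, so treat this as an illustration rather than either library's implementation.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)  # HTK-style mel formula

def mel_to_hz(m):
    return 700.0 * (10.0**(m / 2595.0) - 1.0)

def mel_centers(n_filters, fmin, fmax):
    """Center frequencies (Hz) of n_filters triangular filters, equally spaced in mel."""
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2)
    return mel_to_hz(mels)[1:-1]  # drop the two edge points

centers = mel_centers(20, 0.0, 8000.0)
```

The centers are equally spaced in mel, hence increasingly far apart in Hz — which is what makes the representation perceptually motivated, and why low frequencies get finer resolution than high ones.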
Source notebook: karlnapf/shogun — doc/ipython-notebooks/metric/LMNN.ipynb (BSD-3-Clause license)
import numpy import os import shogun as sg SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') x = numpy.array([[0,0],[-1,0.1],[0.3,-0.05],[0.7,0.3],[-0.2,-0.6],[-0.15,-0.63],[-0.25,0.55],[-0.28,0.67]]) y = numpy.array([0,0,0,0,1,1,2,2]) """ Explanation: Metric Learning with the Shogun Machine Learning Toolbox By Fernando J. Iglesias Garcia (GitHub ID: iglesias) as project report for GSoC 2013 (project details). This notebook illustrates <a href="http://en.wikipedia.org/wiki/Statistical_classification">classification</a> and <a href="http://en.wikipedia.org/wiki/Feature_selection">feature selection</a> using <a href="http://en.wikipedia.org/wiki/Similarity_learning#Metric_learning">metric learning</a> in Shogun. To overcome the limitations of <a href="http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm">knn</a> with Euclidean distance as the distance measure, <a href="http://en.wikipedia.org/wiki/Large_margin_nearest_neighbor">Large Margin Nearest Neighbour</a>(LMNN) is discussed. This is consolidated by applying LMNN over the metagenomics data set. Building up the intuition to understand LMNN First of all, let us introduce LMNN through a simple example. 
For this purpose, we will be using the following two-dimensional toy data set: End of explanation """ import matplotlib.pyplot as pyplot %matplotlib inline def plot_data(feats,labels,axis,alpha=1.0): # separate features according to their class X0,X1,X2 = feats[labels==0], feats[labels==1], feats[labels==2] # class 0 data axis.plot(X0[:,0], X0[:,1], 'o', color='green', markersize=12, alpha=alpha) # class 1 data axis.plot(X1[:,0], X1[:,1], 'o', color='red', markersize=12, alpha=alpha) # class 2 data axis.plot(X2[:,0], X2[:,1], 'o', color='blue', markersize=12, alpha=alpha) # set axes limits axis.set_xlim(-1.5,1.5) axis.set_ylim(-1.5,1.5) axis.set_aspect('equal') axis.set_xlabel('x') axis.set_ylabel('y') figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) axis.set_title('Toy data set') pyplot.show() """ Explanation: That is, there are eight feature vectors where each of them belongs to one out of three different classes (identified by either 0, 1, or 2). Let us have a look at this data: End of explanation """ def make_covariance_ellipse(covariance): import matplotlib.patches as patches import scipy.linalg as linalg # the ellipse is centered at (0,0) mean = numpy.array([0,0]) # eigenvalue decomposition of the covariance matrix (w are eigenvalues and v eigenvectors), # keeping only the real part w,v = linalg.eigh(covariance) # normalize the eigenvector corresponding to the largest eigenvalue u = v[0]/linalg.norm(v[0]) # angle in degrees angle = 180.0/numpy.pi*numpy.arctan(u[1]/u[0]) # fill Gaussian ellipse at 2 standard deviation ellipse = patches.Ellipse(mean, 2*w[0]**0.5, 2*w[1]**0.5, 180+angle, color='orange', alpha=0.3) return ellipse # represent the Euclidean distance figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) ellipse = make_covariance_ellipse(numpy.eye(2)) axis.add_artist(ellipse) axis.set_title('Euclidean distance') pyplot.show() """ Explanation: In the figure above, we can see that two of the classes are represented by two points that are, for 
each of these classes, very close to each other. The third class, however, has four points that are close to each other with respect to the y-axis, but spread along the x-axis. If we were to apply kNN (k-nearest neighbors) to a data set like this, we would expect quite a few errors using the standard Euclidean distance. This is due to the fact that the spread of the data is not similar across the feature dimensions. The following piece of code plots an ellipse on top of the data set. The ellipse in this case is in fact a circumference, which helps to visualize how the Euclidean distance weights both feature dimensions equally.
End of explanation
"""
from shogun import features, MulticlassLabels

feats = features(x.T)
labels = MulticlassLabels(y.astype(numpy.float64))
"""
Explanation: A possible workaround to improve the performance of kNN on a data set like this would be to supply the kNN routine with a distance measure. For instance, in the example above a good distance measure would give more weight to the y-direction than to the x-direction to account for the large spread along the x-axis. Nonetheless, it would be nicer (and, in fact, much more useful in practice) if this distance could be learnt automatically from the data at hand. Actually, LMNN is based upon this principle: given a number of neighbours k, find the Mahalanobis distance measure which maximizes kNN accuracy (using the given value for k) in a training data set. As we usually do in machine learning, under the assumption that the training data is an accurate enough representation of the underlying process, the distance learnt will not only perform well in the training data, but also have good generalization properties. Now, let us use the LMNN class implemented in Shogun to find the distance and plot its associated ellipse. If everything goes well, we will see that the new ellipse only overlaps with the data points of the green class.
First, we need to wrap the data into Shogun's feature and label objects: End of explanation """ from shogun import LMNN # number of target neighbours per example k = 1 lmnn = LMNN(feats,labels,k) # set an initial transform as a start point of the optimization init_transform = numpy.eye(2) lmnn.put('maxiter', 2000) lmnn.train(init_transform) """ Explanation: Secondly, perform LMNN training: End of explanation """ # get the linear transform from LMNN L = lmnn.get_real_matrix('linear_transform') # square the linear transform to obtain the Mahalanobis distance matrix M = numpy.matrix(numpy.dot(L.T,L)) # represent the distance given by LMNN figure,axis = pyplot.subplots(1,1) plot_data(x,y,axis) ellipse = make_covariance_ellipse(M.I) axis.add_artist(ellipse) axis.set_title('LMNN distance') pyplot.show() """ Explanation: LMNN is an iterative algorithm. The argument given to train represents the initial state of the solution. By default, if no argument is given, then LMNN uses PCA to obtain this initial value. Finally, we retrieve the distance measure learnt by LMNN during training and visualize it together with the data: End of explanation """ # project original data using L lx = numpy.dot(L,x.T) # represent the data in the projected space figure,axis = pyplot.subplots(1,1) plot_data(lx.T,y,axis) plot_data(x,y,axis,0.3) ellipse = make_covariance_ellipse(numpy.eye(2)) axis.add_artist(ellipse) axis.set_title('LMNN\'s linear transform') pyplot.show() """ Explanation: Beyond the main idea LMNN is one of the so-called linear metric learning methods. What this means is that we can understand LMNN's output in two different ways: on the one hand, as a distance measure, this was explained above; on the other hand, as a linear transformation of the input data. Like any other linear transformation, LMNN's output can be written as a matrix, that we will call $L$. 
In other words, if the input data is represented by the matrix $X$, then LMNN can be understood as the data transformation expressed by $X'=L X$. We use the convention that each column is a feature vector; thus, the number of rows of $X$ is equal to the input dimension of the data, and the number of columns is equal to the number of vectors. So far, so good. But, if the output of the same method can be interpreted in two different ways, then there must be a relation between them! And that is precisely the case! As mentioned above, the ellipses that were plotted in the previous section represent a distance measure. This distance measure can be thought of as a matrix $M$, being the distance between two vectors $\vec{x_i}$ and $\vec{x_j}$ equal to $d(\vec{x_i},\vec{x_j})=(\vec{x_i}-\vec{x_j})^T M (\vec{x_i}-\vec{x_j})$. In general, this type of matrices are known as Mahalanobis matrices. In LMNN, the matrix $M$ is precisely the 'square' of the linear transformation $L$, i.e. $M=L^T L$. Note that a direct consequence of this is that $M$ is guaranteed to be positive semi-definite (PSD), and therefore define a valid metric. This distance measure/linear transform duality in LMNN has its own advantages. An important one is that the optimization problem can go back and forth between the $L$ and the $M$ representations, giving raise to a very efficient solution. Let us now visualize LMNN using the linear transform interpretation. In the following figure we have taken our original toy data, transform it using $L$ and plot both the before and after versions of the data together. 
End of explanation """ import numpy import matplotlib.pyplot as pyplot %matplotlib inline def sandwich_data(): from numpy.random import normal # number of distinct classes num_classes = 6 # number of points per class num_points = 9 # distance between layers, the points of each class are in a layer dist = 0.7 # memory pre-allocation x = numpy.zeros((num_classes*num_points, 2)) y = numpy.zeros(num_classes*num_points) for i,j in zip(range(num_classes), range(-num_classes//2, num_classes//2 + 1)): for k,l in zip(range(num_points), range(-num_points//2, num_points//2 + 1)): x[i*num_points + k, :] = numpy.array([normal(l, 0.1), normal(dist*j, 0.1)]) y[i*num_points:i*num_points + num_points] = i return x,y def plot_sandwich_data(x, y, axis=pyplot, cols=['r', 'b', 'g', 'm', 'k', 'y']): for idx,val in enumerate(numpy.unique(y)): xi = x[y==val] axis.scatter(xi[:,0], xi[:,1], s=50, facecolors='none', edgecolors=cols[idx]) x, y = sandwich_data() figure, axis = pyplot.subplots(1, 1, figsize=(5,5)) plot_sandwich_data(x, y, axis) axis.set_aspect('equal') axis.set_title('"Sandwich" toy data set') axis.set_xlabel('x') axis.set_ylabel('y') pyplot.show() """ Explanation: In the figure above, the transparent points represent the original data and are shown to ease the visualization of the LMNN transformation. Note also that the ellipse plotted is the one corresponding to the common Euclidean distance. This is actually an important consideration: if we think of LMNN as a linear transformation, the distance considered in the projected space is the Euclidean distance, and no any Mahalanobis distance given by M. To sum up, we can think of LMNN as a linear transform of the input space, or as method to obtain a distance measure to be used in the input space. It is an error to apply both the projection and the learnt Mahalanobis distance. 
Neighbourhood graphs
An alternative way to visualize the effect of using the distance found by LMNN together with kNN consists of using neighbourhood graphs. Despite the fancy name, these are actually pretty simple. The idea is just to construct a graph in the Euclidean space, where the points in the data set are the nodes of the graph, and a directed edge from one point to another denotes that the destination node is the 1-nearest neighbour of the origin node. Of course, it is also possible to work with neighbourhood graphs where $k \gt 1$. Here we have taken the simplification of $k = 1$ so that the forthcoming plots are not too cluttered.
Let us define a data set for which the Euclidean distance performs considerably badly. In this data set there are several levels or layers in the y-direction. Each layer is populated by points that belong to the same class spread along the x-direction. The layers are close to each other in pairs, whereas the spread along x is larger. Let us define a function to generate such a data set and have a look at it.
End of explanation
"""
import shogun as sg
from shogun import KNN, LMNN, features, MulticlassLabels

def plot_neighborhood_graph(x, nn, axis=pyplot, cols=['r', 'b', 'g', 'm', 'k', 'y']):
    for i in range(x.shape[0]):
        xs = [x[i,0], x[nn[1,i], 0]]
        ys = [x[i,1], x[nn[1,i], 1]]
        axis.plot(xs, ys, cols[int(y[i])])

feats = features(x.T)
labels = MulticlassLabels(y)

fig, axes = pyplot.subplots(1, 3, figsize=(15, 10))

# use k = 2 instead of 1 because otherwise the method nearest_neighbors just returns the same
# points as their own 1-nearest neighbours
k = 2

distance = sg.distance('EuclideanDistance')
distance.init(feats, feats)
knn = KNN(k, distance, labels)

plot_sandwich_data(x, y, axes[0])
plot_neighborhood_graph(x, knn.nearest_neighbors(), axes[0])
axes[0].set_title('Euclidean neighbourhood in the input space')

lmnn = LMNN(feats, labels, k)
# set a large number of iterations.
The data set is small so it does not cost a lot, and this way
# we ensure a robust solution
lmnn.put('maxiter', 3000)
lmnn.train()
knn.put('distance', lmnn.get_distance())

plot_sandwich_data(x, y, axes[1])
plot_neighborhood_graph(x, knn.nearest_neighbors(), axes[1])
axes[1].set_title('LMNN neighbourhood in the input space')

# plot features in the transformed space, with the neighbourhood graph computed using the Euclidean distance
L = lmnn.get_real_matrix('linear_transform')
xl = numpy.dot(x, L.T)

feats = features(xl.T)
dist = sg.distance('EuclideanDistance')
dist.init(feats, feats)
knn.put('distance', dist)

plot_sandwich_data(xl, y, axes[2])
plot_neighborhood_graph(xl, knn.nearest_neighbors(), axes[2])
axes[2].set_ylim(-3, 2.5)
axes[2].set_title('Euclidean neighbourhood in the transformed space')

[axes[i].set_xlabel('x') for i in range(len(axes))]
[axes[i].set_ylabel('y') for i in range(len(axes))]
[axes[i].set_aspect('equal') for i in range(len(axes))]

pyplot.show()
"""
Explanation: Notice how all the lines that go across the different layers in the left-hand-side figure have disappeared in the figure in the middle. Indeed, LMNN did a pretty good job here. The right-hand-side figure shows the disposition of the points in the transformed space, from which the neighbourhoods in the middle figure should be clear.
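The edge set of a 1-nearest-neighbour graph like the ones above is easy to sketch outside of Shogun. A brute-force version in plain numpy (purely illustrative, not the method used by `KNN.nearest_neighbors`) looks like this:

```python
import numpy as np

def one_nearest_neighbours(x):
    """Index of the 1-nearest neighbour of every row of x (Euclidean),
    excluding the point itself; these indices are the graph's edges."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)  # pairwise distances
    np.fill_diagonal(d, np.inf)  # a point must not be its own neighbour
    return np.argmin(d, axis=1)

# Tiny example: three points on a line; the edges are easy to verify by eye.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0]])
print(one_nearest_neighbours(pts))  # -> [1 0 1]
```

Replacing the Euclidean distance in this sketch with a learnt Mahalanobis distance is exactly what changes the edges in the middle figure.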
In any case, this toy example is just an illustration to give an idea of the power of LMNN. In the next section we will see how, after applying a couple of methods for feature normalization (e.g. scaling, whitening), the Euclidean distance is not so sensitive to different feature scales.
Real data sets
Feature selection in metagenomics
Metagenomics is a modern field devoted to the study of the DNA of microorganisms. The data set we have chosen for this section contains information about three different types of apes; in particular, gorillas, chimpanzees, and bonobos. Taking an approach based on metagenomics, the main idea is to study the DNA of the microorganisms (e.g. bacteria) which live inside the body of the apes. Owing to the many chemical reactions produced by these microorganisms, it is not only the DNA of the host itself that is important when studying, for instance, sickness or health, but also the DNA of the microorganism inhabitants.
First of all, let us load the ape data set. This data set contains features taken from the bacteria inhabiting the gut of the apes.
End of explanation
"""
from shogun import CSVFile, features, MulticlassLabels

ape_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/fm_ape_gut.dat')))
ape_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'multiclass/label_ape_gut.dat')))
"""
Explanation: It is of course important to have a good insight into the data we are dealing with. For instance, how many examples and different features do we have?
End of explanation
"""
print('Number of examples = %d, number of features = %d.' % (ape_features.get_num_vectors(), ape_features.get_num_features()))
"""
Explanation: So, 1472 features! That is quite a lot of features indeed. In other words, the feature vectors at hand lie in a 1472-dimensional space. We cannot visualize how the feature vectors look in the input feature space. However, in order to gain a little more understanding of the data, we can apply dimension reduction, embed the feature vectors in a two-dimensional space, and plot the vectors in the embedded space. To this end, we are going to use one of the many methods for dimension reduction included in Shogun. In this case, we are using t-distributed stochastic neighbour embedding (or t-SNE). This method is particularly suited to produce low-dimensional embeddings (two or three dimensions) that are straightforward to visualize.
End of explanation
"""
def visualize_tdsne(features, labels):
    from shogun import TDistributedStochasticNeighborEmbedding

    converter = TDistributedStochasticNeighborEmbedding()
    converter.put('target_dim', 2)
    converter.put('perplexity', 25)

    embedding = converter.embed(features)

    import matplotlib.pyplot as pyplot
    %matplotlib inline

    x = embedding.get_real_matrix('feature_matrix')
    y = labels.get_real_vector('labels')

    pyplot.scatter(x[0, y==0], x[1, y==0], color='green')
    pyplot.scatter(x[0, y==1], x[1, y==1], color='red')
    pyplot.scatter(x[0, y==2], x[1, y==2], color='blue')
    pyplot.show()

visualize_tdsne(ape_features, ape_labels)
"""
Explanation: In the figure above, the green points represent chimpanzees, the red ones bonobos, and the blue points gorillas. Given the results in the figure, we can rapidly draw the conclusion that the three classes of apes are somewhat easy to discriminate in the data set, since the classes are more or less well separated in two dimensions.
Note that t-SNE uses randomness in the embedding process. Thus, the figure resulting from the experiment in the previous block of code will differ across executions. Feel free to play around and observe the results after different runs! After this, it should be clear that the bonobos most of the time form a very compact cluster, whereas the chimpanzee and gorilla clusters are more spread out. Also, there tends to be a chimpanzee (a green point) closer to the gorillas' cluster. This is probably an outlier in the data set.
Even before applying LMNN to the ape gut data set, let us apply kNN classification and study how it performs using the typical Euclidean distance. Furthermore, since this data set is rather small in terms of number of examples, the kNN error may vary considerably (I have observed variation of almost 20% a few times) across different runs. To get a robust estimate of how kNN performs in the data set, we will perform cross-validation using Shogun's framework for evaluation. This will give us a reliable result regarding how well kNN performs in this data set.
End of explanation
"""
import shogun as sg
from shogun import KNN
from shogun import StratifiedCrossValidationSplitting, CrossValidation
from shogun import CrossValidationResult, MulticlassAccuracy

# set up the classifier
knn = KNN()
knn.put('k', 3)
knn.put('distance', sg.distance('EuclideanDistance'))

# set up 5-fold cross-validation
splitting = StratifiedCrossValidationSplitting(ape_labels, 5)

# evaluation method
evaluator = MulticlassAccuracy()
cross_validation = CrossValidation(knn, ape_features, ape_labels, splitting, evaluator)

# locking is not supported for kNN, deactivate it to avoid an inoffensive warning
cross_validation.put('m_autolock', False)

# number of experiments, the more we do, the less variance in the result
num_runs = 200
cross_validation.put('num_runs', num_runs)

# perform cross-validation and print the result!
result = cross_validation.evaluate()
result = CrossValidationResult.obtain_from_generic(result)
print('kNN mean accuracy in a total of %d runs is %.4f.' % (num_runs, result.get_real('mean')))
"""
Explanation: Finally, we can say that kNN actually performs pretty well in this data set. The average test classification error is less than 2%. This error rate is already low and we should not really expect a significant improvement from applying LMNN. This ought not to be a surprise. Recall that the points in this data set have more than one thousand features and, as we saw before in the dimension reduction experiment, only two dimensions in an embedded space were enough to discern arguably well the chimpanzees, gorillas and bonobos.
Note that we have used stratified splitting for cross-validation.
Stratified splitting divides the folds used during cross-validation so that the proportion of the classes in the initial data set is approximately maintained for each of the folds. This is particularly useful in skewed data sets, where the number of examples among classes varies significantly.
Nonetheless, LMNN may still turn out to be very useful in a data set like this one. Making a small modification of the vanilla LMNN algorithm, we can enforce that the linear transform found by LMNN is diagonal. This means that LMNN can be used to weight each of the features and, once the training is performed, read from these weights which features are relevant to apply kNN and which ones are not. This is indeed a form of feature selection.
Using Shogun, it is extremely easy to switch to this so-called diagonal mode for LMNN: just call the method set_diagonal(use_diagonal) with use_diagonal set to True.
The following experiment takes about five minutes to complete (using Shogun Release, i.e. compiled with optimizations enabled). This is mostly due to the high dimension of the data (1472 features) and the fact that, during training, LMNN has to compute many outer products of feature vectors, which is a computation whose time complexity is proportional to the square of the number of features.
For illustration purposes of this notebook, in the following cell we are just going to use a small subset of all the features so that the training finishes faster.
End of explanation
"""
from shogun import LMNN
import numpy

# to make training faster, use a portion of the features
fm = ape_features.get_real_matrix('feature_matrix')
ape_features_subset = features(fm[:150, :])

# number of target neighbours in LMNN, here we just use the same value that was used for KNN before
k = 3

lmnn = LMNN(ape_features_subset, ape_labels, k)
lmnn.put('m_diagonal', True)
lmnn.put('maxiter', 1000)
init_transform = numpy.eye(ape_features_subset.get_num_features())
lmnn.train(init_transform)

diagonal = numpy.diag(lmnn.get_real_matrix('linear_transform'))
print('%d out of %d elements are non-zero.' % (numpy.sum(diagonal != 0), diagonal.size))
"""
Explanation: So only 64 out of the 150 first features are important according to the resulting transform!
The rest of them have been given a weight exactly equal to zero, even though all of the features were weighted equally, with a value of one, at the beginning of the training. In fact, if all the 1472 features were used, only about 158 would have received a non-zero weight. Please, feel free to experiment using all the features!
It is a fair question to ask how we knew that the maximum number of iterations in this experiment should be around 1200 iterations. Well, the truth is that we know this only because we have run this experiment with this same data beforehand, and we know that after this number of iterations the algorithm has converged. This is not ideal, and the ideal case would be if one could completely forget about this parameter, so that LMNN uses as many iterations as it needs until it converges. Nevertheless, this is not practical for at least two reasons:

If you are dealing with many examples or with very high dimensional feature vectors, you might not want to wait until the algorithm converges; instead, you may want to have a look at what LMNN has found before it has completely converged.
As with any other algorithm based on gradient descent, the termination criteria can be tricky. Let us illustrate this further:
End of explanation
"""
import matplotlib.pyplot as pyplot
%matplotlib inline

statistics = lmnn.get_statistics()

pyplot.plot(statistics.obj.get())
pyplot.grid(True)
pyplot.xlabel('Number of iterations')
pyplot.ylabel('LMNN objective')
pyplot.show()
"""
Explanation: During approximately the first three hundred iterations, there is not much variation in the objective. In other words, the objective curve is pretty much flat.
If we are not careful and use termination criteria that are not demanding enough, training could be stopped at this point. This would be wrong, and might lead to terrible results, as the training has clearly not converged yet at that point.
In order to avoid disastrous situations, in Shogun we have implemented LMNN with really demanding criteria for automatic termination of the training process. Nevertheless, it is possible to tune the termination criteria using the methods set_stepsize_threshold and set_obj_threshold. These methods can be used to modify the lower bound required in the step size and the increment in the objective (relative to its absolute value), respectively, to stop training. Also, it is possible to set a hard upper bound on the number of iterations using set_maxiter, as we have done above. In case the internal termination criteria did not fire before the maximum number of iterations was reached, you will receive a warning message, similar to the one shown above. This does not mean that the training went wrong, but in this event it is strongly recommended to have a look at the objective plot as we have done in the previous block of code.
Multiclass classification
In addition to feature selection, LMNN can of course be used for multiclass classification. I like to think about LMNN in multiclass classification as a way to empower kNN. That is, the idea is basically to apply kNN using the distance found by LMNN, in contrast with using one of the other most common distances, such as the Euclidean one. To this end we will use the wine data set from the UCI Machine Learning repository.
End of explanation
"""
from shogun import CSVFile, features, MulticlassLabels

wine_features = features(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')))
wine_labels = MulticlassLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')))

assert(wine_features.get_num_vectors() == wine_labels.get_num_labels())
print('%d feature vectors with %d features from %d different classes.' % (wine_features.get_num_vectors(), \
       wine_features.get_num_features(), wine_labels.get_num_classes()))
"""
Explanation: First, let us evaluate the performance of kNN in this data set using the same cross-validation setting used in the previous section:
End of explanation
"""
from shogun import KNN, EuclideanDistance
from shogun import StratifiedCrossValidationSplitting, CrossValidation
from shogun import CrossValidationResult, MulticlassAccuracy
import numpy

# kNN classifier
k = 5
knn = KNN()
knn.put('k', k)
knn.put('distance', EuclideanDistance())

splitting = StratifiedCrossValidationSplitting(wine_labels, 5)
evaluator = MulticlassAccuracy()
cross_validation = CrossValidation(knn, wine_features, wine_labels, splitting, evaluator)
cross_validation.put('m_autolock', False)
num_runs = 200
cross_validation.put('num_runs', num_runs)

result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

euclidean_means = numpy.zeros(3)
euclidean_means[0] = result.get_real('mean')

print('kNN accuracy with the Euclidean distance %.4f.' % result.get_real('mean'))
"""
Explanation: Secondly, we will use LMNN to find a distance measure and use it with kNN:
End of explanation
"""
from shogun import LMNN

# train LMNN
lmnn = LMNN(wine_features, wine_labels, k)
lmnn.put('maxiter', 1500)
lmnn.train()

# evaluate kNN using the distance learnt by LMNN
knn.set_distance(lmnn.get_distance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

lmnn_means = numpy.zeros(3)
lmnn_means[0] = result.get_real('mean')

print('kNN accuracy with the distance obtained by LMNN %.4f.' % result.get_real('mean'))
"""
Explanation: The warning is fine in this case; we have made sure that the objective variation was really small after 1500 iterations. In any case, do not hesitate to check it yourself by studying the objective plot, as shown in the previous section.
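A convergence check of the kind described here can be sketched as a small helper that monitors the relative change of the objective between iterations. This is an illustrative heuristic only, not Shogun's actual internal termination test:

```python
def has_converged(objective, rel_tol=1e-5):
    """Heuristic convergence check: has the relative change of the
    objective over the last step dropped below rel_tol?"""
    if len(objective) < 2:
        return False
    last, prev = objective[-1], objective[-2]
    return abs(last - prev) <= rel_tol * abs(last)

# A made-up objective trace that flattens out.
trace = [100.0, 60.0, 55.0, 54.9, 54.8999]
print(has_converged(trace))      # -> True  (relative change is ~1.8e-6)
print(has_converged(trace[:3]))  # -> False (still dropping by ~9%)
```

With traces like the LMNN objective plotted above, such a check would fire only once the curve is genuinely flat, which is the behaviour the demanding thresholds aim for.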
As the results point out, LMNN really helps here to achieve better classification performance. However, this comparison is not entirely fair, since the Euclidean distance is very sensitive to the scaling that different feature dimensions may have, whereas LMNN can adjust to this during training. Let us have a closer look at this fact.
Next, we are going to retrieve the feature matrix and see what the maxima and minima are for every dimension.
End of explanation
"""
print('minima = ' + str(numpy.min(wine_features, axis=1)))
print('maxima = ' + str(numpy.max(wine_features, axis=1)))
"""
Explanation: Examine the second and the last dimensions, for instance. The second dimension has values ranging from 0.74 to 5.8, while the values of the last dimension range from 278 to 1680. This will cause the Euclidean distance to work especially badly in this data set. You can see this by considering that the total distance between two points will almost certainly just take into account the contributions of the dimensions with the largest range.
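The dominance of the large-range dimension can be made concrete with a tiny numeric example. The two vectors below are made up; only the ranges 0.74-5.8 and 278-1680 are taken from the printout above:

```python
import numpy as np

# Two wine-like feature vectors: a small-range dimension (first) and a large-range one (second).
a = np.array([0.74, 278.0])
b = np.array([5.80, 1680.0])

contrib = (a - b) ** 2            # per-dimension contribution to the squared Euclidean distance
print(contrib / contrib.sum())    # the large-range dimension contributes >99.99%

# After min-max rescaling to [0, 1] both dimensions contribute equally.
# (With only two points the rescaling is degenerate, but it illustrates the point.)
lo = np.minimum(a, b)
span = np.abs(a - b)
a_s, b_s = (a - lo) / span, (b - lo) / span
print((a_s - b_s) ** 2)           # -> [1. 1.]
```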
In order to produce a fairer comparison, we will rescale the data so that all the feature dimensions are within the interval [0,1]. Luckily, there is a preprocessor class in Shogun that makes this straightforward.
End of explanation
"""
from shogun import RescaleFeatures

# preprocess features so that all of them vary within [0,1]
preprocessor = RescaleFeatures()
preprocessor.init(wine_features)
wine_features.add_preprocessor(preprocessor)
wine_features.apply_preprocessor()

# sanity check
assert(numpy.min(wine_features) >= 0.0 and numpy.max(wine_features) <= 1.0)

# perform kNN classification after the feature rescaling
knn.put('distance', EuclideanDistance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

euclidean_means[1] = result.get_real('mean')

print('kNN accuracy with the Euclidean distance after feature rescaling %.4f.' % result.get_real('mean'))

# train kNN in the new features and classify with kNN
lmnn.train()
knn.put('distance', lmnn.get_distance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

lmnn_means[1] = result.get_real('mean')

print('kNN accuracy with the distance obtained by LMNN after feature rescaling %.4f.' % result.get_real('mean'))
"""
Explanation: Another preprocessing technique that can be applied to the data is called whitening. Whitening, which is explained in an article on Wikipedia, transforms the covariance matrix of the data into the identity matrix.
End of explanation
"""
import scipy.linalg as linalg

# shorthand for the feature matrix -- this makes a copy of the feature matrix
data = wine_features.get_real_matrix('feature_matrix')

# remove mean
data = data.T
data -= numpy.mean(data, axis=0)

# compute the square root of the covariance matrix and its inverse
M = linalg.sqrtm(numpy.cov(data.T))

# keep only the real part, although the imaginary part that pops up in the sqrtm operation should be equal to zero
N = linalg.inv(M).real

# apply whitening transform
white_data = numpy.dot(N, data.T)
wine_white_features = features(white_data)
"""
Explanation: The covariance matrices before and after the transformation can be compared to see that the covariance really becomes the identity matrix.
End of explanation
"""
import matplotlib.pyplot as pyplot
%matplotlib inline

fig, axarr = pyplot.subplots(1,2)
axarr[0].matshow(numpy.cov(wine_features))
axarr[1].matshow(numpy.cov(wine_white_features))
pyplot.show()
"""
Explanation: Finally, we evaluate again the performance obtained with kNN using the Euclidean distance and the distance found by LMNN using the whitened features.
End of explanation
"""
wine_features = wine_white_features

# perform kNN classification after whitening
knn.set_distance(EuclideanDistance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

euclidean_means[2] = result.get_real('mean')

print('kNN accuracy with the Euclidean distance after whitening %.4f.'
% result.get_real('mean'))

# train kNN in the new features and classify with kNN
lmnn.train()
knn.put('distance', lmnn.get_distance())
result = CrossValidationResult.obtain_from_generic(cross_validation.evaluate())

lmnn_means[2] = result.get_real('mean')

print('kNN accuracy with the distance obtained by LMNN after whitening %.4f.' % result.get_real('mean'))
"""
Explanation: As can be seen, whitening the features did not really help in this data set compared with only applying feature rescaling; the accuracy was already rather high after rescaling. In any case, it is good to know that this transformation exists, as it can become useful with other data sets, or before applying other machine learning algorithms.
Let us summarize the results obtained in this section with a bar chart grouping the accuracy results by distance (Euclidean or the one found by LMNN), and feature preprocessing: End of explanation """
repo_name: hetaodie/hetaodie.github.io
path: assets/media/uda-ml/fjd/ccjl/层次聚类/.ipynb_checkpoints/Hierarchical Clustering Lab-zh-checkpoint.ipynb
license: mit
from sklearn import datasets

iris = datasets.load_iris()
"""
Explanation: Hierarchical Clustering Lab
In this notebook, we will use sklearn to perform hierarchical clustering on the Iris dataset. The dataset contains 4 dimensions/attributes and 150 samples. Each sample is labeled as one of three iris species.
In this exercise, we will ignore the labels and cluster based on the attributes, then compare the results of the different hierarchical clustering techniques with the actual labels to see which technique works best in this scenario. Afterwards, we will visualize the resulting cluster hierarchy.
1. Importing the Iris dataset
End of explanation
"""
iris.data[:10]

iris.target
"""
Explanation: Look at the first 10 samples in the dataset
End of explanation
"""
from sklearn.cluster import AgglomerativeClustering

# Hierarchical clustering
# Ward is the default linkage algorithm, so we'll start with that
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(iris.data)
"""
Explanation: 2. Clustering
Let us now perform hierarchical clustering using sklearn's AgglomerativeClustering
End of explanation
"""
# Hierarchical clustering using complete linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
complete_pred = complete.fit_predict(iris.data)

# Hierarchical clustering using average linkage
# TODO: Create an instance of AgglomerativeClustering with the appropriate parameters
avg = AgglomerativeClustering(n_clusters=3, linkage="average")
# Fit & predict
# TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels
avg_pred = avg.fit_predict(iris.data)
"""
Explanation: Let us also try complete linkage and average linkage.
Exercise:
* Perform hierarchical clustering with complete linkage, and store the predicted labels in the variable complete_pred
* Perform hierarchical clustering with average linkage, and store the predicted labels in the variable avg_pred
Note: have a look at the AgglomerativeClustering documentation to find the appropriate value to pass as the linkage value
End of explanation
"""
from sklearn.metrics import adjusted_rand_score

ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
"""
Explanation: To judge which clustering result matches the original labels of the samples best, we can use adjusted_rand_score, an external cluster validation index with a score between -1 and 1, where 1 means the two clusterings group the samples in the dataset in exactly the same way (regardless of the label assigned to each cluster).
Cluster validation indices are discussed later in the course.
End of explanation
"""
# TODO: Calculate the adjusted Rand score for the complete linkage clustering labels
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)

# TODO: Calculate the adjusted Rand score for the average linkage clustering labels
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
"""
Explanation: Exercise:
* Compute the adjusted Rand scores of the clusterings obtained with complete linkage and average linkage
End of explanation
"""
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
"""
Explanation: Which algorithm has the higher adjusted Rand score?
End of explanation
"""
iris.data[:15]
"""
Explanation: 3. The effect of normalization on clustering
Can we improve this clustering result?
Let us take another look at the dataset
End of explanation
"""
from sklearn import preprocessing

normalized_X = preprocessing.normalize(iris.data)
normalized_X[:10]
"""
Explanation: Looking at the dataset, we can see that the values of the fourth column are smaller than those of the other columns, so its variance counts for less in the clustering process (which is based on distance). Let us normalize the dataset so that every dimension lies between 0 and 1 and the dimensions carry equal weight in the clustering process.
This is done by subtracting the minimum from each column and then dividing by the range.
sklearn provides a utility called preprocessing.normalize() that helps us with this step
End of explanation
"""
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(normalized_X)

complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
complete_pred = complete.fit_predict(normalized_X)

avg = AgglomerativeClustering(n_clusters=3, linkage="average")
avg_pred = avg.fit_predict(normalized_X)

ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)

print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
"""
Explanation: Now all the columns are in the range between 0 and 1. Does clustering the dataset after this transformation produce a better clustering (one that matches the original labels of the samples better)?
End of explanation
"""
# Import scipy's linkage function to conduct the clustering
from scipy.cluster.hierarchy import linkage

# Specify the linkage type. Scipy accepts 'ward', 'complete', 'average', as well as other values
# Pick the one that resulted in the highest Adjusted Rand Score
linkage_type = 'ward'

linkage_matrix = linkage(normalized_X, linkage_type)
"""
Explanation: 4.
Dendrogram visualization with scipy
Let us visualize the highest-scoring clustering result.
To do that, we need to use Scipy's linkage function to perform the clustering again, so that we obtain the linkage matrix we will later use to visualize the hierarchy
End of explanation
"""
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(22,18))

# plot using 'dendrogram()'
dendrogram(linkage_matrix)

plt.show()
"""
Explanation: Plot using scipy's dendrogram function
End of explanation
"""
import seaborn as sns

sns.clustermap(normalized_X, figsize=(12,18), method=linkage_type, cmap='viridis')

# Expand figsize to a value like (18, 50) if you want the sample labels to be readable
# Drawback is that you'll need more scrolling to observe the dendrogram
plt.show()
"""
Explanation: 5. Visualization with Seaborn's clustermap
python's seaborn plotting library can draw a clustermap, a more detailed dendrogram that also visualizes the dataset. It performs the clustering as well, so we only need to pass it the dataset and the linkage type we want, and it will use scipy to do the clustering in the background
End of explanation
"""
repo_name: VlachosGroup/VlachosGroupAdditivity
path: docs/source/WorkshopJupyterNotebooks/OpenMKM_demo/batch/batch.ipynb
license: mit
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 500
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
"""
Explanation: Simulating Batch Reactor
Here a batch reactor simulation is demoed with a pure gas-phase mechanism. The gas-phase mechanism is called GRI-Mech v3.0, from the University of California, Berkeley. GRI-Mech 3.0 is an optimized mechanism designed to model natural gas combustion, including NO formation and reburn chemistry. The mechanism consists of 53 species and 325 reactions. For more details, refer to http://combustion.berkeley.edu/gri-mech/version30/text30.html.
For this demo, we are interested in hydrogen detonation in a batch reactor under different operating conditions. The conventional overall reaction is written as 2H<sub>2</sub> + O<sub>2</sub> = 2H<sub>2</sub>O.
Note: The actual simulations are executed from the command line. In this Jupyter notebook, analysis of the resulting data is carried out.
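As a quick aside, the overall reaction above can be checked for mass conservation with rounded atomic masses. This is just a sanity check on the stoichiometry, not part of the OpenMKM workflow:

```python
# Mass balance of 2 H2 + O2 = 2 H2O with rounded atomic masses (g/mol).
m_H, m_O = 1.008, 15.999

m_H2 = 2 * m_H
m_O2 = 2 * m_O
m_H2O = 2 * m_H + m_O

reactants = 2 * m_H2 + m_O2   # 2 mol H2 + 1 mol O2
products = 2 * m_H2O          # 2 mol H2O
assert abs(reactants - products) < 1e-9
print(round(reactants, 2))    # -> 36.03 (g per 2 mol of water formed)
```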
End of explanation
"""
ls adiab/*.out

def read_file(fname):
    with open(fname) as fp:
        lines = fp.readlines()
    for line in lines:
        print(line)

read_file("adiab/kf.out")
"""
Explanation: 1. Adiabatic batch reactor
Under adiabatic conditions, no heat is supplied to or removed from the reactor. The temperature within the reactor is allowed to change. We try to understand the detonation process via the mole fractions of the reactants, products, and intermediates.
Diagnostic Data Files
First let's look at the diagnostic files, which have the .out extension.

species.out: Lists the species participating in the mechanism, phase and the composition
Hform.out, Sform.out: Formation enthalpy and entropy of species
reactions.out: List of reactions
Hrxn.out, Srxn.out, Grxn.out: Enthalpy, entropy, and Gibbs energies of reactions
kc.out, kf.out, kr.out: Equilibrium constant, forward and reverse rate constants.

End of explanation
"""
ls adiab/*.csv

df = pd.read_csv(os.path.join('adiab', 'gas_mole_tr.csv'))
df.columns = df.columns.str.strip()
df["t_ms"] = df["t(s)"]*1e3

plt.clf()
ax1 = plt.subplot(1, 1, 1)
ax1.plot('t_ms', 'H', data=df, marker='^', markersize=0.5, label="H mole frac")
ax1.plot('t_ms', 'OH', data=df, marker='v', markersize=0.5, label="OH mole frac")
ax1.plot('t_ms', 'H2O', data=df, marker='*', markersize=0.5, label="H2O mole frac")
ax1.plot('t_ms', 'H2', data=df, marker='o', markersize=0.5, label="H2 mole frac")
ax1.plot('t_ms', 'O2', data=df, marker='<', markersize=0.5, label="O2 mole frac")
ax1.set_xlabel('Time (ms)')
ax1.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax1.set_ylabel('Mole Fraction')
#ax1.set_xlim([0,1])
ax1.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('GRI30_Hdetonation_adiab_mole.png', dpi=500)

plt.clf()
ax1 = plt.subplot(1, 1, 1)
ax1.plot('t_ms', 'H', data=df, marker='^', markersize=0.5, label="H mole frac")
ax1.plot('t_ms', 'OH', data=df, marker='v', markersize=0.5, label="OH mole frac")
ax1.plot('t_ms', 'H2O', data=df, marker='*', markersize=0.5, label="H2O mole frac")
ax1.plot('t_ms', 'H2', data=df, marker='o', markersize=0.5, label="H2 mole frac")
ax1.plot('t_ms', 'O2', data=df, marker='<', markersize=0.5, label="O2 mole frac")
ax1.set_xlabel('Time (ms)')
ax1.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax1.set_ylabel('Mole Fraction')
#ax1.set_xlim([0,1])
ax1.legend(loc="upper left", bbox_to_anchor=(1,1))
ax1.set_xlim(0.25,0.4)
plt.tight_layout()
plt.savefig('GRI30_Hdetonation_adiab_mole_zoomed.png', dpi=500)

df_isothermal = pd.read_csv(os.path.join('isother', 'gas_mole_tr.csv'))
df_isothermal.columns = df_isothermal.columns.str.strip()
df_isothermal["t_ms"] = df_isothermal["t(s)"]*1e3

plt.clf()
ax = plt.subplot(1, 1, 1)
ax.plot('t_ms', 'H', data=df_isothermal, marker='^', markersize=0.5, label="H mole frac")
ax.plot('t_ms', 'OH', data=df_isothermal, marker='v', markersize=0.5, label="OH mole frac")
ax.plot('t_ms', 'H2O', data=df_isothermal, marker='*', markersize=0.5, label="H2O mole frac")
ax.plot('t_ms', 'H2', data=df_isothermal, marker='o', markersize=0.5, label="H2 mole frac")
ax.plot('t_ms', 'O2', data=df_isothermal, marker='<', markersize=0.5, label="O2 mole frac")
ax.set_xlabel('Time (ms)')
ax.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax.set_ylabel('Mole Fraction')
ax.set_xlim(0.25,0.4)
#ax1.set_xlim([0,1])
ax.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('GRI30_H-detonation_isother_mole.png', dpi=500)
"""
Explanation: Data Files
The files are given as _ss.csv and _tr.csv or _ss.dat and _tr.dat depending on the output format selected. _tr indicates transient output, and _ss indicates steady state.

gas_mass_, gas_mole_: Lists the mass fractions and mole fractions of gas phase species respectively
gas_msdot_: Production rate of the gas phase species from the surface.
surf_cov_: Coverages of the surface species
rctr_state: State of the reactor (Temperature, Pressure, Density, Internal Energy)
End of explanation
"""
adiab_state_df = pd.read_csv(os.path.join('adiab','rctr_state_tr.csv'))
isotherm_state_df = pd.read_csv(os.path.join('isother','rctr_state_tr.csv'))
adiab_state_df.columns = adiab_state_df.columns.str.strip()
isotherm_state_df.columns = isotherm_state_df.columns.str.strip()
isotherm_state_df["t_ms"] = isotherm_state_df["t(s)"]*1e3
adiab_state_df["t_ms"] = adiab_state_df["t(s)"]*1e3

plt.clf()
ax_comp = plt.subplot(1, 1, 1)
ax_comp.plot('t_ms', 'Temperature(K)', data=isotherm_state_df, marker='^', markersize=0.5, label="Isothermal T")
ax_comp.plot('t_ms', 'Temperature(K)', data=adiab_state_df, marker='v', markersize=0.5, label="Adiabatic T")
ax_comp.set_xlabel('Time (ms)')
ax_comp.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax_comp.set_ylabel('Temp (K)')
ax_comp.set_ylim([500,3000])
ax_comp.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('GRI30_H-detonation_T-comp.png', dpi=500)

plt.clf()
ax_comp = plt.subplot(1, 1, 1)
ax_comp.plot('t_ms', 'Pressure(Pa)', data=isotherm_state_df, marker='^', markersize=0.5, label="Isothermal P")
ax_comp.plot('t_ms', 'Pressure(Pa)', data=adiab_state_df, marker='v', markersize=0.5, label="Adiabatic P")
ax_comp.set_xlabel('Time (ms)')
ax_comp.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
ax_comp.set_ylabel('Press (Pa)')
#ax_comp.set_ylim([500,3000])
ax_comp.legend(loc="upper left", bbox_to_anchor=(1,1))
plt.tight_layout()
plt.savefig('GRI30_H-detonation_P-comp.png', dpi=500)
"""
Explanation: Reactor State Comparison
How do the reactor temperature and pressure evolve for the two different operating conditions?
End of explanation
"""
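One number often pulled from the adiabatic temperature trace above is the ignition delay time. Below is a minimal sketch of one common definition (the time of steepest temperature rise); it runs on a synthetic trace, and on the real data you would pass adiab_state_df['t_ms'] and adiab_state_df['Temperature(K)'] instead:

```python
import numpy as np

def ignition_delay(t, T):
    # Ignition delay taken as the time of maximum dT/dt; another common
    # convention (time where T exceeds T0 + 400 K) gives similar values
    # when the temperature rise is sharp.
    t = np.asarray(t, dtype=float)
    T = np.asarray(T, dtype=float)
    return t[int(np.argmax(np.gradient(T, t)))]

# synthetic stand-in for a rctr_state_tr.csv temperature trace:
# a smooth 1000 K -> 2500 K rise centred at t = 0.3 ms
t = np.linspace(0.0, 1.0, 2001)  # ms
T = 1000.0 + 1500.0 / (1.0 + np.exp(-(t - 0.3) / 0.005))
print(round(ignition_delay(t, T), 4))
```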
tensorflow/text
docs/tutorials/classify_text_with_bert.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Hub Authors. End of explanation """ # A dependency of the preprocessing for BERT inputs !pip install -q -U "tensorflow-text==2.8.*" """ Explanation: <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/tutorials/classify_text_with_bert"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/classify_text_with_bert.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> <td> <a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a> </td> </table> Classify text with BERT This tutorial contains complete code to fine-tune BERT to perform sentiment analysis on a dataset of plain-text IMDB movie reviews. 
In addition to training a model, you will learn how to preprocess text into an appropriate format. In this notebook, you will: Load the IMDB dataset Load a BERT model from TensorFlow Hub Build your own model by combining BERT with a classifier Train your own model, fine-tuning BERT as part of that Save your model and use it to classify sentences If you're new to working with the IMDB dataset, please see Basic text classification for more details. About BERT BERT and other Transformer encoder architectures have been wildly successful on a variety of tasks in NLP (natural language processing). They compute vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after, hence the name: Bidirectional Encoder Representations from Transformers. BERT models are usually pre-trained on a large corpus of text, then fine-tuned for specific tasks. Setup End of explanation """ !pip install -q tf-models-official==2.7.0 import os import shutil import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text from official.nlp import optimization # to create AdamW optimizer import matplotlib.pyplot as plt tf.get_logger().setLevel('ERROR') """ Explanation: You will use the AdamW optimizer from tensorflow/models. 
End of explanation """ url = 'https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz' dataset = tf.keras.utils.get_file('aclImdb_v1.tar.gz', url, untar=True, cache_dir='.', cache_subdir='') dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb') train_dir = os.path.join(dataset_dir, 'train') # remove unused folders to make it easier to load the data remove_dir = os.path.join(train_dir, 'unsup') shutil.rmtree(remove_dir) """ Explanation: Sentiment analysis This notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. You'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. Download the IMDB dataset Let's download and extract the dataset, then explore the directory structure. End of explanation """ AUTOTUNE = tf.data.AUTOTUNE batch_size = 32 seed = 42 raw_train_ds = tf.keras.utils.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='training', seed=seed) class_names = raw_train_ds.class_names train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE) val_ds = tf.keras.utils.text_dataset_from_directory( 'aclImdb/train', batch_size=batch_size, validation_split=0.2, subset='validation', seed=seed) val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE) test_ds = tf.keras.utils.text_dataset_from_directory( 'aclImdb/test', batch_size=batch_size) test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE) """ Explanation: Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. The IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below. 
Note: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap. End of explanation """ for text_batch, label_batch in train_ds.take(1): for i in range(3): print(f'Review: {text_batch.numpy()[i]}') label = label_batch.numpy()[i] print(f'Label : {label} ({class_names[label]})') """ Explanation: Let's take a look at a few reviews. End of explanation """ #@title Choose a BERT model to fine-tune bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_cased_L-12_H-768_A-12", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base"] map_name_to_handle = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3', 
'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1', 'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1', 
'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1', 'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_base/2', 'electra_small': 'https://tfhub.dev/google/electra_small/2', 'electra_base': 'https://tfhub.dev/google/electra_base/2', 'experts_pubmed': 'https://tfhub.dev/google/experts/bert/pubmed/2', 'experts_wiki_books': 'https://tfhub.dev/google/experts/bert/wiki_books/2', 'talking-heads_base': 'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1', } map_model_to_preprocess = { 'bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_en_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-512_A-8': 
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-2_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-4_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-6_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-8_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-10_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-128_A-2': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 
'small_bert/bert_en_uncased_L-12_H-256_A-4': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-512_A-8': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'small_bert/bert_en_uncased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'bert_multi_cased_L-12_H-768_A-12': 'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3', 'albert_en_base': 'https://tfhub.dev/tensorflow/albert_en_preprocess/3', 'electra_small': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'electra_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_pubmed': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'experts_wiki_books': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', 'talking-heads_base': 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3', } tfhub_handle_encoder = map_name_to_handle[bert_model_name] tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name] print(f'BERT model selected : {tfhub_handle_encoder}') print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}') """ Explanation: Loading models from TensorFlow Hub Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune. There are multiple BERT models available. BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors. Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality. ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers. BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task. 
Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN). BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture. The model documentation on TensorFlow Hub has more details and references to the research literature. Follow the links above, or click on the tfhub.dev URL printed after the next cell execution. The suggestion is to start with a Small BERT (with fewer parameters) since they are faster to fine-tune. If you like a small model but with higher accuracy, ALBERT might be your next option. If you want even better accuracy, choose one of the classic BERT sizes or their recent refinements like Electra, Talking Heads, or a BERT Expert. Aside from the models available below, there are multiple versions of the models that are larger and can yield even better accuracy, but they are too big to be fine-tuned on a single GPU. You will be able to do that on the Solve GLUE tasks using BERT on a TPU colab. You'll see in the code below that switching the tfhub.dev URL is enough to try any of these models, because all the differences between them are encapsulated in the SavedModels from TF Hub. End of explanation """ bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess) """ Explanation: The preprocessing model Text inputs need to be transformed to numeric token ids and arranged in several Tensors before being input to BERT. TensorFlow Hub provides a matching preprocessing model for each of the BERT models discussed above, which implements this transformation using TF ops from the TF.text library. It is not necessary to run pure Python code outside your TensorFlow model to preprocess text. The preprocessing model must be the one referenced by the documentation of the BERT model, which you can read at the URL printed above. 
For BERT models from the drop-down above, the preprocessing model is selected automatically.
Note: You will load the preprocessing model into a hub.KerasLayer to compose your fine-tuned model. This is the preferred API to load a TF2-style SavedModel from TF Hub into a Keras model.
End of explanation
"""

text_test = ['this is such an amazing movie!']
text_preprocessed = bert_preprocess_model(text_test)

print(f'Keys : {list(text_preprocessed.keys())}')
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}')
"""
Explanation: Let's try the preprocessing model on some text and see the output:
End of explanation
"""

bert_model = hub.KerasLayer(tfhub_handle_encoder)

bert_results = bert_model(text_preprocessed)

print(f'Loaded BERT: {tfhub_handle_encoder}')
print(f'Pooled Outputs Shape:{bert_results["pooled_output"].shape}')
print(f'Pooled Outputs Values:{bert_results["pooled_output"][0, :12]}')
print(f'Sequence Outputs Shape:{bert_results["sequence_output"].shape}')
print(f'Sequence Outputs Values:{bert_results["sequence_output"][0, :12]}')
"""
Explanation: As you can see, now you have the 3 outputs from the preprocessing that a BERT model would use (input_word_ids, input_mask and input_type_ids).
Some other important points:
- The input is truncated to 128 tokens. The number of tokens can be customized, and you can see more details on the Solve GLUE tasks using BERT on a TPU colab.
- The input_type_ids only have one value (0) because this is a single sentence input. For a multiple sentence input, it would have one number for each input.
Since this text preprocessor is a TensorFlow model, it can be included in your model directly.
Using the BERT model
Before putting BERT into your own model, let's take a look at its outputs.
You will load it from TF Hub and see the returned values. End of explanation """ def build_classifier_model(): text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text') preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing') encoder_inputs = preprocessing_layer(text_input) encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder') outputs = encoder(encoder_inputs) net = outputs['pooled_output'] net = tf.keras.layers.Dropout(0.1)(net) net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net) return tf.keras.Model(text_input, net) """ Explanation: The BERT models return a map with 3 important keys: pooled_output, sequence_output, encoder_outputs: pooled_output represents each input sequence as a whole. The shape is [batch_size, H]. You can think of this as an embedding for the entire movie review. sequence_output represents each input token in the context. The shape is [batch_size, seq_length, H]. You can think of this as a contextual embedding for every token in the movie review. encoder_outputs are the intermediate activations of the L Transformer blocks. outputs["encoder_outputs"][i] is a Tensor of shape [batch_size, seq_length, 1024] with the outputs of the i-th Transformer block, for 0 &lt;= i &lt; L. The last value of the list is equal to sequence_output. For the fine-tuning you are going to use the pooled_output array. Define your model You will create a very simple fine-tuned model, with the preprocessing model, the selected BERT model, one Dense and a Dropout layer. Note: for more information about the base model's input and output you can follow the model's URL for documentation. Here specifically, you don't need to worry about it because the preprocessing model will take care of that for you. 
End of explanation """ classifier_model = build_classifier_model() bert_raw_result = classifier_model(tf.constant(text_test)) print(tf.sigmoid(bert_raw_result)) """ Explanation: Let's check that the model runs with the output of the preprocessing model. End of explanation """ tf.keras.utils.plot_model(classifier_model) """ Explanation: The output is meaningless, of course, because the model has not been trained yet. Let's take a look at the model's structure. End of explanation """ loss = tf.keras.losses.BinaryCrossentropy(from_logits=True) metrics = tf.metrics.BinaryAccuracy() """ Explanation: Model training You now have all the pieces to train a model, including the preprocessing module, BERT encoder, data, and classifier. Loss function Since this is a binary classification problem and the model outputs a probability (a single-unit layer), you'll use losses.BinaryCrossentropy loss function. End of explanation """ epochs = 5 steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy() num_train_steps = steps_per_epoch * epochs num_warmup_steps = int(0.1*num_train_steps) init_lr = 3e-5 optimizer = optimization.create_optimizer(init_lr=init_lr, num_train_steps=num_train_steps, num_warmup_steps=num_warmup_steps, optimizer_type='adamw') """ Explanation: Optimizer For fine-tuning, let's use the same optimizer that BERT was originally trained with: the "Adaptive Moments" (Adam). This optimizer minimizes the prediction loss and does regularization by weight decay (not using moments), which is also known as AdamW. For the learning rate (init_lr), you will use the same schedule as BERT pre-training: linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5). 
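The shape of that schedule, a linear warm-up to init_lr over the first 10% of steps followed by a decay back towards zero, can be sketched in plain Python. This is an illustration of the shape only, not the tensorflow/models implementation that create_optimizer builds:

```python
def sketch_lr(step, init_lr, num_train_steps, num_warmup_steps):
    # Linear warm-up from 0 to init_lr, then linear decay back to 0 by the
    # final training step.
    if step < num_warmup_steps:
        return init_lr * step / num_warmup_steps
    remaining = max(num_train_steps - step, 0)
    return init_lr * remaining / (num_train_steps - num_warmup_steps)

init_lr, steps, warmup = 3e-5, 1000, 100
print(sketch_lr(50, init_lr, steps, warmup))    # halfway up the warm-up ramp
print(sketch_lr(100, init_lr, steps, warmup))   # peak learning rate
print(sketch_lr(1000, init_lr, steps, warmup))  # decayed to 0.0
```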
End of explanation """ classifier_model.compile(optimizer=optimizer, loss=loss, metrics=metrics) """ Explanation: Loading the BERT model and training Using the classifier_model you created earlier, you can compile the model with the loss, metric and optimizer. End of explanation """ print(f'Training model with {tfhub_handle_encoder}') history = classifier_model.fit(x=train_ds, validation_data=val_ds, epochs=epochs) """ Explanation: Note: training time will vary depending on the complexity of the BERT model you have selected. End of explanation """ loss, accuracy = classifier_model.evaluate(test_ds) print(f'Loss: {loss}') print(f'Accuracy: {accuracy}') """ Explanation: Evaluate the model Let's see how the model performs. Two values will be returned. Loss (a number which represents the error, lower values are better), and accuracy. End of explanation """ history_dict = history.history print(history_dict.keys()) acc = history_dict['binary_accuracy'] val_acc = history_dict['val_binary_accuracy'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) fig = plt.figure(figsize=(10, 6)) fig.tight_layout() plt.subplot(2, 1, 1) # r is for "solid red line" plt.plot(epochs, loss, 'r', label='Training loss') # b is for "solid blue line" plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') # plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.subplot(2, 1, 2) plt.plot(epochs, acc, 'r', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend(loc='lower right') """ Explanation: Plot the accuracy and loss over time Based on the History object returned by model.fit(). 
You can plot the training and validation loss for comparison, as well as the training and validation accuracy: End of explanation """ dataset_name = 'imdb' saved_model_path = './{}_bert'.format(dataset_name.replace('/', '_')) classifier_model.save(saved_model_path, include_optimizer=False) """ Explanation: In this plot, the red lines represent the training loss and accuracy, and the blue lines are the validation loss and accuracy. Export for inference Now you just save your fine-tuned model for later use. End of explanation """ reloaded_model = tf.saved_model.load(saved_model_path) """ Explanation: Let's reload the model, so you can try it side by side with the model that is still in memory. End of explanation """ def print_my_examples(inputs, results): result_for_printing = \ [f'input: {inputs[i]:<30} : score: {results[i][0]:.6f}' for i in range(len(inputs))] print(*result_for_printing, sep='\n') print() examples = [ 'this is such an amazing movie!', # this is the same sentence tried earlier 'The movie was great!', 'The movie was meh.', 'The movie was okish.', 'The movie was terrible...' ] reloaded_results = tf.sigmoid(reloaded_model(tf.constant(examples))) original_results = tf.sigmoid(classifier_model(tf.constant(examples))) print('Results from the saved model:') print_my_examples(examples, reloaded_results) print('Results from the model in memory:') print_my_examples(examples, original_results) """ Explanation: Here you can test your model on any sentence you want, just add to the examples variable below. End of explanation """ serving_results = reloaded_model \ .signatures['serving_default'](tf.constant(examples)) serving_results = tf.sigmoid(serving_results['classifier']) print_my_examples(examples, serving_results) """ Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. In Python, you can test them as follows: End of explanation """
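Since the model emits logits and the examples above pass them through tf.sigmoid, each score is a probability of the positive class. A tiny helper to turn scores into labels (the 0.5 cut-off is a choice, not something the tutorial prescribes):

```python
def score_to_label(score, threshold=0.5):
    # score is the sigmoid output in [0, 1]; higher means more positive
    return 'positive' if score >= threshold else 'negative'

print(score_to_label(0.98))  # -> positive
print(score_to_label(0.07))  # -> negative
```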
stevetjoa/stanford-mir
evaluation_beat.ipynb
mit
y, sr = librosa.load('audio/prelude_cmaj.wav')
ipd.Audio(y, rate=sr)
"""
Explanation: &larr; Back to Index
Evaluation
Example: Beat Tracking
Documentation: mir_eval.beat
Evaluation method: compute the error between the estimated beat times and some reference list of beat locations. Many metrics additionally compare the beat sequences at different metric levels in order to deal with the ambiguity of tempo.
Let's evaluate a beat detector on the following audio:
End of explanation
"""

est_tempo, est_beats = librosa.beat.beat_track(y=y, sr=sr, bpm=120)
est_beats = librosa.frames_to_time(est_beats, sr=sr)
est_beats
"""
Explanation: Detect Beats
Estimate the beats using beat_track:
End of explanation
"""

ref_beats = numpy.array([0, 0.50, 1.02, 1.53, 1.99, 2.48, 2.97, 3.43, 3.90, 4.41, 4.89, 5.38, 5.85, 6.33, 6.82, 7.29, 7.70])
"""
Explanation: Load a fictional reference annotation.
End of explanation
"""

D = librosa.stft(y)
S = abs(D)
S_db = librosa.amplitude_to_db(S)
librosa.display.specshow(S_db, sr=sr, x_axis='time', y_axis='log')
plt.ylim(0, 8192)
plt.vlines(est_beats, 0, 8192, color='#00ff00')
plt.scatter(ref_beats, 5000*numpy.ones_like(ref_beats), color='k', s=100)
"""
Explanation: Plot the estimated and reference beats together.
End of explanation
"""

mir_eval.beat.evaluate(ref_beats, est_beats)
"""
Explanation: Evaluate
Evaluate using mir_eval.beat.evaluate:
End of explanation
"""

# mir_eval.chord.evaluate needs reference and estimated (intervals, labels)
# pairs; the values below are made up purely to illustrate the call
ref_chord_intervals = numpy.array([[0.0, 1.0], [1.0, 2.0]])
ref_chord_labels = ['C:maj', 'G:maj']
est_chord_intervals = numpy.array([[0.0, 1.1], [1.1, 2.0]])
est_chord_labels = ['C:maj', 'G:maj']
mir_eval.chord.evaluate(ref_chord_intervals, ref_chord_labels,
                        est_chord_intervals, est_chord_labels)
"""
Explanation: Example: Chord Estimation
End of explanation
"""

import librosa.display
import mir_eval.display
"""
Explanation: Hidden benefits
Input validation! Many errors can be traced back to ill-formatted data.
Standardized behavior, full test coverage.
More than metrics
mir_eval has tools for display and sonification.
End of explanation
"""

librosa.display.specshow(S, x_axis='time', y_axis='mel')
mir_eval.display.events(ref_beats, color='w', alpha=0.8, linewidth=3)
mir_eval.display.events(est_beats, color='c', alpha=0.8, linewidth=3, linestyle='--')
"""
Explanation: Common plots:
events, labeled_intervals
pitch, multipitch, piano_roll
segments, hierarchy, separation
Example: Events
End of explanation
"""

y_harm, y_perc = librosa.effects.hpss(y, margin=8)
plt.figure(figsize=(12, 4))
mir_eval.display.separation([y_perc, y_harm], sr, labels=['percussive', 'harmonic'])
plt.legend()

# stack the separated sources as a two-channel signal to listen to them side by side
ipd.Audio(data=numpy.vstack([y_perc, y_harm]), rate=sr)

# sonify a chord sequence; the labels and intervals below are made up
# purely to illustrate the call
chord_labels = ['C:maj', 'G:maj']
chord_intervals = numpy.array([[0.0, 1.0], [1.0, 2.0]])
ipd.Audio(data=mir_eval.sonify.chords(chord_labels, chord_intervals, sr), rate=sr)
"""
Explanation: Example: Labeled Intervals
Example: Source Separation
End of explanation
"""
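For beats, the sonification counterpart of the displays above is a click track: mir_eval.sonify.clicks renders one, and mixing it with y lets you audit the tracker by ear. Below is a self-contained numpy equivalent to show the idea; the beat times, 1 kHz tone and 10 ms decay are all arbitrary illustrative choices:

```python
import numpy as np

def click_track(beat_times, fs, duration):
    # one short decaying 1 kHz sine burst per beat, mixed into silence;
    # mir_eval.sonify.clicks(times, fs, length=...) does the same job
    out = np.zeros(int(round(duration * fs)))
    n = int(0.05 * fs)  # 50 ms click
    tt = np.arange(n) / fs
    click = np.sin(2 * np.pi * 1000.0 * tt) * np.exp(-tt / 0.01)
    for bt in beat_times:
        start = int(round(bt * fs))
        stop = min(start + n, len(out))
        if start < len(out):
            out[start:stop] += click[:stop - start]
    return out

clicks = click_track([0.0, 0.5, 1.0, 1.5], fs=8000, duration=2.0)
print(clicks.shape)  # -> (16000,)
```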
CGATOxford/CGATPipelines
CGATPipelines/pipeline_docs/pipeline_peakcalling/notebooks/template_peakcalling_filtering_Report_reads_per_chr.ipynb
mit
import sqlite3
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import CGATPipelines.Pipeline as P
import os
import statistics
import collections
#load R and the R packages required
%load_ext rpy2.ipython
%R require(ggplot2)

# use these functions to display tables nicely as html
from IPython.display import display, HTML

plt.style.use('bmh')

#look at other available styles for plotting
#plt.style.available
"""
Explanation: Peakcalling Bam Stats and Filtering Report - Reads per Chromosome
This notebook is for the analysis of outputs from the peakcalling pipeline.
There are several stats that you want collected and graphed (topics covered in this notebook in bold). These are:

how many reads input
how many reads removed at each step (numbers and percentages)
how many reads left after filtering
how many reads mapping to each chromosome before filtering?
how many reads mapping to each chromosome after filtering?
X:Y reads ratio
insert size distribution after filtering for PE reads
samtools flags - check how many reads are in categories they shouldn't be
picard stats - check how many reads are in categories they shouldn't be

This notebook takes the sqlite3 database created by CGAT peakcalling_pipeline.py and uses it for plotting the above statistics.
It assumes a file directory of:

location of database = project_folder/csvdb
location of this notebook = project_folder/notebooks.dir/

Reads per Chromosome
This section gets the reads-per-chromosome counts - this is helpful to see whether all reads are mapping to a particular contig. This is especially useful for checking ATAC-Seq quality, as mitochondrial reads are over-represented in ATAC-Seq samples.
Firstly let's load all the things that might be needed.
End of explanation
"""

!pwd
!date
"""
Explanation: This is where we are and when the notebook was run
End of explanation
"""
""" Explanation: First lets set the output path for where we want our plots to be saved and the database path and see what tables it contains End of explanation """ HTML('''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> <form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''') """ Explanation: This code adds a button to see/hide code in html End of explanation """ def getTableNamesFromDB(database_path): # Create a SQL connection to our SQLite database con = sqlite3.connect(database_path) cur = con.cursor() # the result of a "cursor.execute" can be iterated over by row cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name;") available_tables = (cur.fetchall()) #Be sure to close the connection. con.close() return available_tables db_tables = getTableNamesFromDB(database_path) print('Tables contained by the database:') for x in db_tables: print('\t\t%s' % x[0]) #This function retrieves a table from sql database and indexes it with track name def getTableFromDB(statement,database_path): '''gets table from sql database depending on statement and set track as index if contains track in column names''' conn = sqlite3.connect(database_path) df = pd.read_sql_query(statement,conn) if 'track' in df.columns: df.index = df['track'] return df """ Explanation: The code below provides functions for accessing the project database and extract a table names so you can see what tables have been loaded into the database and are available for plotting. 
It also has a function for getting a table from the database and indexing the table with the track name End of explanation """ ###These are functions used to manipulate the table so order of chromosomes is consistent with numbers def StrIsInt(string): '''function that takes string and tests if it can be represented as an int e.g. returns true for "3", but False for "Chr3" ''' try: int(string) return True except ValueError: return False def orderListOfChr(unordered_chr_list): '''take a list of chromosomes and return them in order of chromosome number not string order e.g. input = ["chr1","chr11","chr2","chrM"] output = ["chr1","chr2","chr11","chrM"]''' #make an empty list same length as chromosomes chr_id = [None]* len(unordered_chr_list) for value in unordered_chr_list: x = value.split("chr")[-1] # check if chr name is int or str if StrIsInt(x): chr_id[int(x)-1] = value else: chr_id.append(value) #remove none values from list ordered_chr_list = [x for x in chr_id if x is not None] return ordered_chr_list def reorderDFbyChrOrder(df): '''Takes a dataframe indexed on chr name and returns dataframe so that index is sorted based on the chromosome number e.g. dataframe with index chr1,chr11,chr12,chr2,chrM will be returned with rows in the order "chr1, chr2, chr11, chr12, chrM" ''' list_of_reordered_chr = orderListOfChr(df.index) return df.reindex(list_of_reordered_chr) # this subsets dataframe so only includes columns containing chr def getChrNames(df): '''takes dataframe with chromosome names in columns and returns a list of the chromosomes present''' to_keep = [] for item in df.columns: if 'chr' in item: to_keep.append(item) return to_keep """ Explanation: Here are some functions we need End of explanation """ idxstats_df = getTableFromDB('select * from idxstats_reads_per_chromosome;',database_path) idxstats_df.index = idxstats_df['region'] reads_per_chr_df = reorderDFbyChrOrder(idxstats_df.drop('region', 1)) print ('this table shows million reads per chromosome')
reads_per_chr_df.divide(1000000) """ Explanation: Reads per Chromosome 1) let's get the IDXstats table from the database and look at the total number of mapped reads per chromosome for each sample End of explanation """ def makeReadsPerChrPlot(df,path): '''takes table from database of chromosome lengths and makes individual plot for each sample of how many reads map to each chromosome''' to_keep = [] for item in df.columns: if 'chr' in item: to_keep.append(item) df = df[to_keep] df = df.divide(1000000) #where plot will be sent to file_path = "/".join([path,'mapped_reads_per_chromosome_plot.pdf']) print ('figure_saved_to %s' % file_path) ax = df.T.plot(figsize=(11,5), xticks = range(len(to_keep)), title = 'Million reads mapped to each chromosome', ylim=(0,10)) #set labels for plots ax.set_xlabel("Contig") ax.set_ylabel("million reads") fig = matplotlib.figure.Figure() ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) matplotlib.pyplot.savefig(file_path, bbox_inches='tight') matplotlib.pyplot.show() makeReadsPerChrPlot(reads_per_chr_df.T,output_path) def makePercentReadsPerChrPlot(df,path): '''takes the idxstats_reads_per_chromosome table from database and calculates percentage of reads mapping to each chromosome and plots this for each chromosome and returns percentage table''' c = df.copy() for item in c.columns: if 'chr' not in item and item != 'total_reads': c = c.drop(item,1) y = c.div(c.total_reads, axis ='index')*100 y = y.drop('total_reads',1) file_path = "/".join([path,'percentage_mapped_reads_per_chromosome_plot.pdf']) print ('figure_saved_to %s' % file_path) ax = y.T.plot(figsize=(10,5), xticks = range(len(y.columns)), title = 'Percentage of total input reads that map to each contig', ylim=(0,100)) ax.set_xlabel("Contig") ax.set_ylabel("percentage_reads") ax.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
fig = matplotlib.figure.Figure() matplotlib.pyplot.savefig(file_path, bbox_inches='tight') matplotlib.pyplot.show() return y percent_idxdf = makePercentReadsPerChrPlot(reads_per_chr_df.T,output_path) percent_idxdf.T len(reads_per_chr_df.columns) def makeReadsPerSampleChrPlot(df,path,subplot_dims): '''takes table from database of chromosome lengths and makes individual plot for each sample of how many reads map to each chromosome subplot dims = tuples of the format (num_rows,num_cols)''' to_keep = [] for item in df.columns: if 'chr' in item: to_keep.append(item) df = df[to_keep] df = df.divide(1000000) #where plot will be sent to file_path = "/".join([path,'mapped_reads_per_chromosome_per_sample_plot.pdf']) print ('figure_saved_to %s' % file_path) #plot as subplots- # can change layout to be better layout=(num_rows,num_cols) # returns a list of axis of the subplots - select the right axis to add labels ax = df.T.plot(subplots=True, figsize=(10,10), layout = subplot_dims, xticks = range(len(to_keep)), title = 'Million reads mapped to each chromosome per sample', ylim=(0,10)) #set labels for plots bottom_plot = ax[-1][0] middle_plot = ((int(subplot_dims[0]/2), int(subplot_dims[1]/2))) a = ax[middle_plot] a.set_ylabel("million reads") fig = matplotlib.figure.Figure() matplotlib.pyplot.savefig(file_path, bbox_inches='tight') matplotlib.pyplot.show() makeReadsPerSampleChrPlot(reads_per_chr_df.T,output_path,(len(reads_per_chr_df.T.columns),1)) def makePercentReadsPerSampleChrPlot(df,path,subplot_dims): '''takes the idxstats_reads_per_chromosome table from database and calculates percentage of reads mapping to each chromosome and plots this for each chromosome and returns percentage table''' c = df.copy() for item in c.columns: if 'chr' not in item and item != 'total_reads': c = c.drop(item,1) y = c.div(c['total_reads'], axis ='index')*100 y = y.drop('total_reads',1) file_path = "/".join([path,'percentage_mapped_reads_per_chromosome_per_sample_plot.pdf']) print 
('figure_saved_to %s' % file_path) ax = y.T.plot(subplots=True, layout = subplot_dims, figsize=(10,10), xticks = range(len(y.columns)), title = 'Percentage of total input reads that map to each contig', ylim=(0,100)) ax[-1][0].set_xlabel("Contig") middle_plot = ((int(subplot_dims[0]/2), int(subplot_dims[1]/2))) ax[middle_plot].set_ylabel("percentage_reads") fig = matplotlib.figure.Figure() matplotlib.pyplot.savefig(file_path, bbox_inches='tight') matplotlib.pyplot.show() makePercentReadsPerSampleChrPlot(reads_per_chr_df.T,output_path,(len(reads_per_chr_df.columns),1)) """ Explanation: Contigs that have been filtered should clearly show up with 0 reads across the row End of explanation """ x_vs_y_df = idxstats_df.drop('region', 1).T[['chrX','chrY']].copy() print (x_vs_y_df.head()) x_vs_y_df['total_xy'] = x_vs_y_df.chrX + x_vs_y_df.chrY x_vs_y_df['percentX'] = x_vs_y_df.chrX/x_vs_y_df.total_xy * 100 x_vs_y_df['percentY'] = x_vs_y_df.chrY/x_vs_y_df.total_xy * 100 display(x_vs_y_df) #plot bar graph of number of thousand reads mapping to chrX vs chrY ax = x_vs_y_df[['chrX','chrY']].divide(1000).plot.bar() ax.set_ylabel('Thousand Reads (not pairs)') ax.legend(['chrX','chrY'], loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. ) ax.set_title('number of reads (not pairs) \n mapping to chrX or chrY') # plot graph of percentage of reads mapping to either chr X or Y ax = x_vs_y_df[['percentX', 'percentY']].plot.bar(stacked=True) ax.legend(['chrX','chrY'], loc=2,bbox_to_anchor=(1.05, 1),borderaxespad=0. 
) ax.set_ylabel('percentage reads') ax.set_title('percentage of sex chromosome reads mapping \n to chrX or chrY') ax.set_ylim((0,110)) """ Explanation: THIS IS WHERE YOU CAN WRITE YOUR OWN SUMMARY: From this notebook you will see how many reads map to each contig - hopefully it will show no reads mapping to any that you filtered out in the peakcalling pipeline - it should also show you if some chromosomes have unexpectedly high mapping rates compared to others - remember chromosomes are often named in order of size, so in theory chr1 is more likely to have the most reads mapping to it purely because it is the biggest Comparison of Reads mapping to X vs Y Let's look at the number of reads mapping to chrX compared to chrY - this is helpful to determine and double-check the sex of the samples End of explanation """ def add_expt_to_df(dataframe): ''' splits track name for example HsTh1-RATotal-R1.star into expt features, expt, sample_treatment and replicate and adds these as columns to the dataframe''' expt = [] treatment = [] replicate = [] for value in dataframe.track: #remove star label #print value x = value.split(".") # split into design features y = x[0].split('-') expt.append(y[0]) treatment.append(y[1]) replicate.append(y[2]) if len(expt) == len(treatment) and len(expt)== len(replicate): print ('all values loaded into lists correctly') else: print ('error in loading values into lists') #add columns to dataframe dataframe['expt_name'] = expt dataframe['sample_treatment'] = treatment dataframe['replicate'] = replicate return dataframe """ Explanation: WRITE YOUR COMMENTS HERE From the plots above you should be able to see which samples are male and which are female depending on the percentage of reads mapping to the Y chromosome End of explanation """
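The sex check described above can be reduced to a tiny helper. The sketch below is illustrative only; the 5% chrY-fraction threshold is an assumption for demonstration, not a calibrated value from this pipeline:

```python
def infer_sex(chrx_reads, chry_reads, y_fraction_threshold=0.05):
    """Classify a sample from its sex-chromosome read counts.

    The threshold is an illustrative assumption: female samples are expected
    to show only a small fraction of (mis)mapped chrY reads.
    """
    total = chrx_reads + chry_reads
    if total == 0:
        return 'unknown'
    # fraction of sex-chromosome reads that land on chrY
    return 'male' if chry_reads / total > y_fraction_threshold else 'female'

print(infer_sex(1000, 300))   # male-like counts
print(infer_sex(1000, 10))    # female-like counts
```

In practice one would apply such a rule to the `percentY` column computed above rather than to raw counts.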
boffi/boffi.github.io
dati_2020/04/EP_Exact+Numerical.ipynb
mit
def resp_elas(m,c,k, cC,cS,w, F, x0,v0): wn2 = k/m ; wn = sqrt(wn2) ; beta = w/wn z = c/(2*m*wn) wd = wn*sqrt(1-z*z) # xi(t) = R sin(w t) + S cos(w t) + D det = (1.-beta**2)**2+(2*beta*z)**2 R = ((1-beta**2)*cS + (2*beta*z)*cC)/det/k S = ((1-beta**2)*cC - (2*beta*z)*cS)/det/k D = F/k A = x0-S-D B = (v0+z*wn*A-w*R)/wd def x(t): return exp(-z*wn*t)*(A*cos(wd*t)+B*sin(wd*t))+R*sin(w*t)+S*cos(w*t)+D def v(t): return (-z*wn*exp(-z*wn*t)*(A*cos(wd*t)+B*sin(wd*t)) +wd*exp(-z*wn*t)*(B*cos(wd*t)-A*sin(wd*t)) +w*(R*cos(w*t)-S*sin(w*t))) return x,v """ Explanation: Exact Integration for an EP SDOF System We want to compute the response, using the constant acceleration algorithm plus MNR, of an Elasto Plastic (EP) system... but how can we confirm or reject our results? It turns out that computing the exact response of an EP system with a single degree of freedom is relatively simple. Here we discuss a program that computes the analytical solution of our problem. The main building blocks of the program will be two functions that compute, for the elastic phase and for the plastic phase, the analytical functions that give the displacement and the velocity as functions of time. Elastic response We are defining a function that, for a linear dynamic system, returns not the displacement or the velocity at a given time, but rather a couple of functions of time that we can use afterwards to compute displacements and velocities at any time of interest. The response depends on the parameters of the dynamic system $m,c,k,$ on the initial conditions $x_0, v_0,$ and on the characteristics of the external load. Here the external load is limited to a linear combination of a cosine modulated, a sine modulated (both with the same frequency $\omega$) and a constant force, <center>$P(t) = c_C \cos\omega t + c_S \sin\omega t + F,$</center> but that's all that is needed for the present problem.
The particular integral being <center>$\xi(t) = S \cos\omega t + R \sin\omega t + D,$</center> substituting in the equation of motion and equating all the corresponding terms gives the undetermined coefficients in $\xi(t)$, then evaluation of the general integral and its time derivative for $t=0$ permits finding the constants in the homogeneous part of the integral. The final step is to define the displacement and the velocity functions, according to the constants we have determined, and to return these two functions to the caller End of explanation """ def resp_yield(m,c, cC,cS,w, F, x0,v0): # csi(t) = R sin(w t) + S cos(w t) + Q t Q = F/c det = w**2*(c**2+w**2*m**2) R = (+w*c*cC-w*w*m*cS)/det S = (-w*c*cS-w*w*m*cC)/det # x(t) = A exp(-c t/m) + B + R sin(w t) + S cos(w t) + Q t # v(t) = - c A/m exp(-c t/m) + w R cos(w t) - w S sin(w t) + Q # # v(0) = -c A / m + w R + Q = v0 A = m*(w*R + Q - v0)/c # x(0) = A + B + S = x0 B = x0 - A - S def x(t): return A*exp(-c*t/m)+B+R*sin(w*t)+S*cos(w*t)+Q*t def v(t): return -c*A*exp(-c*t/m)/m+w*R*cos(w*t)-w*S*sin(w*t)+Q return x,v """ Explanation: Plastic response In this case the equation of motion is <center>$m\ddot x + c \dot x = P(t),$</center> the homogeneous response is <center>$x(t)=A\exp(-\frac{c}{m}t)+B,$</center> and the particular integral, for a load described as in the previous case, is (slightly different...)
<center>$\xi(t) = S \cos\omega t + R \sin\omega t + Dt.$</center> Having computed $R, S,$ and $D$ from substituting $\xi$ in the equation of motion, $A$ and $B$ by imposing the initial conditions, we can define the displacement and velocity functions and, finally, return these two functions to the caller. End of explanation """ def bisect(f,val,x0,x1): h = (x0+x1)/2.0 fh = f(h)-val if abs(fh)<1e-8 : return h f0 = f(x0)-val if f0*fh > 0 : return bisect(f, val, h, x1) else: return bisect(f, val, x0, h) """ Explanation: A utility function We need to find when the spring yields and when the velocity is zero, to identify the three ranges of different behaviour: elastic; plastic; elastic, with permanent deformation. We can use the simple and robust algorithm of bisection to find the roots for <center>$x_{el}(t)=x_y \text{ and } \dot{x}_{ep}(t)=0$.</center> End of explanation """ mass = 1000. # kg k = 40000. # N/m zeta = 0.03 # damping ratio fy = 2500. # N print('Limit displacement Uy =', fy*1000/k, 'mm') """ Explanation: The system parameters End of explanation """ damp = 2*zeta*sqrt(k*mass) xy = fy/k # m """ Explanation: Derived quantities The damping coefficient $c$ and the first yielding displacement, $x_y$. End of explanation """ t1 = 0.3 # s w = pi/t1 # rad/s Po = 6000. 
# N """ Explanation: Load definition Our load is a half-sine impulse <center>$p(t)=\begin{cases}p_0\sin(\frac{\pi t}{t_1})&0\leq t\leq t_1,\\ 0&\text{otherwise.}\end{cases}$</center> In our exercise End of explanation """ x0=0.0 # m v0=0.0 # m/s x_next, v_next = resp_elas(mass,damp,k, 0.0,Po,w, 0.0, x0,v0) """ Explanation: The actual computations Elastic, initial conditions, get system functions End of explanation """ t_yield = bisect(x_next, xy, 0.0, t1) print(t_yield, x_next(t_yield)*k) """ Explanation: Yielding time is The time of yielding is found solving the equation $x_\text{next}(t) = x_y$ End of explanation """ t_el = linspace( 0.0, t_yield, 201) x_el = vectorize(x_next)(t_el) v_el = vectorize(v_next)(t_el) # ------------------------------ figure(0) plot(t_el,x_el, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') title("$x_{el}(t)$") xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(1) plot(t_el,v_el) title("$\dot x_{el}(t)$") xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: Forced response in elastic range is End of explanation """ x0=x_next(t_yield) v0=v_next(t_yield) print(x0, v0) """ Explanation: Preparing for EP response First, the system state at $t_y$ is the initial condition for the EP response End of explanation """ cS = Po*cos(w*t_yield) cC = Po*sin(w*t_yield) print(Po*sin(w*0.55), cS*sin(w*(0.55-t_yield))+cC*cos(w*(0.55-t_yield))) """ Explanation: now, the load must be expressed in function of a restarted time, <center> $\tau=t-t_y\;\rightarrow\;t=\tau+t_y\;\rightarrow\;\sin(\omega t)=\sin(\omega\tau+\omega t_y)$ </center> <center> $\rightarrow\;\sin(\omega t)=\sin(\omega\tau)\cos(\omega t_y)+\cos(\omega\tau)\sin(\omega t_y)$ </center> End of explanation """ x_next, v_next = resp_yield(mass, damp, cC,cS,w, -fy, x0,v0) """ Explanation: Now we generate the displacement and velocity functions for the yielded phase, please note that the yielded spring still exerts a constant force $f_y$ on the 
mass, and that this fact must be (and it is) taken into account. End of explanation """ t_y1 = linspace(t_yield, t1, 101) x_y1 = vectorize(x_next)(t_y1-t_yield) v_y1 = vectorize(v_next)(t_y1-t_yield) figure(3) plot(t_el,x_el, t_y1,x_y1, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(4) plot(t_el, v_el, t_y1, v_y1) xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: At this point I must confess that I have already peeked the numerical solution, hence I know that the velocity at $t=t_1$ is still greater than 0 and I know that the current solution is valid in the interval $t_y\leq t\leq t_1$. End of explanation """ x0 = x_next(t1-t_yield) v0 = v_next(t1-t_yield) print(x0, v0) x_next, v_next = resp_yield(mass, damp, 0, 0, w, -fy, x0, v0) t2 = t1 + bisect( v_next, 0.0, 0, 0.3) print(t2) t_y2 = linspace( t1, t2, 101) x_y2 = vectorize(x_next)(t_y2-t1) v_y2 = vectorize(v_next)(t_y2-t1) print(x_next(t2-t1)) figure(5) plot(t_el,x_el, t_y1,x_y1, t_y2, x_y2, (0,0.25),(xy,xy),'--b', (t_yield,t_yield),(0,0.0699),'--b') xlabel("Time, s") ylabel("Displacement, m") # ------------------------------ figure(6) plot(t_el, v_el, t_y1, v_y1, t_y2, v_y2) xlabel("Time, s") ylabel("Velocity, m/s") """ Explanation: In the next phase, still it is $\dot x> 0$ so that the spring is still yielding, but now $p(t)=0$, so we must compute two new state functions, starting as usual from the initial conditions (note that the yielding force is still applied) End of explanation """ x0 = x_next(t2-t1) ; v0 = 0.0 x_next, v_next = resp_elas(mass,damp,k, 0.0,0.0,w, k*x0-fy, x0,v0) t_e2 = linspace(t2,4.0,201) x_e2 = vectorize(x_next)(t_e2-t2) v_e2 = vectorize(v_next)(t_e2-t2) """ Explanation: Elastic unloading The only point worth commenting is the constant force that we apply to our system. 
The force-displacement relationship for an EP spring is <center>$f_\text{E} = k(x-x_\text{pl})= k x - k (x_\text{max}-x_y)$</center> taking the negative, constant part of the last expression into the right member of the equation of equilibrium we have a constant term, as follows End of explanation """ # ------------------------------ figure(7) ; plot(t_el, x_el, '-b', t_y1, x_y1, '-r', t_y2, x_y2, '-r', t_e2, x_e2, '-b', (0.6, 4.0), (x0-xy, x0-xy), '--y') title("In blue: elastic phases.\n"+ "In red: yielding phases.\n"+ "Dashed: permanent plastic deformation.") xlabel("Time, s") ylabel("Displacement, m") """ Explanation: now we are ready to plot the whole response End of explanation """ def make_p(p0,t1): """make_p(p0,t1) returns a 1/2 sine impulse load function, p(t)""" def p(t): "" if t<t1: return p0*sin(t*pi/t1) else: return 0.0 return p """ Explanation: Numerical solution first, we need the load function End of explanation """ def make_kt(k,fy): "make_kt(k,fy) returns a function kt(u,v,up) returning kt, up" def kt(u,v,up): f=k*(u-up) if (-fy)<f<fy: return k,up if fy<=f and v>0: up=u-uy;return 0,up if fy<=f and v<=0: up=u-uy;return k,up if f<=(-fy) and v<0: up=u+uy;return 0,up else: up=u+uy;return k,up return kt """ Explanation: and also a function that, given the displacement, the velocity and the total plastic deformation, returns the stiffness and the new p.d.; this function is defined in terms of the initial stiffness and the yielding load End of explanation """ # Exercise from lesson 04 # mass = 1000.00 # kilograms k = 40000.00 # Newtons per metre zeta = 0.03 # zeta is the damping ratio fy = 2500.00 # yelding force, Newtons t1 = 0.30 # half-sine impulse duration, seconds p0 = 6000.00 # half-sine impulse peak value, Newtons uy = fy/k # yelding displacement, metres """ Explanation: Problem data End of explanation """ # using the above constants, define the loading function p=make_p(p0,t1) # the following function, given the final displacement, the final # 
velocity and the initial plastic deformation returns a) the tangent # stiffness b) the final plastic deformation kt=make_kt(k,fy) # we need the damping coefficient "c", to compute its value from the # damping ratio we must first compute the undamped natural frequency wn=sqrt(k/mass) # natural frequency of the undamped system damp=2*mass*wn*zeta # the damping coefficient # the time step h=0.005 # required duration for the response t_end = 4.0 # the number of time steps to arrive at t_end nsteps=int((t_end+h/100)/h)+1 # the maximum number of iterations in the Newton-Raphson procedure maxiters = 30 # using the constant acceleration algorithm # below we define the relevant algorithmic constants gamma=0.5 beta=1./4. gb=gamma/beta a=mass/(beta*h)+damp*gb b=0.5*mass/beta+h*damp*(0.5*gb-1.0) """ Explanation: Initialize the algorithm compute the functions that return the load and the tangent stiffness + plastic deformation compute the damping constant for a given time step, compute all the relevant algorithmic constants, with $\gamma=\frac12$ and $\beta=\frac14$ End of explanation """ t0=0.0 u0=0.0 up=0.0 v0=0.0 p0=p(t0) (k0, up)=kt(u0,v0,up) a0=(p0-damp*v0-k0*(u0-up))/mass time = []; disp = [] """ Explanation: System state initialization and a bit more, in particular we create two empty vectors to hold the computation results End of explanation """ for i in range(nsteps): time.append(t0); disp.append(u0) # advance time, next external load value, etc t1 = t0 + h p1 = p(t1) Dp = p1 - p0 Dp_= Dp + a*v0 + b*a0 k_ = k0 + gb*damp/h + mass/(beta*h*h) # We prepare the machinery for the modified Newton-Raphson algorithm. 
# If we have no state change in the time step N-R algorithm is equivalent to the standard procedure u_init=u0; v_init=v0 # initial state f_spring=k*(u0-up) # the force in the spring DR=Dp_ # the unbalanced force, initially equal to the external load increment for j in range(maxiters): Du=DR/k_ # the disp increment according to the initial stiffness u_next = u_init + Du v_next = v_init + gb*Du/h - gb*v_init + h*(1.0-0.5*gb)*a0 # we are interested in the total plastic elongation oops,up=kt(u_next,v_next,up) # because we need the spring force at the end of the time step f_spring_next=k*(u_next-up) # so that we can compute the fraction of the incremental force # that's equilibrated at the end of the time step df=f_spring_next-f_spring+(k_-k0)*Du # and finally the incremental forces still unbalanced the end of the time step DR=DR-df # finish updating the system state u_init=u_next; v_init=v_next; f_spring=f_spring_next # if the unbalanced load is small enough (the criteria used in practical # programs are energy based) exit the loop - if we # have no plasticization/unloading DR==0 at the end of the first iteration if abs(DR)<fy*1E-6: break # now the load increment is balanced by the spring force and increments in inertial and damping forces, # we need to compute the full state at the end of the time step, and to change all denominations # to reflect the fact that we are starting a new time step. 
Du=u_init-u0 Dv=gamma*Du/(beta*h)-gamma*v0/beta+h*(1.0-0.5*gamma/beta)*a0 u1=u0+Du ; v1=v0+Dv k1,up=kt(u1,v1,up) a1=(p(t1)-damp*v1-k*(u1-up))/mass t0=t1; v0=v1; u0=u1 ; a0=a1 ; k0=k1 ; p0=p1 """ Explanation: Iteration We iterate over time and, if there is a state change, over the single time step to equilibrate the unbalanced loadings End of explanation """ figure(8) plot(time[::4],disp[::4],'xr') plot(t_el, x_el, '-b', t_y1, x_y1, '-r', t_y2, x_y2, '-r', t_e2, x_e2, '-b', (0.6, 4.0), (x0-xy, x0-xy), '--y') title("Continuous line: exact response.\n"+ "Red crosses: constant acceleration + MNR.\n") xlabel("Time, s") ylabel("Displacement, m"); """ Explanation: Plotting our results we plot red crosses for the numerically computed response and a continuous line for the results of the analytical integration of the equation of motion. End of explanation """
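The incremental update used inside the loop above is the standard constant average acceleration step (Newmark with $\gamma=1/2$, $\beta=1/4$). As a hedged, self-contained sketch (these are the textbook formulas for a linear system, not code taken from the notebook), the single step can be isolated and checked against an exact free-vibration solution:

```python
from math import pi

def avg_accel_step(m, c, k, p0, p1, u0, v0, a0, h):
    # one incremental step of the constant average acceleration method
    # (Newmark, gamma = 1/2, beta = 1/4), valid for a linear system
    gamma, beta = 0.5, 0.25
    k_hat = k + gamma*c/(beta*h) + m/(beta*h*h)
    dp_hat = (p1 - p0) + (m/(beta*h) + gamma*c/beta)*v0 \
             + (0.5*m/beta + h*c*(0.5*gamma/beta - 1.0))*a0
    du = dp_hat/k_hat
    dv = gamma*du/(beta*h) - gamma*v0/beta + h*(1.0 - 0.5*gamma/beta)*a0
    u1, v1 = u0 + du, v0 + dv
    a1 = (p1 - c*v1 - k*u1)/m   # enforce equilibrium at the end of the step
    return u1, v1, a1

# undamped free vibration with natural period T = 1 s: after one full
# period the displacement should return very nearly to its initial value
m, k = 1.0, (2.0*pi)**2
u, v = 1.0, 0.0
a = -k*u/m
h = 0.001
for _ in range(1000):
    u, v, a = avg_accel_step(m, 0.0, k, 0.0, 0.0, u, v, a, h)
```

With $\gamma=1/2$ the method introduces no algorithmic damping, only a small period elongation, which is why the check above is so tight.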
tpin3694/tpin3694.github.io
python/geocoding_and_reverse_geocoding.ipynb
mit
# Load packages from pygeocoder import Geocoder import pandas as pd import numpy as np """ Explanation: Title: Geocoding And Reverse Geocoding Slug: geocoding_and_reverse_geocoding Summary: Geocoding And Reverse Geocoding Date: 2016-05-01 12:00 Category: Python Tags: Data Wrangling Authors: Chris Albon Geocoding (converting a physical address or location into latitude/longitude) and reverse geocoding (converting a lat/long to a physical address or location) are common tasks when working with geo-data. Python offers a number of packages to make the task incredibly easy. In the tutorial below, I use pygeocoder, a wrapper for Google's geo-API, to both geocode and reverse geocode. Preliminaries First we want to load the packages we will want to use in the script. Specifically, I am loading pygeocoder for its geo-functionality, pandas for its dataframe structures, and numpy for its missing value (np.nan) functionality. End of explanation """ # Create a dictionary of raw data data = {'Site 1': '31.336968, -109.560959', 'Site 2': '31.347745, -108.229963', 'Site 3': '32.277621, -107.734724', 'Site 4': '31.655494, -106.420484', 'Site 5': '30.295053, -104.014528'} """ Explanation: Create some simulated geo data Geo-data comes in a wide variety of forms, in this case we have a Python dictionary of five latitude and longitude strings, with each coordinate in a coordinate pair separated by a comma. End of explanation """ # Convert the dictionary into a pandas dataframe df = pd.DataFrame.from_dict(data, orient='index') # View the dataframe df """ Explanation: While technically unnecessary, because I originally come from R, I am a big fan of dataframes, so let us turn the dictionary of simulated data into a dataframe. 
End of explanation """ # Create two lists for the loop results to be placed lat = [] lon = [] # For each row in a variable, for row in df[0]: # Try to, try: # Split the row by comma, convert to float, and append # everything before the comma to lat lat.append(float(row.split(',')[0])) # Split the row by comma, convert to float, and append # everything after the comma to lon lon.append(float(row.split(',')[1])) # But if you get an error except: # append a missing value to lat lat.append(np.NaN) # append a missing value to lon lon.append(np.NaN) # Create two new columns from lat and lon df['latitude'] = lat df['longitude'] = lon """ Explanation: You can see now that we have a dataframe with five rows, with each row now containing a string of latitude and longitude. Before we can work with the data, we'll need to 1) separate the strings into latitude and longitude and 2) convert them into floats. The function below does just that. End of explanation """ # View the dataframe df """ Explanation: Let's take a look at what we have now. End of explanation """ # Convert longitude and latitude to a location results = Geocoder.reverse_geocode(df['latitude'][0], df['longitude'][0]) """ Explanation: Awesome. This is exactly what we want to see, one column of floats for latitude and one column of floats for longitude. Reverse Geocoding To reverse geocode, we feed a specific latitude and longitude pair, in this case the first row (indexed as '0') into pygeocoder's reverse_geocode function. End of explanation """ # Print the lat/long results.coordinates # Print the city results.city # Print the country results.country # Print the street address (if applicable) results.street_address # Print the admin1 level results.administrative_area_level_1 """ Explanation: Now we can start pulling out the data that we want. End of explanation """ # Verify that an address is valid (i.e.
in Google's system) Geocoder.geocode("4207 N Washington Ave, Douglas, AZ 85607").valid_address """ Explanation: Geocoding For geocoding, we need to submit a string containing an address or location (such as a city) to the geocode function. However, not all strings are formatted in a way that Google's geo-API can make sense of. We can test whether an input is valid by using the .geocode().valid_address function. End of explanation """ # Print the lat/long results.coordinates """ Explanation: Because the output was True, we now know that this is a valid address and thus can print the latitude and longitude coordinates. End of explanation """ # Find the lat/long of a certain address result = Geocoder.geocode("7250 South Tucson Boulevard, Tucson, AZ 85756") # Print the street number result.street_number # Print the street name result.route """ Explanation: But even more interesting, once the address is processed by the Google geo API, we can parse it and easily separate street numbers, street names, etc. End of explanation """
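The try/except row loop above can also be written as a small, pure-Python parsing helper. This is a minimal alternative sketch, not part of the original notebook, and it keeps the same behaviour of falling back to a missing value on malformed input:

```python
def parse_coords(raw):
    """Split a 'lat, lon' string into a (float, float) tuple; None on bad input."""
    try:
        lat_str, lon_str = raw.split(',')
        return float(lat_str), float(lon_str)
    except (ValueError, AttributeError):
        # wrong number of fields, non-numeric text, or a non-string value
        return None

data = {'Site 1': '31.336968, -109.560959',
        'Site 2': '31.347745, -108.229963',
        'Site 3': 'not a coordinate'}
coords = {site: parse_coords(raw) for site, raw in data.items()}
```

The `None` results can then be mapped to `np.NaN` when building the dataframe columns, exactly as the loop does.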
mvdbosch/AtosCodexDemo
Jupyter Notebooks/Explore the CBS Crime and Demographics Dataset.ipynb
gpl-3.0
%%bash cat /proc/cpuinfo | grep 'processor\|model name' %%bash free -g """ Explanation: Atos Codex - Data Scientist Workbench Explore the CBS Crime and Demographics Dataset First check some of the environment specs and see what we have here End of explanation """ from __future__ import print_function import pandas as pd import geopandas as gpd import matplotlib as mpl import matplotlib.pyplot as plt from ipywidgets.widgets import interact, Text from IPython.display import display import numpy as np """ Explanation: Import Python packages End of explanation """ # use the notebook definition for interactive embedded graphics # %matplotlib notebook # use the inline definition for static embedded graphics %matplotlib inline rcParam = { 'figure.figsize': (12,6), 'font.weight': 'bold', 'axes.labelsize': 20.0, 'axes.titlesize': 20.0, 'axes.titleweight': 'bold', 'legend.fontsize': 14, 'xtick.labelsize': 14, 'ytick.labelsize': 14, } for key in rcParam: mpl.rcParams[key] = rcParam[key] """ Explanation: Set Jupyter Notebook graphical parameters End of explanation """ cbs_data = pd.read_csv('combined_data.csv',sep=',',na_values=['NA','.'],error_bad_lines=False); """ Explanation: Read the combined CBS dataset This is the file that we downloaded & merged using Talend Open Studio for Big Data. (Note: please check the file path) End of explanation """ cbs_data.head() cbs_data_2015 = cbs_data.loc[cbs_data['YEAR'] == 2015]; #list(cbs_data_2015) """ Explanation: Let's inspect the contents of this file by looking at the first 5 rows. As you can see, this file has a lot of columns. For a description of the fieldnames, please see the description file End of explanation """ cbs_data_2015.describe() #cbs_data_2015.YEAR.describe() cbs_data_2015 = cbs_data_2015.dropna(); cbs_data_2015.describe() """ Explanation: We will subset the entire 2010-2015 dataset into just the year 2015. 
In the table below you will see summary statistics End of explanation """ cbs_data_2015.iloc[:,35:216].describe() """ Explanation: Description of some of the demographic features of this dataset End of explanation """ labels = cbs_data_2015["Vermogensmisdrijven_rel"].values columns = list(cbs_data_2015.iloc[:,37:215]) features = cbs_data_2015[list(columns)]; features = features.apply(lambda columns : pd.to_numeric(columns, errors='ignore')) """ Explanation: We want to make a label and a set of features out of our data Labelling: The relative amount of money and property crimes ( Vermogensmisdrijven_rel) Features : All neighbourhood demographic columns in the dataset End of explanation """ print(labels[1:10]) features.head() """ Explanation: Inspect our labels and features End of explanation """ from sklearn.linear_model import RandomizedLasso """ Explanation: Feature selection using Randomized Lasso Import Randomized Lasso from the Python Scikit-learn package End of explanation """ rlasso = RandomizedLasso(alpha='aic',verbose =True,normalize =True,n_resampling=3000,max_iter=100) rlasso.fit(features, labels) """ Explanation: Run Randomized Lasso, with 3000 resampling and 100 iterations. End of explanation """ dfResults = pd.DataFrame.from_dict(sorted(zip(map(lambda x: round(x, 4), rlasso.scores_), list(features)), reverse=True)) dfResults.columns = ['Score', 'FeatureName'] dfResults.head(10) """ Explanation: Features sorted by their score In the table below the top10 best features (i.e. 
columns) are shown with their score End of explanation """ dfResults.plot('FeatureName', 'Score', kind='bar', color='navy') ax1 = plt.axes() x_axis = ax1.axes.get_xaxis() x_axis.set_visible(False) plt.show() """ Explanation: Because a lot of high-scoring features appear at the top of the lasso results table, we want to check how the scores are distributed across all features End of explanation """ plt.scatter(y=pd.to_numeric(cbs_data_2015['Vermogensmisdrijven_rel']),x=pd.to_numeric(cbs_data_2015['A_BED_GI'])); plt.ylabel('Vermogensmisdrijven_rel') plt.xlabel('A_BED_GI ( Bedrijfsvestigingen: Handel en horeca )') plt.show() dfResults.tail(10) """ Explanation: Scatterplot Let's inspect one of the top variables and make a scatterplot for it End of explanation """ plt.scatter(y=pd.to_numeric(cbs_data_2015['Vermogensmisdrijven_rel']),x=pd.to_numeric(cbs_data_2015['P_LAAGINKH'])); plt.ylabel('Vermogensmisdrijven_rel') plt.xlabel('Perc. Laaginkomen Huish.') plt.show() """ Explanation: Let's also inspect one of the worst variables (Perc% of Low income households) and plot it too End of explanation """ plt.scatter(y=pd.to_numeric(cbs_data_2015['Gewelds_en_seksuele_misdrijven_rel']),x=pd.to_numeric(cbs_data_2015['P_GESCHEID'])); plt.ylabel('Gewelds_en_seksuele_misdrijven_rel') plt.xlabel('Perc_Gescheiden') plt.show() """ Explanation: Try out another hypothesis (e.g. Perc% of divorced vs. Rel% Domestic and Sexual violence crimes) End of explanation """
gururajl/deep-learning
image-classification/dlnd_image_classification.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path) """ Explanation: Image Classification In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data Run the following cell to download the CIFAR-10 dataset for python. 
End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id) """ Explanation: Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions. End of explanation """ def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function return x / 255 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize) """ Explanation: Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x. End of explanation """ encoded_labels = None def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. 
: x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function global encoded_labels if encoded_labels is None: from sklearn import preprocessing lb = preprocessing.LabelBinarizer() lb.fit(range(0,10)) encoded_labels = lb return encoded_labels.transform(x) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode) """ Explanation: One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode) """ Explanation: Randomize Data As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb')) """ Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk. End of explanation """ import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a bach of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function return tf.placeholder(tf.float32, [None] + list(image_shape), name="x") def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function return tf.placeholder(tf.float32, [None, n_classes], name="y") def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32, name="keep_prob") """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input) """ Explanation: Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. 
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size. 
End of explanation """ def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function # calculate filter size and initialize channels_axis = 3 input_channels = x_tensor.shape[3].value kernel_shape = list(conv_ksize) + [input_channels] + [conv_num_outputs] W = tf.Variable(tf.random_normal(kernel_shape, stddev=0.1)) # init bias b = tf.Variable(tf.random_normal([conv_num_outputs], stddev=0.1)) # calculate convolution strides conv_strides_input = [1] + list(conv_strides) + [1] # conv layer, bias, relu x = tf.nn.conv2d(x_tensor, W, strides=conv_strides_input, padding='SAME') x = tf.nn.bias_add(x, b) x = tf.nn.relu(x) # calc pool kernel ans strides size pool_kernel = [1] + list(pool_ksize) + [1] pool_strides = [1] + list(pool_strides) + [1] # max pool x = tf.nn.max_pool(x, ksize=pool_kernel, strides=pool_strides, padding='SAME') return x """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool) """ Explanation: Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. 
* Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers. End of explanation """ def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function return tf.reshape(x_tensor, [-1, np.prod(x_tensor.shape.as_list()[1:])]) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten) """ Explanation: Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. 
""" # TODO: Implement Function num_inputs = x_tensor.shape[1].value W = tf.Variable(tf.random_normal([num_inputs, num_outputs], stddev=0.1)) b = tf.Variable(tf.random_normal([num_outputs], stddev=0.1)) fc = tf.add(tf.matmul(x_tensor, W), b) return tf.nn.relu(fc) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn) """ Explanation: Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. End of explanation """ def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function num_inputs = x_tensor.shape[1].value W = tf.Variable(tf.random_normal([num_inputs, num_outputs], stddev=0.1)) b = tf.Variable(tf.random_normal([num_outputs], stddev=0.1)) return tf.add(tf.matmul(x_tensor, W), b) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output) """ Explanation: Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this. End of explanation """ def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. 
: return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv1 = conv2d_maxpool(x, 32, (3,3), (1,1), (2,2), (2,2)) conv2 = conv2d_maxpool(conv1, 64, (3,3), (1,1), (2,2), (2,2)) conv3 = conv2d_maxpool(conv2, 128, (3,3), (1,1), (2,2), (2,2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) flat = flatten(conv3) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) fc1 = fully_conn(flat, 1024) fc1 = tf.nn.dropout(fc1, keep_prob) fc2 = fully_conn(fc1, 512) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) return output(fc2, 10) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net) """ Explanation: Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. 
The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob. End of explanation """ def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ # TODO: Implement Function session.run(optimizer, feed_dict={ x: feature_batch, y: label_batch, keep_prob: keep_probability}) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network) """ Explanation: Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be called for each batch, so tf.global_variables_initializer() has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network. 
End of explanation """ def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ # TODO: Implement Function loss = sess.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.}) valid_acc = sess.run(accuracy, feed_dict={ x: valid_features, y: valid_labels, keep_prob: 1.}) print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format( loss, valid_acc)) """ Explanation: Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy. End of explanation """ # TODO: Tune Parameters epochs = 35 batch_size = 256 keep_probability = 0.5 """ Explanation: Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or start overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... 
* Set keep_probability to the probability of keeping a node using dropout End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) """ Explanation: Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path) """ Explanation: Fully Train the Model Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches. 
End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model() """ Explanation: Checkpoint The model has been saved to disk. 
Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. End of explanation """
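The two preprocessing functions implemented earlier in this notebook -- pixel scaling and one-hot encoding -- can also be sketched in plain numpy. This is a minimal sketch: the `np.eye` trick is an alternative to the `LabelBinarizer` used above, not the notebook's original code, and the shapes follow CIFAR-10's 32x32x3 images and 10 classes.

```python
import numpy as np

def normalize(x):
    # Scale uint8 pixel values (0-255) into the range [0, 1].
    return x / 255

def one_hot_encode(labels, n_classes=10):
    # Row i of the identity matrix is the one-hot vector for class i.
    return np.eye(n_classes)[labels]

images = np.random.randint(0, 256, size=(5, 32, 32, 3), dtype=np.uint8)
norm = normalize(images)
print(norm.min() >= 0 and norm.max() <= 1)   # True

encoded = one_hot_encode(np.array([0, 3, 9]))
print(encoded.shape)                          # (3, 10)
```

Because the same encoding must be produced on every call, a stateless mapping like this also satisfies the consistency requirement that motivated caching the `LabelBinarizer` above.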
robertoalotufo/ia898
master/Rampa_solucoes.ipynb
mit
def rr_indices( lado): import numpy as np r,c = np.indices( (lado, lado), dtype='uint16' ) return c print(rr_indices(11)) """ Explanation: Analysis of the solutions to the Ramp exercise This page presents the main solutions submitted for the Ramp exercise. The goal is to understand the differences between them and to compare their advantages and disadvantages. Conceptual solution with indices: In this approach, the matrix of column coordinates "c" is already the desired ramp. The only caveat is that, since for one of the test values the ramp exceeds 255, the pixel type must be an integer with at least 16 bits. End of explanation """ def rr_broadcast( lado): import numpy as np row = np.arange(lado, dtype='uint16') g = np.empty ( (lado,lado), dtype='uint16') g[:,:] = row return g print(rr_broadcast(11)) """ Explanation: Most efficient solutions: There are at least three solutions that rank as the most efficient. All of them pay the cost of writing a single row and then copying that row into every row of the output image. Copy via broadcast The most interesting and most efficient is the copy by broadcast (the subject of module 3). In this solution the final image is allocated with empty, and then a row created with arange is copied into every row of the image using numpy's broadcasting. End of explanation """ def rr_tile( lado): import numpy as np f = np.arange(lado, dtype='uint16') return np.tile( f, (lado,1)) print(rr_tile(11)) """ Explanation: Copy as tiles In the tiling solution (tile), a tile is created from the ramp row with arange, and this row is then tiled over "lado" rows. End of explanation """ def rr_resize( lado): import numpy as np f = np.arange(lado, dtype='uint16') return np.resize(f, (lado,lado)) print(rr_resize(11)) """ Explanation: Using resize The resize solution exploits the property of numpy's resize function that it fills the image raster by repeating it up to the final size.
End of explanation """ def rr_repeat( lado): import numpy as np f = np.arange(lado, dtype='uint16') return np.repeat( f, lado).reshape(lado, lado).transpose() print(rr_repeat(11)) """ Explanation: Using repeat Numpy's repeat repeats each pixel n times. To use it for this problem, the ramp row (arange) is repeated lado times; after reshaping to two dimensions, a transpose is needed, because the repetition happens horizontally while what we want is vertical repetition, since in the final image the rows are repeated. End of explanation """ def rr_modulo( lado): import numpy as np f = np.arange(lado * lado, dtype='int32') return (f % lado).reshape(lado,lado) print(rr_modulo(11)) """ Explanation: Using the modulo operation One solution that no one submitted uses the "modulo" operator, i.e. the remainder of the division. In this solution, a single vector of the final image size is created, the "modulo lado" operator is applied, and the result is then simply reshaped to two dimensions. The drawback of this solution is that the vector must be 32 bits, because the total number of pixels is usually larger than 65535, the maximum that can be represented in 'uint16'. End of explanation """ p = [rr_broadcast, rr_resize, rr_repeat, rr_tile, rr_indices, rr_modulo] lado = 101 f = rr_indices(lado) for func in p: if not (func(lado) == f).all(): print('func %s failed' % func.__name__) """ Explanation: Testing End of explanation """ import numpy as np p = [rr_broadcast, rr_tile, rr_resize, rr_repeat, rr_modulo, rr_indices] lado = 20057 for func in p: print(func.__name__) %timeit f = func(lado) print() """ Explanation: Comparing execution times The time of each function is measured by running it a thousand times and taking the 2nd percentile. End of explanation """
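The `%timeit` magic used above is IPython-specific; outside a notebook, a rough stand-in is a best-of-N wall-clock timing with `time.perf_counter`. This is a minimal sketch: `best_of` is an assumed helper, and two of the solution functions are repeated here so the snippet runs standalone.

```python
import time
import numpy as np

def rr_broadcast(lado):
    # Write one ramp row, then broadcast-copy it into every row.
    row = np.arange(lado, dtype='uint16')
    g = np.empty((lado, lado), dtype='uint16')
    g[:, :] = row
    return g

def rr_indices(lado):
    # The column-coordinate grid is already the ramp.
    r, c = np.indices((lado, lado), dtype='uint16')
    return c

def best_of(func, lado, runs=5):
    # Best-of-N timing; a script-friendly stand-in for %timeit.
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        func(lado)
        times.append(time.perf_counter() - t0)
    return min(times)

lado = 2001
assert (rr_broadcast(lado) == rr_indices(lado)).all()
print('broadcast: %.4fs  indices: %.4fs'
      % (best_of(rr_broadcast, lado), best_of(rr_indices, lado)))
```

Taking the minimum over several runs discards scheduler noise, which is the same idea behind `%timeit` reporting its best loops.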
souljourner/fab
EDA/temp.ipynb
mit
import matplotlib.pyplot as plt # Import matplotlib # This line is necessary for the plot to appear in a Jupyter notebook %matplotlib inline # Control the default size of figures in this Jupyter notebook %pylab inline pylab.rcParams['figure.figsize'] = (15, 9) # Change the size of plots import glob from collections import Counter, defaultdict import pandas as pd from pandas_datareader import data from matplotlib.dates import DateFormatter, WeekdayLocator, DayLocator, MONDAY from matplotlib.finance import candlestick_ohlc import matplotlib.pyplot as plt from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.linear_model import LogisticRegression as LR from FOMC import FOMC from yahoo_finance import Currency, Share from spacy.en import English import pickle import datetime as dt from __future__ import print_function from pprint import pprint fomc = FOMC() df = fomc.get_statements() fomc.pick_df() # fomc.pick_df('../data/minutes_df.pickle') with open(r'../data/minutes_df.pickle', 'rb') as f: minutes_df = pickle.load(f) print(minutes_df.index[70]) print(minutes_df.ix['2017-03-15'][0]) nlp = English() doc = nlp(unicode(minutes_df.ix['2017-03-15'][0])) doc.__class__ doc.sents for sent in doc.sents: print('new: ', sent) VXX = Share('VXX') # Volatility float(VXX.get_price()) - float(VXX.get_prev_close()) VXX_historical = VXX.get_historical('2009-01-01', '2010-12-31') VXX_historical[0] str(dt.date.today()) # We will look at stock prices over the past year, starting at January 1, 2016 start = dt.datetime(2014,1,1) end = dt.date.today() # Let's get Apple stock data; Apple's ticker symbol is AAPL # First argument is the series we want, second is the source ("yahoo" for Yahoo! 
Finance), third is the start date, fourth is the end date apple = data.DataReader("AAPL", "yahoo", start, end) type(apple) apple def pandas_candlestick_ohlc(dat, stick = "day", otherseries = None): """ :param dat: pandas DataFrame object with datetime64 index, and float columns "Open", "High", "Low", and "Close", likely created via DataReader from "yahoo" :param stick: A string or number indicating the period of time covered by a single candlestick. Valid string inputs include "day", "week", "month", and "year", ("day" default), and any numeric input indicates the number of trading days included in a period :param otherseries: An iterable that will be coerced into a list, containing the columns of dat that hold other series to be plotted as lines This will show a Japanese candlestick plot for stock data stored in dat, also plotting other series if passed. """ mondays = WeekdayLocator(MONDAY) # major ticks on the mondays alldays = DayLocator() # minor ticks on the days dayFormatter = DateFormatter('%d') # e.g., 12 # Create a new DataFrame which includes OHLC data for each period specified by stick input transdat = dat.loc[:,["Open", "High", "Low", "Close"]] if (type(stick) == str): if stick == "day": plotdat = transdat stick = 1 # Used for plotting elif stick in ["week", "month", "year"]: if stick == "week": transdat["week"] = pd.to_datetime(transdat.index).map(lambda x: x.isocalendar()[1]) # Identify weeks elif stick == "month": transdat["month"] = pd.to_datetime(transdat.index).map(lambda x: x.month) # Identify months transdat["year"] = pd.to_datetime(transdat.index).map(lambda x: x.isocalendar()[0]) # Identify years grouped = transdat.groupby(list(set(["year",stick]))) # Group by year and other appropriate variable plotdat = pd.DataFrame({"Open": [], "High": [], "Low": [], "Close": []}) # Create empty data frame containing what will be plotted for name, group in grouped: plotdat = plotdat.append(pd.DataFrame({"Open": group.iloc[0,0], "High": max(group.High), 
"Low": min(group.Low), "Close": group.iloc[-1,3]}, index = [group.index[0]])) if stick == "week": stick = 5 elif stick == "month": stick = 30 elif stick == "year": stick = 365 elif (type(stick) == int and stick >= 1): transdat["stick"] = [np.floor(i / stick) for i in range(len(transdat.index))] grouped = transdat.groupby("stick") plotdat = pd.DataFrame({"Open": [], "High": [], "Low": [], "Close": []}) # Create empty data frame containing what will be plotted for name, group in grouped: plotdat = plotdat.append(pd.DataFrame({"Open": group.iloc[0,0], "High": max(group.High), "Low": min(group.Low), "Close": group.iloc[-1,3]}, index = [group.index[0]])) else: raise ValueError('Valid inputs to argument "stick" include the strings "day", "week", "month", "year", or a positive integer') # Set plot parameters, including the axis object ax used for plotting fig, ax = plt.subplots() fig.subplots_adjust(bottom=0.2) if plotdat.index[-1] - plotdat.index[0] < pd.Timedelta('730 days'): weekFormatter = DateFormatter('%b %d') # e.g., Jan 12 ax.xaxis.set_major_locator(mondays) ax.xaxis.set_minor_locator(alldays) else: weekFormatter = DateFormatter('%b %d, %Y') ax.xaxis.set_major_formatter(weekFormatter) ax.grid(True) # Create the candelstick chart candlestick_ohlc(ax, list(zip(list(date2num(plotdat.index.tolist())), plotdat["Open"].tolist(), plotdat["High"].tolist(), plotdat["Low"].tolist(), plotdat["Close"].tolist())), colorup = "black", colordown = "red", width = stick * .4) # Plot other series (such as moving averages) as lines if otherseries != None: if type(otherseries) != list: otherseries = [otherseries] dat.loc[:,otherseries].plot(ax = ax, lw = 1.3, grid = True) ax.xaxis_date() ax.autoscale_view() plt.setp(plt.gca().get_xticklabels(), rotation=45, horizontalalignment='right') plt.show() apple["Adj Close"].plot(grid = True) # Plot the adjusted closing price of AAPL pandas_candlestick_ohlc(apple) doc = nlp(unicode("Apples and oranges are similar. 
Boots and hippos aren't.")) apples = doc[0] oranges = doc[2] boots = doc[6] hippos = doc[8] apples.similarity(oranges) import spacy nlp = spacy.load('en') doc = nlp(u'They told us to duck.') for word in doc: print(word.text, word.lemma, word.lemma_, word.tag, word.tag_, word.pos, word.pos_) def find_person_occurences(processed_text): """ Return a list of actors from `doc` with corresponding occurences. :param doc: Spacy NLP parsed document :return: list of tuples in form [('elizabeth', 622), ('darcy', 312), ('jane', 286), ('bennet', 266)] """ characters = Counter() for ent in processed_text.ents: if ent.label_ == 'PERSON': characters[ent.lemma_] += 1 return characters.most_common() def find_place_occurences(processed_text): characters = Counter() for ent in processed_text.ents: if ent.label_ == 'GPE': characters[ent.lemma_] += 1 return characters.most_common() def find_rate_occurences(processed_text): characters = Counter() for ent in processed_text.ents: if ent.label_ in ['CARDINAL','PERCENT']: characters[ent.lemma_] += 1 return characters.most_common() def find_date_occurences(processed_text): characters = Counter() for ent in processed_text.ents: if ent.label_ == 'DATE': characters[ent.lemma_] += 1 return characters.most_common() def find_org_occurences(processed_text): characters = Counter() for ent in processed_text.ents: if ent.label_ == 'ORG': characters[ent.lemma_] += 1 return characters.most_common() def find_occurences(processed_text, list_): characters = Counter() for ent in processed_text.ents: if ent.label_ in list_: characters[ent.lemma_] += 1 return characters.most_common() find_occurences(doc, ['MONEY','ORG']) for ent in doc.ents: print(ent.lemma_, ent.label_) doc = nlp(unicode(minutes_df.iloc[0,0]), ) find_person_occurences(doc) print(doc.text) list(doc.noun_chunks) # Process sentences 'Hello, world. Natural Language Processing in 10 lines of code.' using spaCy doc = nlp(u'Hello, world. 
Natural Language Processing in 10 lines of code.') # Get first token of the processed document token = doc[0] print(token) # Print sentences (one sentence per line) for sent in doc.sents: print(sent) print() # For each token, print corresponding part of speech tag for token in doc: print('{} - {}'.format(token, token.pos_)) doc1 = nlp(unicode(minutes_df.iloc[0,0])) doc2 = nlp(unicode(minutes_df.iloc[1,0])) doc99 = nlp(unicode(minutes_df.iloc[-1,0])) doc1.similarity(doc99) word = nlp(unicode('marry'))[0] doc = nlp(unicode("her mother was talking to that one person (Lady Lucas) freely, openly, and of nothing else but her expectation that Jane would soon be married to Mr. Bingley.")) VERB_LEMMA = "marry" for ent in doc.ents: if ent.label_ == 'PERSON': print(ent.root.head.lemma_,'.') """ Explanation: EDA for FOMC Meeting Minutes And testing out spaCy Goal Test out and create a pipeline for the creation of a model for parsing FOMC minutes and creating predictions End of explanation """ def plot_trend_data(ax, name, series): ax.plot(series.index, series) ax.set_title("{}".format(name)) def fit_moving_average_trend(series, window=6): return series.rolling(window=window,center=False).mean() def plot_moving_average_trend(ax, name, series, window=6): moving_average_trend = fit_moving_average_trend(series, window) plot_trend_data(ax, name, series) ax.plot(series.index, moving_average_trend, color='green') prices = dict() col_names = ['date', 'open', 'high', 'low', 'close', 'volume', 'count', 'WAP'] for filename in glob.glob('../data/*.csv'): this_file = filename.split('/')[-1].split('.')[0] prices[this_file] = pd.read_csv(filename, parse_dates=['date'], infer_datetime_format=True,names=col_names).drop_duplicates() prices[this_file].set_index('date', inplace=True,) prices[this_file].index = prices[this_file].index.tz_localize('America/Los_Angeles').tz_convert('America/New_York').tz_localize(None) prices[this_file]['close-MA-4'] = 
fit_moving_average_trend(prices[this_file]['close'], window=4) prices.keys() """ Explanation: Price data for engineering the Y values End of explanation """ for key in prices.keys(): print(len(prices[key]), "observations in {}".format(key)) """ Explanation: How many observations are there in each? End of explanation """ for key in prices.keys(): if len(key) > 8: plt.plot(prices[key].index, prices[key]['close']) plt.title(key) plt.show() for key in prices.keys(): if key[:3] in ['USD','EUR']: plt.plot(prices[key].index, prices[key]['close']) plt.title(key) plt.show() fig, axs = plt.subplots(4, figsize=(14, 6)) plot_moving_average_trend(axs[0], 'open', prices['SPY-USD-TRADES']['open'][:100], window=10) plot_moving_average_trend(axs[1], 'high', prices['SPY-USD-TRADES']['high'][:100], window=10) plot_moving_average_trend(axs[2], 'low', prices['SPY-USD-TRADES']['low'][:100], window=10) plot_moving_average_trend(axs[3], 'close', prices['SPY-USD-TRADES']['close'][:100], window=10) plt.tight_layout() prices['SHY-USD-TRADES']['close'].plot() """ Explanation: Lets plot the prices to check for anomalies and outlies (usually misprints) End of explanation """ pre_post_FOMC_time_selector = [] for date in minutes_df.index: pre_post_FOMC_time_selector.extend(pd.date_range(date.replace(hour=13, minute=30), periods=2, freq='2 H')) prices_FOMC = dict() for key in prices.keys(): prices_FOMC[key] = prices[key].loc[pre_post_FOMC_time_selector][['close-MA-4']].dropna() prices_FOMC['SHY-USD-TRADES'].head() this_df = prices_FOMC['SHY-USD-TRADES'] this_df.head() y = this_df.groupby(this_df.index.date).diff().dropna() sum(y > 0) sum(y < 0) """ Explanation: Looks like data is pretty clean now. 
Let's gather the y variables End of explanation """ y_dfs = dict() for key in prices_FOMC: y_dfs[key] = prices_FOMC[key].groupby(prices_FOMC[key].index.date).diff().dropna() y_dfs[key]['fomc-close-MA-4-pct'] = y_dfs[key]['close-MA-4'] / prices[key].loc[y_dfs[key].index]['close'] y_dfs[key].index = y_dfs[key].index.normalize() y_dfs['SPY-USD-TRADES'] """ Explanation: We seem to have a pretty balanced number of up days and down days. Now let's convert everything to target variables End of explanation """ tfidf_vectorizer = TfidfVectorizer(min_df=1, stop_words='english') tfidf_matrix = tfidf_vectorizer.fit_transform(minutes_df['statements'].values.tolist()) tfidf_vectorizer.vocabulary_ tfidf_matrix.todense() tfidf_matrix.shape minutes_list = minutes_df['statements'].values.tolist() minutes_list[0] minutes_df.iloc[0] minutes_df.__class__ type(minutes_df) """ Explanation: Time to TF-IDF!!! Let's build the X matrix first End of explanation """ from nltk.stem.wordnet import WordNetLemmatizer from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.naive_bayes import MultinomialNB from sklearn.neighbors import KNeighborsClassifier from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import GradientBoostingClassifier from sklearn.svm import SVC from sklearn.model_selection import TimeSeriesSplit from sklearn.model_selection import GridSearchCV rbf = SVC(kernel='rbf') # base estimator for the grid search svm_grid = {'C':[0.01,0.1,1,8,9,10,11,12,15,30,15000], 'gamma':[0.1,1,2,3,4,5]} svm_gridsearch = GridSearchCV(rbf, svm_grid, n_jobs=-1, verbose=True, scoring="accuracy", cv=10) fit = svm_gridsearch.fit(X_train, y_train) """ Explanation: Tuning the SVM classifier End of explanation """ string = "{}/{}".format((1),2) string """ Explanation: Digging into the classes End of explanation """
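The TfidfVectorizer step above can be illustrated with a minimal pure-Python sketch. This is a simplification, and `tfidf` is our own name for it: it uses scikit-learn's smoothed idf formula, idf = ln((1 + n) / (1 + df)) + 1, but skips the l2 row normalization that TfidfVectorizer also applies by default.

```python
import math
from collections import Counter

def tfidf(docs):
    """Dense tf-idf matrix for a list of tokenized documents."""
    vocab = sorted({t for doc in docs for t in doc})
    n = len(docs)
    # document frequency: in how many docs each term appears
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in vocab}
    rows = []
    for doc in docs:
        tf = Counter(doc)  # raw term counts for this document
        rows.append([tf[t] * idf[t] for t in vocab])
    return vocab, rows
```

A term that occurs in every statement ("rate", say) gets the minimum idf of 1, so rarer terms dominate the row — the property the classifier relies on.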
cherryc/dynet
examples/python/tutorials/RNNs.ipynb
apache-2.0
# we assume that we have the dynet module in your path. # OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is. from dynet import * """ Explanation: RNNs tutorial End of explanation """ model = Model() NUM_LAYERS=2 INPUT_DIM=50 HIDDEN_DIM=10 builder = LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # or: # builder = SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) """ Explanation: An LSTM/RNN overview: An (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where $h_i$ indicates the time dimenstion. Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ receives as input also $r_{i-1}$. In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is we have several layers of sequences: $h_1^3,...,h_k^3$ $h_1^2,...,h_k^2$ $h_1^1,...h_k^1$, Let $r_i^j$ be the output of cell $h_i^j$. Then: The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$. The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$, and so on. The LSTM (RNN) Interface RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of creating definining the parameters for the sequence. End of explanation """ s0 = builder.initial_state() x1 = vecInput(INPUT_DIM) s1=s0.add_input(x1) y1 = s1.output() # here, we add x1 to the RNN, and the output we get from the top is y (a HIDEN_DIM-dim vector) y1.npvalue().shape s2=s1.add_input(x1) # we can add another input y2=s2.output() """ Explanation: Note that when we create the builder, it adds the internal RNN parameters to the model. We do not need to care about them, but they will be optimized together with the rest of the network's parameters. End of explanation """ print s2.h() """ Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer. 
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the .h() method, which returns a list of expressions, one for each layer: End of explanation """ # create a simple rnn builder rnnbuilder=SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, model) # initialize a new graph, and a new sequence rs0 = rnnbuilder.initial_state() # add inputs rs1 = rs0.add_input(x1) ry1 = rs1.output() print "all layers:", s1.h() print s1.s() """ Explanation: The same interface that we saw for the LSTM also holds for the Simple RNN: End of explanation """ rnn_h = rs1.h() rnn_s = rs1.s() print "RNN h:", rnn_h print "RNN s:", rnn_s lstm_h = s1.h() lstm_s = s1.s() print "LSTM h:", lstm_h print "LSTM s:", lstm_s """ Explanation: To summarize, when calling .add_input(x) on an RNNState, the state creates a new RNN/LSTM column, passing it: 1. the state of the current RNN column 2. the input x The state is then returned, and we can call its output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state. .s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, which keeps an extra "memory" cell that is used when calculating $h$, and which is also passed to the next column. To access the entire hidden state, we use the .s() method. The output of .s() differs by the type of RNN being used. For the simple RNN, it is the same as .h(). For the LSTM, it is more involved. End of explanation """ s2=s1.add_input(x1) s3=s2.add_input(x1) s4=s3.add_input(x1) # let's continue s3 with a new input. s5=s3.add_input(x1) # we now have two different sequences: # s0,s1,s2,s3,s4 # s0,s1,s2,s3,s5 # the two sequences share parameters. 
assert(s5.prev() == s3) assert(s4.prev() == s3) s6=s3.prev().add_input(x1) # we now have an additional sequence: # s0,s1,s2,s6 s6.h() s6.s() """ Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h. Extra options in the RNN/LSTM interface Stack LSTM The RNNs are shaped as a stack: we can remove the top and continue from the previous state. This is done either by remembering the previous state and continuing it with a new .add_input(), or by accessing the previous state of a given state using its .prev() method. Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has random/zero initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the result of a call to .final_s(). TODO: this is not supported yet. End of explanation """ state = rnnbuilder.initial_state() xs = [x1,x1,x1] states = state.add_inputs(xs) outputs = [s.output() for s in states] hs = [s.h() for s in states] print outputs, hs """ Explanation: Aside: memory efficient transduction The RNNState interface is convenient, and allows for incremental input construction. However, sometimes we know the sequence of inputs in advance, and care only about the sequence of output expressions. In this case, we can use the add_inputs(xs) method, where xs is a list of Expression. End of explanation """ state = rnnbuilder.initial_state() xs = [x1,x1,x1] outputs = state.transduce(xs) print outputs """ Explanation: This is convenient. What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases we can use the transduce(xs) method instead of add_inputs(xs). transduce takes in a sequence of Expressions, and returns a sequence of Expressions. 
As a consequence of not returning RNNStates, trnasduce is much more memory efficient than add_inputs or a series of calls to add_input. End of explanation """ import random from collections import defaultdict from itertools import count import sys LAYERS = 2 INPUT_DIM = 50 HIDDEN_DIM = 50 characters = list("abcdefghijklmnopqrstuvwxyz ") characters.append("<EOS>") int2char = list(characters) char2int = {c:i for i,c in enumerate(characters)} VOCAB_SIZE = len(characters) model = Model() srnn = SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) lstm = LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, model) params = {} params["lookup"] = model.add_lookup_parameters((VOCAB_SIZE, INPUT_DIM)) params["R"] = model.add_parameters((VOCAB_SIZE, HIDDEN_DIM)) params["bias"] = model.add_parameters((VOCAB_SIZE)) # return compute loss of RNN for one sentence def do_one_sentence(rnn, sentence): # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(params["R"]) bias = parameter(params["bias"]) lookup = params["lookup"] sentence = ["<EOS>"] + list(sentence) + ["<EOS>"] sentence = [char2int[c] for c in sentence] s = s0 loss = [] for char,next_char in zip(sentence,sentence[1:]): s = s.add_input(lookup[char]) probs = softmax(R*s.output() + bias) loss.append( -log(pick(probs,next_char)) ) loss = esum(loss) return loss # generate from model: def generate(rnn): def sample(probs): rnd = random.random() for i,p in enumerate(probs): rnd -= p if rnd <= 0: break return i # setup the sentence renew_cg() s0 = rnn.initial_state() R = parameter(params["R"]) bias = parameter(params["bias"]) lookup = params["lookup"] s = s0.add_input(lookup[char2int["<EOS>"]]) out=[] while True: probs = softmax(R*s.output() + bias) probs = probs.vec_value() next_char = sample(probs) out.append(int2char[next_char]) if out[-1] == "<EOS>": break s = s.add_input(lookup[next_char]) return "".join(out[:-1]) # strip the <EOS> # train, and generate every 5 samples def train(rnn, sentence): trainer = 
SimpleSGDTrainer(model) for i in xrange(200): loss = do_one_sentence(rnn, sentence) loss_value = loss.value() loss.backward() trainer.update() if i % 5 == 0: print loss_value, print generate(rnn) """ Explanation: Character-level LSTM Now that we know the basics of RNNs, let's build a character-level LSTM language-model. We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character. End of explanation """ sentence = "a quick brown fox jumped over the lazy dog" train(srnn, sentence) sentence = "a quick brown fox jumped over the lazy dog" train(lstm, sentence) """ Explanation: Notice that: 1. We pass the same rnn-builder to do_one_sentence over and over again. We must re-use the same rnn-builder, as this is where the shared parameters are kept. 2. We renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence. The parameters will be shared through the model and the shared rnn-builder. End of explanation """ train(srnn, "these pretzels are making me thirsty") """ Explanation: The model seems to learn the sentence quite well. Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM! How can that be? The answer is that we are cheating a bit. The sentence we are trying to learn has each letter-bigram exactly once. This means a simple trigram model can memorize it very well. Try it out with more complex sequences. End of explanation """
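The `sample` helper inside `generate` above draws an index in proportion to its probability by walking down the cumulative distribution. It can be isolated into a small framework-free sketch; `sample_index` is our name, and the injectable `rng` parameter is an addition so the draw can be made deterministic:

```python
import random

def sample_index(probs, rng=random.random):
    """Sample an index i with probability probs[i] (inverse-CDF walk)."""
    rnd = rng()
    for i, p in enumerate(probs):
        rnd -= p
        if rnd <= 0:
            return i
    # floating-point rounding can leave a tiny positive remainder;
    # fall back to the last index instead of returning None
    return len(probs) - 1
```

The original version relies on the loop variable leaking out of the `for` loop after `break`; returning explicitly, with a fallback for rounding leftovers, is a bit safer.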
google/flax
examples/imagenet/imagenet.ipynb
apache-2.0
# Install ml-collections & latest Flax version from Github. !pip install -q clu ml-collections git+https://github.com/google/flax example_directory = 'examples/imagenet' editor_relpaths = ('configs/default.py', 'input_pipeline.py', 'models.py', 'train.py') repo, branch = 'https://github.com/google/flax', 'main' # (If you run this code in Jupyter[lab], then you're already in the # example directory and nothing needs to be done.) #@markdown **Fetch newest Flax, copy example code** #@markdown #@markdown **If you select no** below, then the files will be stored on the #@markdown *ephemeral* Colab VM. **After some time of inactivity, this VM will #@markdown be restarted an any changes are lost**. #@markdown #@markdown **If you select yes** below, then you will be asked for your #@markdown credentials to mount your personal Google Drive. In this case, all #@markdown changes you make will be *persisted*, and even if you re-run the #@markdown Colab later on, the files will still be the same (you can of course #@markdown remove directories inside your Drive's `flax/` root if you want to #@markdown manually revert these files). if 'google.colab' in str(get_ipython()): import os os.chdir('/content') # Download Flax repo from Github. if not os.path.isdir('flaxrepo'): !git clone --depth=1 -b $branch $repo flaxrepo # Copy example files & change directory. mount_gdrive = 'no' #@param ['yes', 'no'] if mount_gdrive == 'yes': DISCLAIMER = 'Note : Editing in your Google Drive, changes will persist.' from google.colab import drive drive.mount('/content/gdrive') example_root_path = f'/content/gdrive/My Drive/flax/{example_directory}' else: DISCLAIMER = 'WARNING : Editing in VM - changes lost after reboot!!' 
example_root_path = f'/content/{example_directory}' from IPython import display display.display(display.HTML( f'<h1 style="color:red;" class="blink">{DISCLAIMER}</h1>')) if not os.path.isdir(example_root_path): os.makedirs(example_root_path) !cp -r flaxrepo/$example_directory/* "$example_root_path" os.chdir(example_root_path) from google.colab import files for relpath in editor_relpaths: s = open(f'{example_root_path}/{relpath}').read() open(f'{example_root_path}/{relpath}', 'w').write( f'## {DISCLAIMER}\n' + '#' * (len(DISCLAIMER) + 3) + '\n\n' + s) files.view(f'{example_root_path}/{relpath}') # Note : In Colab, above cell changed the working direcoty. !pwd """ Explanation: Flax Imagenet Example <a href="https://colab.research.google.com/github/google/flax/blob/main/examples/imagenet/imagenet.ipynb" ><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Demonstration notebook for https://github.com/google/flax/tree/main/examples/imagenet The Flax Notebook Workflow: Run the entire notebook end-to-end and check out the outputs. This will open Python files in the right-hand editor! You'll be able to interactively explore metrics in TensorBoard. Change config and train for different hyperparameters. Check out the updated TensorBoard plots. Update the code in train.py. Thanks to %autoreload, any changes you make in the file will automatically appear in the notebook. Some ideas to get you started: Change the model. Log some per-batch metrics during training. Add new hyperparameters to configs/default.py and use them in train.py. At any time, feel free to paste code from train.py into the notebook and modify it directly there! Setup End of explanation """ # TPU setup : Boilerplate for connecting JAX to TPU. import os if 'google.colab' in str(get_ipython()) and 'COLAB_TPU_ADDR' in os.environ: # Make sure the Colab Runtime is set to Accelerator: TPU. 
import requests if 'TPU_DRIVER_MODE' not in globals(): url = 'http://' + os.environ['COLAB_TPU_ADDR'].split(':')[0] + ':8475/requestversion/tpu_driver0.1-dev20191206' resp = requests.post(url) TPU_DRIVER_MODE = 1 # The following is required to use TPU Driver as JAX's backend. from jax.config import config config.FLAGS.jax_xla_backend = "tpu_driver" config.FLAGS.jax_backend_target = "grpc://" + os.environ['COLAB_TPU_ADDR'] print('Registered TPU:', config.FLAGS.jax_backend_target) else: print('No TPU detected. Can be changed under "Runtime/Change runtime type".') import json from absl import logging import flax import jax import jax.numpy as jnp from matplotlib import pyplot as plt import numpy as np import tensorflow as tf import tensorflow_datasets as tfds logging.set_verbosity(logging.INFO) assert len(jax.devices()) == 8, f'Expected 8 TPU cores : {jax.devices()}' # Helper functions for images. def show_img(img, ax=None, title=None): """Shows a single image.""" if ax is None: ax = plt.gca() img *= tf.constant(input_pipeline.STDDEV_RGB, shape=[1, 1, 3], dtype=img.dtype) img += tf.constant(input_pipeline.MEAN_RGB, shape=[1, 1, 3], dtype=img.dtype) img = np.clip(img.numpy().astype(int), 0, 255) ax.imshow(img) ax.set_xticks([]) ax.set_yticks([]) if title: ax.set_title(title) def show_img_grid(imgs, titles): """Shows a grid of images.""" n = int(np.ceil(len(imgs)**.5)) _, axs = plt.subplots(n, n, figsize=(3 * n, 3 * n)) for i, (img, title) in enumerate(zip(imgs, titles)): show_img(img, axs[i // n][i % n], title) # Local imports from current directory - auto reload. # Any changes you make to train.py will appear automatically. %load_ext autoreload %autoreload 2 import input_pipeline import models import train from configs import default as config_lib """ Explanation: Imports / Helpers End of explanation """ # We load "imagenette" that has similar pictures to "imagenet2012" but can be # downloaded automatically and is much smaller. 
dataset_builder = tfds.builder('imagenette') dataset_builder.download_and_prepare() ds = dataset_builder.as_dataset('train') dataset_builder.info # Utilities to help with Imagenette labels. ![ ! -f mapping_imagenet.json ] && wget --no-check-certificate https://raw.githubusercontent.com/ozendelait/wordnet-to-json/master/mapping_imagenet.json with open('mapping_imagenet.json') as f: mapping_imagenet = json.load(f) # Mapping imagenette label name to imagenet label index. imagenette_labels = { d['v3p0']: d['label'] for d in mapping_imagenet } # Mapping imagenette label name to human-readable label. imagenette_idx = { d['v3p0']: idx for idx, d in enumerate(mapping_imagenet) } def imagenette_label(idx): """Returns a short human-readable string for provided imagenette index.""" net = dataset_builder.info.features['label'].int2str(idx) return imagenette_labels[net].split(',')[0] def imagenette_imagenet2012(idx): """Returns the imagenet2012 index for provided imagenette index.""" net = dataset_builder.info.features['label'].int2str(idx) return imagenette_idx[net] def imagenet2012_label(idx): """Returns a short human-readable string for provided imagenet2012 index.""" return mapping_imagenet[idx]['label'].split(',')[0] train_ds = input_pipeline.create_split( dataset_builder, 128, train=True, ) eval_ds = input_pipeline.create_split( dataset_builder, 128, train=False, ) train_batch = next(iter(train_ds)) {k: (v.shape, v.dtype) for k, v in train_batch.items()} """ Explanation: Dataset End of explanation """ # Get a live update during training - use the "refresh" button! # (In Jupyter[lab] start "tensorbaord" in the local directory instead.) if 'google.colab' in str(get_ipython()): %load_ext tensorboard %tensorboard --logdir=. 
config = config_lib.get_config() config.dataset = 'imagenette' config.model = 'ResNet18' config.half_precision = True batch_size = 512 config.learning_rate *= batch_size / config.batch_size # linear scaling rule: keep learning_rate / batch_size fixed config.batch_size = batch_size config # Regenerate datasets with updated batch_size. train_ds = input_pipeline.create_split( dataset_builder, config.batch_size, train=True, ) eval_ds = input_pipeline.create_split( dataset_builder, config.batch_size, train=False, ) # Takes ~1.5 min / epoch. for num_epochs in (5, 10): config.num_epochs = num_epochs config.warmup_epochs = config.num_epochs / 10 name = f'{config.model}_{config.learning_rate}_{config.num_epochs}' print(f'\n\n{name}') state = train.train_and_evaluate(config, workdir=f'./models/{name}') if 'google.colab' in str(get_ipython()): #@markdown You can upload the training results directly to https://tensorboard.dev #@markdown #@markdown Note that everybody with the link will be able to see the data. upload_data = 'no' #@param ['yes', 'no'] if upload_data == 'yes': !tensorboard dev upload --one_shot --logdir ./models --name 'Flax examples/imagenet' """ Explanation: Training from scratch End of explanation """ # Load model checkpoint from cloud. from flax.training import checkpoints config_name = 'v100_x8' pretrained_path = f'gs://flax_public/examples/imagenet/{config_name}' latest_checkpoint = checkpoints.natural_sort( tf.io.gfile.glob(f'{pretrained_path}/checkpoint_*'))[0] if not os.path.exists(os.path.basename(latest_checkpoint)): tf.io.gfile.copy(latest_checkpoint, os.path.basename(latest_checkpoint)) !ls -lh checkpoint_* # Load config that was used to train checkpoint. import importlib config = importlib.import_module(f'configs.{config_name}').get_config() # Load models & state (takes ~1 min to load the model). model_cls = getattr(models, config.model) model = train.create_model( model_cls=model_cls, half_precision=config.half_precision) base_learning_rate = config.learning_rate * config.batch_size / 256. 
steps_per_epoch = ( dataset_builder.info.splits['train'].num_examples // config.batch_size ) learning_rate_fn = train.create_learning_rate_fn( config, base_learning_rate, steps_per_epoch) state = train.create_train_state( jax.random.PRNGKey(0), config, model, image_size=input_pipeline.IMAGE_SIZE, learning_rate_fn=learning_rate_fn) state = train.restore_checkpoint(state, './') """ Explanation: Load pre-trained model End of explanation """ # Load batch from imagenette eval set. batch = next(iter(eval_ds)) {k: v.shape for k, v in batch.items()} # Evaluate using model trained on imagenet. logits = model.apply({'params': state.params, 'batch_stats': state.batch_stats}, batch['image'][:128], train=False) # Find classification mistakes. preds_labels = list(zip(logits.argmax(axis=-1), map(imagenette_imagenet2012, batch['label']))) error_idxs = [idx for idx, (pred, label) in enumerate(preds_labels) if pred != label] error_idxs # The mistakes look all quite reasonable. show_img_grid( [batch['image'][idx] for idx in error_idxs[:9]], [f'pred: {imagenet2012_label(preds_labels[idx][0])}\n' f'label: {imagenet2012_label(preds_labels[idx][1])}' for idx in error_idxs[:9]], ) plt.tight_layout() # Define parallelized inference function in separate cell so the the cached # compilation can be used if below cell is executed multiple times. @jax.pmap def p_get_logits(images): return model.apply({'params': state.params, 'batch_stats': state.batch_stats}, images, train=False) eval_iter = train.create_input_iter(dataset_builder, config.batch_size, input_pipeline.IMAGE_SIZE, tf.float32, train=False, cache=True) # Compute accuracy. 
eval_steps = dataset_builder.info.splits['validation'].num_examples // config.batch_size count = correct = 0 for step, batch in zip(range(eval_steps), eval_iter): labels = [imagenette_imagenet2012(label) for label in batch['label'].flatten()] logits = p_get_logits(batch['image']) logits = logits.reshape([-1, logits.shape[-1]]) print(f'Step {step+1}/{eval_steps}...') count += len(labels) correct += (logits.argmax(axis=-1) == jnp.array(labels)).sum() correct / count """ Explanation: Inference End of explanation """
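The count/correct accumulation in the accuracy loop above is a streaming mean over batches; factored out as a framework-free sketch (`streaming_accuracy` is our name, and plain lists stand in for the device arrays):

```python
def streaming_accuracy(batches):
    """Top-1 accuracy accumulated over (predictions, labels) batch pairs."""
    count = correct = 0
    for preds, labels in batches:
        count += len(labels)
        correct += sum(int(p == l) for p, l in zip(preds, labels))
    # guard against an empty iterator so we never divide by zero
    return correct / count if count else 0.0
```

Accumulating integer counts and dividing once at the end avoids the bias of averaging per-batch accuracies when the last batch is smaller than the rest.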
kitu2007/dl_class
gan_mnist/Intro_to_GANs_Exercises.ipynb
mit
%matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') """ Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. 
End of explanation """ def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32,[None,real_dim]) inputs_z = tf.placeholder(tf.float32,[None,z_dim]) return inputs_real, inputs_z """ Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation """ def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('generator', reuse=reuse) as scope: # finish this # Hidden layer h1 = tf.layers.dense(z,n_units) # Leaky ReLU h1 = tf.maximum(alpha*h1,h1) # Logits and tanh output logits = tf.layers.dense(h1,out_dim) out = tf.tanh(logits) return out """ Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. 
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
    # code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$ f(x) = \max(\alpha x, x) $$
Tanh Output
The generator performs best with a $tanh$ activation on the output layer. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    ''' Build the discriminator network.
        Arguments
        ---------
        x : Input tensor for the discriminator
        n_units: Number of units in hidden layer
        reuse : Reuse the variables with tf.variable_scope
        alpha : leak parameter for leaky ReLU

        Returns
        -------
        out, logits:
    '''
    with tf.variable_scope('discriminator', reuse=reuse) as scope: # finish this
        # Hidden layer
        h1 = tf.layers.dense(x, n_units)
        # Leaky ReLU
        h1 = tf.maximum(alpha*h1, h1)

        logits = tf.layers.dense(h1,1)
        out = tf.sigmoid(logits)

        return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size,z_dim=z_size)

# Generator network here
g_model = generator(input_z, input_size, n_units=g_hidden_size, reuse=False, alpha = alpha)
# g_model is the generator output

# Discriminator network here
d_model_real, d_logits_real = discriminator(input_real, n_units= d_hidden_size, reuse=False, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, n_units=d_hidden_size, reuse = True, alpha=alpha)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images.
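Working from logits rather than sigmoid outputs matters for numerical stability: the TensorFlow documentation gives the stable rearrangement max(x, 0) - x*z + log(1 + exp(-|x|)) for this op. A numpy sketch on made-up logits and labels, checked against the naive computation:

```python
import numpy as np

def sigmoid_xent_with_logits(logits, labels):
    # Stable form documented for tf.nn.sigmoid_cross_entropy_with_logits:
    # max(x, 0) - x*z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([2.0, -1.0, 0.0])   # made-up values for illustration
labels = np.array([1.0, 0.0, 1.0])
loss = sigmoid_xent_with_logits(logits, labels)

# The same quantity computed the naive way, for comparison
probs = 1.0 / (1.0 + np.exp(-logits))
naive = -labels * np.log(probs) - (1 - labels) * np.log(1 - probs)
```

The two agree, but the stable form never exponentiates a large positive number, so it won't overflow for extreme logits the way the naive version can.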
To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like
labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
# Optimizers
learning_rate = 0.002

# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if 'generator' in var.name]
d_vars = [var for var in t_vars if 'discriminator' in var.name]

# Pass learning_rate to the optimizers so the value defined above is actually used
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately.
So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates its own network's variables separately.
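The prefix filtering itself is ordinary Python string handling. Here is the idea on made-up variable-name strings (in the real graph the list would come from tf.trainable_variables() and you would filter on var.name):

```python
# Toy stand-ins for the names tf.trainable_variables() would report.
# The specific names here are invented for illustration.
var_names = ['generator/dense/kernel:0', 'generator/dense_1/bias:0',
             'discriminator/dense/kernel:0', 'discriminator/dense_1/bias:0']

g_names = [n for n in var_names if n.startswith('generator')]
d_names = [n for n in var_names if n.startswith('discriminator')]
```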
End of explanation """ batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) """ Explanation: Training End of explanation """ %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() """ Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. 
End of explanation """ def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) """ Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation """ _ = view_samples(-1, samples) """ Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation """ rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) """ Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! End of explanation """ saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) """ Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. 
Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation """
NEONScience/NEON-Data-Skills
tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Flightlines_py/Intro_NEON_AOP_HDF5_Reflectance_Flightlines_py.ipynb
agpl-3.0
#Check that you are using the correct version of Python (should be 3.4+, otherwise gdal won't work) import sys sys.version """ Explanation: syncID: 8491e02fec01499281d05f3b92409e27 title: "NEON AOP Hyperspectral Data in HDF5 format with Python - Flightlines" description: "Learn how to read NEON AOP hyperspectral flightline data using Python and develop skills to manipulate and visualize spectral data." dateCreated: 2017-04-10 authors: Bridget Hass contributors: Donal O'Leary estimatedTime: 1 hour packagesLibraries: numpy, h5py, gdal, matplotlib.pyplot topics: hyperspectral-remote-sensing, HDF5, remote-sensing languagesTool: python dataProduct: NEON.DP1.30006, NEON.DP1.30008 code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/intro-hyperspectral/Intro_NEON_AOP_HDF5_Reflectance_Flightlines_py/Intro_NEON_AOP_HDF5_Reflectance_Flightlines_py.ipynb tutorialSeries: intro-hsi-py-series urlTitle: neon-aop-hdf5-py In this introductory tutorial, we discuss how to read NEON AOP hyperspectral flightline data using Python. We develop and practice skills and use several tools to manipulate and visualize the spectral data. By the end of this tutorial, you will become familiar with the Python syntax. If you are interested in learning how to do this for mosaic/tiled NEON AOP hyperspectral data, please see <a href="https://www.neonscience.org/resources/learning-hub/tutorials/neon-aop-hdf5-tile-py" target="_blank"> NEON AOP Hyperspectral Data in HDF5 format with Python - Tiles</a>. Learning Objectives After completing this tutorial, you will be able to: Import and use Python packages numpy, pandas, matplotlib, h5py, and gdal. Use the package h5py and the visititems functionality to read an HDF5 file and view data attributes. Read the data ignore value and scaling factor and apply these values to produce a cleaned reflectance array. 
Extract and plot a single band of reflectance data Plot a histogram of reflectance values to visualize the range and distribution of values. Subset an hdf5 reflectance file from the full flightline to a smaller region of interest (if you complete the optional extension). Apply a histogram stretch and adaptive equalization to improve the contrast of an image (if you complete the optional extension) . Install Python Packages numpy pandas gdal matplotlib h5py Download Data To complete this tutorial, you will use data available from the NEON 2017 Data Institute. This tutorial uses the following files: <ul> <li> <a href="https://www.neonscience.org/sites/default/files/neon_aop_refl_hdf5_functions.py_.zip">neon_aop_refl_hdf5_functions.py_.zip (5 KB)</a> <- Click to Download</li> <li><a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo2c5fc2-6f9e-4c25-97c2-ca8125c3a53a" target="_blank">NEON_D02_SERC_DP1_20160807_160559_reflectance.h5 (3.55 GB)</a> <- Click to Download</li> </ul> <a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo2c5fc2-6f9e-4c25-97c2-ca8125c3a53a" class="link--button link--arrow"> Download Dataset</a> The LiDAR and imagery data used to create this raster teaching data subset were collected over the <a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a> <a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a> and processed at NEON headquarters. The entire dataset can be accessed on the <a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>. These data are a part of the NEON 2017 Remote Sensing Data Institute. The complete archive may be found here -<a href="https://neondata.sharefile.com/d-s11d5c8b9c53426db"> NEON Teaching Data Subset: Data Institute 2017 Data Set</a> Hyperspectral remote sensing data is a useful tool for measuring changes to our environment at the Earth’s surface. 
In this tutorial we explore how to extract information from NEON AOP hyperspectral reflectance data, stored in HDF5 format. Mapping the Invisible: Introduction to Spectral Remote Sensing For more information on spectral remote sensing watch this video. <iframe width="560" height="315" src="https://www.youtube.com/embed/3iaFzafWJQE" frameborder="0" allowfullscreen></iframe> Set up Before we start coding, make sure you are using the correct version of Python. The gdal package is currently compatible with Python versions 3.4 and earlier (May 2017). For this tutorial, we will use Python version 3.4. End of explanation """ import numpy as np import h5py import gdal, osr import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings('ignore') """ Explanation: First let's import the required packages and set our display preferences so that plots are inline and plot warnings are off: End of explanation """ f = h5py.File('/Users/olearyd/Git/data/NEON_D02_SERC_DP1_20160807_160559_reflectance.h5','r') """ Explanation: Read hdf5 file into Python f = h5py.File('file.h5','r') reads in an h5 file to the variable f. If the h5 file is stored in a different directory, make sure to include the relative path to that directory (In this example, the path is ../data/SERC/hypserspectral) End of explanation """ #list_dataset lists the names of datasets in an hdf5 file def list_dataset(name,node): if isinstance(node, h5py.Dataset): print(name) f.visititems(list_dataset) """ Explanation: Explore the Files We can look inside the HDF5 dataset with the h5py visititems function. 
The list_dataset function defined below displays all datasets stored in the hdf5 file and their locations within the hdf5 file: End of explanation """ #ls_dataset displays the name, shape, and type of datasets in hdf5 file def ls_dataset(name,node): if isinstance(node, h5py.Dataset): print(node) f.visititems(ls_dataset) """ Explanation: We can display the name, shape, and type of each of these datasets using the ls_dataset function defined below, which is also called with visititems: End of explanation """ serc_refl = f['SERC']['Reflectance'] print(serc_refl) """ Explanation: Now that we see the general structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data. End of explanation """ serc_reflArray = serc_refl['Reflectance_Data'] print(serc_reflArray) """ Explanation: The two members of the HDF5 group /SERC/Reflectance are Metadata and Reflectance_Data. Let's save the reflectance data as the variable serc_reflArray. End of explanation """ refl_shape = serc_reflArray.shape print('SERC Reflectance Data Dimensions:',refl_shape) """ Explanation: We can extract the shape as follows: End of explanation """ #View wavelength information and values wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength'] print(wavelengths) # print(wavelengths.value) # Display min & max wavelengths print('min wavelength:', np.amin(wavelengths),'nm') print('max wavelength:', np.amax(wavelengths),'nm') #show the band width print('band width =',(wavelengths.value[1]-wavelengths.value[0]),'nm') print('band width =',(wavelengths.value[-1]-wavelengths.value[-2]),'nm') """ Explanation: This corresponds to (y,x, # of bands), where (x,y) are the dimensions of the reflectance array in pixels (1m x 1m). Shape, in Python, is read Y, X, and number of bands so we see we have a flight path is 10852 pixels long and 1106 pixels wide. 
Note that some programs read the dimensions the other way around (column, row rather than row, column) and will give the reverse output. The number of bands is 426. All NEON hyperspectral data contains 426 wavelength bands. Let's take a look at the wavelength values.
End of explanation
"""
#View wavelength information and values
wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']
print(wavelengths)
# print(wavelengths[:])

# Display min & max wavelengths
print('min wavelength:', np.amin(wavelengths),'nm')
print('max wavelength:', np.amax(wavelengths),'nm')

#show the band width
print('band width =',(wavelengths[1]-wavelengths[0]),'nm')
print('band width =',(wavelengths[-1]-wavelengths[-2]),'nm')
"""
Explanation: The wavelengths recorded range from 383.66 - 2511.94 nm, and each band covers a range of ~5 nm. Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info:
End of explanation
"""
serc_mapInfo = serc_refl['Metadata']['Coordinate_System']['Map_Info']
print('SERC Map Info:\n',serc_mapInfo[()])
"""
Explanation: Understanding the output:
The 4th and 5th columns of map info signify the coordinates of the map origin, which refers to the upper-left corner of the image (xMin, yMax).
The letter b appears before UTM. This appears because the variable-length string data is stored in binary format when it is written to the hdf5 file. Don't worry about it for now, as we will convert the numerical data we need into floating point numbers. Learn more about HDF5 strings, on the <a href="http://docs.h5py.org/en/latest/strings.html" target="_blank">h5py Read the Docs</a>.
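You can see exactly where that b comes from with a toy byte string shaped like the map info (the coordinate values below are made up, not the real SERC origin):

```python
# A made-up map-info byte string, shaped like the real one
raw = b'UTM, 1.000, 1.000, 368000.000, 4307000.000'

as_text = str(raw)          # keeps the leading b and the quotes inside the text
parts = as_text.split(",")  # so the first field comes out as "b'UTM"

x_origin = float(parts[3])  # float() tolerates the leading space
```

This is the same str-then-split pattern used on the real map info below: the b prefix only contaminates the first (text) field, while the numeric fields still convert cleanly with float().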
Let's extract relevant information from the Map_Info metadata to define the spatial extent of this dataset:
End of explanation
"""
#First convert mapInfo to a string, and divide into separate strings using a comma separator
mapInfo_string = str(serc_mapInfo[()]) #convert to string
mapInfo_split = mapInfo_string.split(",") #split the strings using the separator ","
print(mapInfo_split)
"""
Explanation: Now we can extract the spatial information we need from the map info values, convert them to the appropriate data types (e.g. float) and store it in a way that will enable us to access and apply it later:
End of explanation
"""
#Extract the resolution & convert to floating decimal number
res = float(mapInfo_split[5]),float(mapInfo_split[6])
print('Resolution:',res)

#Extract the upper left-hand corner coordinates from mapInfo
xMin = float(mapInfo_split[3])
yMax = float(mapInfo_split[4])

#Calculate the xMax and yMin values from the dimensions
#xMax = left corner + (# of columns * resolution)
xMax = xMin + (refl_shape[1]*res[0])
yMin = yMax - (refl_shape[0]*res[1])
# print('xMin:',xMin) ; print('xMax:',xMax)
# print('yMin:',yMin) ; print('yMax:',yMax)

serc_ext = (xMin, xMax, yMin, yMax)
print('serc_ext:',serc_ext)

#Can also create a dictionary of extent:
serc_extDict = {}
serc_extDict['xMin'] = xMin
serc_extDict['xMax'] = xMax
serc_extDict['yMin'] = yMin
serc_extDict['yMax'] = yMax
print('serc_extDict:',serc_extDict)
"""
Explanation: Extract a Single Band from Array
End of explanation
"""
print('b56 wavelength:',wavelengths[55],"nanometers") #band 56 is index 55 (Python is 0-indexed)
b56 = serc_reflArray[:,:,55].astype(float)
print('b56 type:',type(b56))
print('b56 shape:',b56.shape)
print('Band 56 Reflectance:\n',b56)
# plt.hist(b56.flatten())
"""
Explanation: Scale factor and No Data Value
This array represents the unscaled reflectance for band 56.
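The cleanup we are about to apply — replace the ignore value with NaN, then divide by the scale factor — can be sketched first on a toy array (all values below are made up; the real ignore value and scale factor are read from the file's attributes in the next section):

```python
import numpy as np

# Toy unscaled reflectance values; -9999 marks no-data
raw = np.array([[1250., 380.], [-9999., 10000.]])
noData, scale = -9999, 10000.0

clean = raw.copy()
clean[clean == noData] = np.nan   # mask the ignore value first
clean = clean / scale             # then rescale to 0-1 reflectance
```

The order matters: masking first keeps the ignore value from being divided into a small-but-plausible reflectance (-0.9999) that would silently skew statistics and histograms.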
Recall from exploring the HDF5 value in HDFViewer that the Data_Ignore_Value=-9999, and the Scale_Factor=10000.0. <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/HDF5-general/hdfview_SERCrefl.png"></a> <figcaption> Screenshot of the NEON HDF5 file format. Source: National Ecological Observatory Network </figcaption> </figure> We can extract and apply the no data value and scale factor as follows: End of explanation """ plt.hist(b56[~np.isnan(b56)],50); plt.title('Histogram of SERC Band 56 Reflectance') plt.xlabel('Reflectance'); plt.ylabel('Frequency') """ Explanation: Plot histogram End of explanation """ serc_fig = plt.figure(figsize=(20,10)) ax1 = serc_fig.add_subplot(1,2,1) # serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='jet',clim=(0,0.1)) serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='jet') cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance') plt.title('SERC Band 56 Reflectance'); #ax = plt.gca(); ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree # plot histogram of reflectance values (with 50 bins) ax2 = serc_fig.add_subplot(2,2,2) ax2.hist(b56[~np.isnan(b56)],50); plt.title('Histogram of SERC Reflectance') plt.xlabel('Reflectance'); plt.ylabel('Frequency') # plot histogram, zooming in on values < 0.5 ax3 = serc_fig.add_subplot(2,2,4) ax3.hist(b56[~np.isnan(b56)],50); plt.title('Histogram of SERC Reflectance, 0-0.5') plt.xlabel('Reflectance'); plt.ylabel('Frequency') ax3.set_xlim([0,0.5]) """ Explanation: Plot Single Band Now we can plot this band using the Python package matplotlib.pyplot, which we imported at the beginning of the lesson as plt. Note that the default colormap is jet unless otherwise specified. 
We will explore using different colormaps a little later. End of explanation """ # Plot in grayscale with different color limits # Higher reflectance is lighter/brighter, lower reflectance is darker serc_fig2 = plt.figure(figsize=(15,15)) ax1 = serc_fig2.add_subplot(1,3,1) serc_plot = ax1.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.3)) cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance') plt.title('clim = 0-0.3'); #ax = plt.gca(); ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree ax2 = serc_fig2.add_subplot(1,3,2) serc_plot = ax2.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.2)) cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance') plt.title('clim = 0-0.2'); #ax = plt.gca(); ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree ax3 = serc_fig2.add_subplot(1,3,3) serc_plot = ax3.imshow(b56,extent=serc_ext,cmap='gray',clim=(0,0.1)) cbar = plt.colorbar(serc_plot,aspect=50); cbar.set_label('Reflectance') plt.title('clim = 0-0.1'); #ax = plt.gca(); ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax3.get_xticklabels(),rotation=270) #rotate x tick labels 90 degree """ Explanation: Note from both the plot and histogram of the reflectance values that almost all of the reflectance values range from 0.0-0.35. 
In order to see more contrast in the plot, we try out a couple things: adjust the color limits to only show the relevant range using the imshow clim option apply linear contrast stretch or histogram equalization End of explanation """ def calc_clip_index(clipExtent, fullExtent, xscale=1, yscale=1): h5rows = fullExtent['yMax'] - fullExtent['yMin'] h5cols = fullExtent['xMax'] - fullExtent['xMin'] indExtent = {} indExtent['xMin'] = round((clipExtent['xMin']-fullExtent['xMin'])/xscale) indExtent['xMax'] = round((clipExtent['xMax']-fullExtent['xMin'])/xscale) indExtent['yMax'] = round(h5rows - (clipExtent['yMin']-fullExtent['yMin'])/xscale) indExtent['yMin'] = round(h5rows - (clipExtent['yMax']-fullExtent['yMin'])/yscale) return indExtent #Define clip extent clipExtent = {} clipExtent['xMin'] = 367400 clipExtent['xMax'] = 368100 clipExtent['yMin'] = 4305750 clipExtent['yMax'] = 4306350 """ Explanation: Extension: Plot a subset a flightline You may want to zoom in on a specific region within a flightline for further analysis. To do this, we need to subset the data, which requires the following steps: Define the spatial extent of the data subset (or clip) that we want to zoom in on. Determine the pixel indices of the full flightline that correspond to these spatial extents. Subscript the full flightline array with these indices to create a subset. For this exercise, we will zoom in on a region in the middle of this SERC flight line, around UTM y = 4306000 m. We will load the function calc_clip_index, which reads in a dictionary of the spatial extent of the clipped region of interest, and a dictionary of the full extent of the array you are subsetting, and returns the pixel indices corresponding to the full flightline array. 
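The index arithmetic inside calc_clip_index can be checked on illustrative numbers first. The full extent below is made up (not the real SERC coordinates), but it assumes 1 m pixels and the same 1106 x 10852 pixel dimensions as this flightline; because image rows count downward from the top (yMax), the y conversion subtracts from the total row count:

```python
# Illustrative extents only -- not the real SERC origin coordinates.
full = {'xMin': 367000.0, 'xMax': 368106.0, 'yMin': 4300141.0, 'yMax': 4310993.0}
clip = {'xMin': 367400.0, 'xMax': 368100.0, 'yMin': 4305750.0, 'yMax': 4306350.0}

rows = full['yMax'] - full['yMin']                     # 10852 rows in the full flightline
col_min = round(clip['xMin'] - full['xMin'])           # columns count up from xMin
col_max = round(clip['xMax'] - full['xMin'])
row_min = round(rows - (clip['yMax'] - full['yMin']))  # rows count down from yMax
row_max = round(rows - (clip['yMin'] - full['yMin']))
```

With these numbers the clip maps to a 600-row by 700-column window, which matches the subset dimensions produced below.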
End of explanation """ serc_subInd = calc_clip_index(clipExtent,serc_extDict) print('SERC Subset Index:',serc_subInd) """ Explanation: Use this function to find the indices corresponding to the clip extent that we specified above for SERC: End of explanation """ serc_subArray = serc_reflArray[serc_subInd['yMin']:serc_subInd['yMax'],serc_subInd['xMin']:serc_subInd['xMax'],:] serc_subExt = (clipExtent['xMin'],clipExtent['xMax'],clipExtent['yMin'],clipExtent['yMax']) print('SERC Reflectance Subset Dimensions:',serc_subArray.shape) """ Explanation: We can now use these indices to create a subsetted array, with dimensions 600 x 700 x 426. End of explanation """ serc_b56_subset = serc_subArray[:,:,55].astype(np.float) serc_b56_subset[serc_b56_subset==int(noDataValue)]=np.nan serc_b56_subset = serc_b56_subset/scaleFactor #print(serc_b56_subset) """ Explanation: Extract band 56 from this subset, and clean by applying the no data value and scale factor: End of explanation """ print('SERC Subsetted Band 56 Reflectance Stats:') print('min reflectance:',np.nanmin(serc_b56_subset)) print('mean reflectance:',round(np.nanmean(serc_b56_subset),2)) print('max reflectance:',round(np.nanmax(serc_b56_subset),2)) """ Explanation: Take a quick look at the minimum, maximum, and mean reflectance values in this subsetted area: End of explanation """ fig = plt.figure(figsize=(15,5)) ax1 = fig.add_subplot(1,2,1) serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='gist_earth') cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance') plt.title('SERC Subset Band 56 Reflectance'); ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree ax2 = fig.add_subplot(1,2,2) plt.hist(serc_b56_subset[~np.isnan(serc_b56_subset)],50); plt.title('Histogram of SERC Subset Band 56 Reflectance') plt.xlabel('Reflectance'); plt.ylabel('Frequency') """ Explanation: 
Lastly, plot the data and a histogram of the reflectance values to see what the distribution looks like. End of explanation """ from skimage import exposure # Contrast stretching p2, p98 = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (2, 98)) img_rescale2pct = exposure.rescale_intensity(serc_b56_subset, in_range=(p2, p98)) fig = plt.figure(figsize=(15,5)) ax1 = fig.add_subplot(1,2,1) plt.imshow(img_rescale2pct,extent=serc_subExt,cmap='gist_earth') cbar = plt.colorbar(); cbar.set_label('Reflectance') plt.title('SERC Band 56 Subset \n Linear 2% Contrast Stretch'); rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree p8, p92 = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (8, 92)) img_rescale8pct = exposure.rescale_intensity(serc_b56_subset, in_range=(p8, p92)) ax2 = fig.add_subplot(1,2,2) plt.imshow(img_rescale8pct,extent=serc_subExt,cmap='gist_earth') cbar = plt.colorbar(); cbar.set_label('Reflectance') plt.title('SERC Band 56 Subset \n Linear 8% Contrast Stretch'); rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree """ Explanation: Note that most of the reflectance values are < 0.5, but the colorbar scale ranges from 0 - 1.6. This results in a low-contrast image; with this colormap, most of the image is blue, and the contents are difficult to discern. A few simple plot adjustments can be made to better display and visualize the reflectance data: Other colormaps with the cmap option. For a list of colormaps, refer to <a href="http://matplotlib.org/examples/color/colormaps_reference.html" target="_blank"> Matplotlib's color example code</a>. Note: You can reverse the order of these colormaps by appending _r to the end (e.g., spectral_r). Adjust the colorbar limits -- looking at the histogram, most of the reflectance data < 0.08, so you can adjust the maximum clim value for more visual contrast. 
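Another option is a percentile-based contrast stretch, sketched here on made-up values with plain numpy; the extension below does the same thing with scikit-image's rescale_intensity:

```python
import numpy as np

# Made-up reflectance values: mostly small, with one bright outlier
vals = np.array([0.01, 0.02, 0.05, 0.08, 0.10, 0.30, 1.20])

# Map the 2nd-98th percentile range onto [0, 1], clipping the tails
p2, p98 = np.percentile(vals, (2, 98))
stretched = np.clip((vals - p2) / (p98 - p2), 0, 1)
```

Because the stretch is anchored to percentiles rather than the absolute min/max, a single bright outlier (like the dock in this scene) no longer compresses the rest of the values into a narrow, low-contrast band.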
<div id="ds-challenge" markdown="1">

### Challenge: Plot options to visualize the data

Use the above suggestions to replot your previous plot to show other traits. Example Challenge Code is shown at the end of this tutorial.

</div>

Extension: Basic Image Processing -- Contrast Stretch & Histogram Equalization
We can also try out some basic image processing to better visualize the reflectance data using the scikit-image package.
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as we will show how to do below.
<figure>
    <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png">
    <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-general/histogram_equalization.png"></a>
    <figcaption> Histogram equalization is a method in image processing of contrast adjustment using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as we will show how to do below.
    Source: <a href="https://en.wikipedia.org/wiki/Talk%3AHistogram_equalization#/media/File:Histogrammspreizung.png"> Wikipedia - Public Domain </a>
    </figcaption>
</figure>

The following tutorial section is adapted from scikit-image's tutorial <a href="http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#sphx-glr-auto-examples-color-exposure-plot-equalize-py" target="_blank">Histogram Equalization</a>.
Let's start with trying a 2% and 8% linear contrast stretch:
End of explanation
"""
from IPython.html.widgets import *

def linearStretch(percent):
    pLow, pHigh = np.percentile(serc_b56_subset[~np.isnan(serc_b56_subset)], (percent,100-percent))
    img_rescale = exposure.rescale_intensity(serc_b56_subset, in_range=(pLow,pHigh))
    plt.imshow(img_rescale,extent=serc_subExt,cmap='gist_earth')
    cbar = plt.colorbar(); cbar.set_label('Reflectance')
    plt.title('SERC Band 56 Subset \n Linear ' + str(percent) + '% Contrast Stretch');
    ax = plt.gca()
    ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
    # rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree

interact(linearStretch,percent=(0,100,1))
""" Explanation:
Notice that the 8% stretch image (right) washes out some of the objects with higher reflectance (eg. the dock & buildings), but does a better job showing contrast of the vegetation (eg. grass, trees, shadows).
Explore the contrast stretch feature interactively using Python widgets
End of explanation
"""
#Adaptive Equalized Histogram
img_nonan = np.ma.masked_invalid(serc_b56_subset) #first mask the image
img_adapteq = exposure.equalize_adapthist(img_nonan, clip_limit=.05)
print('img_adapteq min:',np.min(img_adapteq))
print('img_adapteq max:',np.max(img_adapteq))

# Display Adaptively Equalized Image
fig = plt.figure(figsize=(15,6))
ax1 = fig.add_subplot(1,2,1)
ax1.imshow(img_adapteq,extent=serc_subExt,cmap='gist_earth')
rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
plt.title('SERC Band 56 Subset \n Adaptive Equalized Histogram');

# Display histogram
bins=100
ax_hist = fig.add_subplot(1,2,2)
ax_hist.hist(img_adapteq.ravel(),bins); #np.ravel flattens an array into one dimension
plt.title('SERC Band 56 Subset \n Adaptive Equalized Histogram');
ax_hist.set_xlabel('Pixel Intensity'); ax_hist.set_ylabel('# of Pixels')

# Display cumulative distribution
ax_cdf = ax_hist.twinx()
img_cdf, bins = exposure.cumulative_distribution(img_adapteq,bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_ylabel('Fraction of Total Intensity')
""" Explanation:
Apply Adaptive Histogram Equalization to Improve Image Contrast
End of explanation
"""
fig = plt.figure(figsize=(15,12))

#spectral Colormap, 0-0.08
ax1 = fig.add_subplot(2,2,1)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='Spectral',clim=(0,0.08))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n spectral colormap, 0-0.08');
ax1.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
# rotatexlabels = plt.setp(ax1.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree

#gist_earth colormap, 0-0.10
ax2 = fig.add_subplot(2,2,2)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='gist_earth',clim=(0,0.1))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n gist_earth colormap, 0-0.10');
ax2.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
# rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree

#YlGn_r colormap, 0-0.08
ax3 = fig.add_subplot(2,2,3)
serc_subset_plot = plt.imshow(serc_b56_subset,extent=serc_subExt,cmap='YlGn_r',clim=(0,0.08))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Reflectance')
plt.title('Subset SERC Band 56 Reflectance\n YlGn_r colormap, 0-0.08');
ax3.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
# rotatexlabels = plt.setp(ax3.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree

#For the last example, take the logarithm of the reflectance data to stretch the values:
serc_b56_subset_log = np.log(serc_b56_subset);
ax4 = fig.add_subplot(2,2,4)
serc_subset_plot = plt.imshow(serc_b56_subset_log,extent=serc_subExt,cmap='jet',clim=(-5,-3))
cbar = plt.colorbar(serc_subset_plot); cbar.set_label('Log(Reflectance)')
plt.title('Subset SERC log(Band 56 Reflectance)\n jet colormap');
ax4.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
# rotatexlabels = plt.setp(ax4.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
""" Explanation:
With contrast-limited adaptive histogram equalization, you can see more detail in the image, and the highly reflective objects are not washed out, as they were in the linearly-stretched images.
Download Function Module
All of the functions we just wrote are available as a Python module.
<a href="https://www.neonscience.org/sites/default/files/neon_aop_refl_hdf5_functions.py_.zip">neon_aop_refl_hdf5_functions.py_.zip (5 KB)</a> <- Click to Download
Challenge Code Solutions
Challenge: Plot options to visualize the data
End of explanation
"""
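As a closing aside, the linear percentile stretch applied above via `exposure.rescale_intensity` can be sketched in a few lines of plain NumPy. This is a minimal illustrative sketch, not part of the NEON module; the function name `stretch_percentile` is our own, and it assumes a 2-D array with NaN marking no-data pixels:

```python
import numpy as np

def stretch_percentile(img, low_pct, high_pct):
    # Percentile cutoffs are computed over valid (non-NaN) pixels only
    valid = img[~np.isnan(img)]
    p_lo, p_hi = np.percentile(valid, (low_pct, high_pct))
    # Clip to the cutoffs, then rescale linearly onto [0, 1];
    # NaN (no-data) pixels propagate through unchanged
    out = np.clip(img, p_lo, p_hi)
    return (out - p_lo) / (p_hi - p_lo)

# Tiny example with one no-data pixel
img = np.array([[0.0, 0.02], [0.04, np.nan]])
stretched = stretch_percentile(img, 2, 98)
```

After the stretch, valid pixels span the full [0, 1] range, which is what gives the displayed image its added contrast.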
ngast/rmf_tool
examples/BasicExample_SIR.ipynb
mit
# To load the library
import rmftool as rmf
import importlib
importlib.reload(rmf)

# To plot the results
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
%matplotlib notebook
""" Explanation:
This document demonstrates how to use the library to define a "density dependent population process" and to compute its mean-field approximation and refined mean-field approximation.
End of explanation
"""
# This code creates an object that represents a "density dependent population process"
ddpp = rmf.DDPP()

# We then add the three transitions :
ddpp.add_transition([-1,1,0],lambda x:x[0]+2*x[0]*x[1])
ddpp.add_transition([0,-1,+1],lambda x:x[1])
ddpp.add_transition([1,0,-1],lambda x:3*x[2])
""" Explanation:
Example : basic SIR model
Suppose that we want to simulate a model composed of $N$ agents where each agent has 3 possible states : $S$ (susceptible) $I$ (infected) and $R$ (recovered). If we denote by $x_S$, $x_I$ and $x_R$ the fractions of agents in each state, and assume that :
* A susceptible agent becomes infected at rate $x_S + 2x_Sx_I$
* An infected agent becomes recovered at rate $x_I$
* A recovered agent becomes susceptible at rate $3*x_R$
In terms of population process, we get the following transitions:
* $x \mapsto x+\frac1N(-1,1,0)$ at rate $x_0+2x_0x_1$
* $x \mapsto x+\frac1N(0,-1,1)$ at rate $x_1$
* $x \mapsto x+\frac1N(1,0,-1)$ at rate $3x_2$
End of explanation
"""
ddpp.set_initial_state([.3,.2,.5]) # We first need to define an initial state

T,X = ddpp.simulate(100,time=10)   # We first plot a trajectory for $N=100$
plt.plot(T,X)

T,X = ddpp.simulate(1000,time=10)  # Then for $N=1000$
plt.plot(T,X,'--')
""" Explanation:
Simulation and comparison with ODE
We can easily simulate one sample trajectory.
Simulation for various values of $N$
End of explanation
"""
plt.figure()
ddpp.plot_ODE_vs_simulation(N=100)

plt.figure()
ddpp.plot_ODE_vs_simulation(N=1000)
""" Explanation:
Comparison with the ODE approximation
We can easily compare simulations with the ODE
approximation
End of explanation
"""
%time pi,V,W = ddpp.meanFieldExpansionSteadyState(order=1)
print(pi,V)
""" Explanation:
Refined mean-field approximation (reference to be added)
This class also contains some functions to compute the fixed point of the mean-field approximation, to compute the "refined mean-field approximation" and to compare it with simulations.
If $\pi$ is the fixed point of the ODE, and $V$ the constant calculated by the function "meanFieldExpansionSteadyState(order=1)", then we have
$$E[X^N] = \pi + \frac1N V + o\left(\frac1N\right)$$
To compute these constants :
End of explanation
"""
print(pi,'(mean-field)')
for N in [10,50,100]:
    Xs,Vs = ddpp.steady_state_simulation(N=N,time=100000/N)
    print(Xs,'(Simulation, N={})'.format(N))
    print('+/-',Vs)
    print(pi+V/N,'(refined mean-field, N={})'.format(N))
""" Explanation:
Comparison of theoretical V and simulation
We observe that, for this model, the mean-field approximation is already very close to the simulation.
End of explanation
"""
Xm,Xrmf,Xs,Vs = ddpp.compare_refinedMF(N=10,time=10000)
print(Xm, 'mean-field')
print(Xrmf,'Refined mean-field')
print(Xs, 'Simulation')
print(Vs,'Confidence interval of simulation (rough estimation)')
""" Explanation:
The function compare_refinedMF can be used to compare the refined mean-field "x+C/N" to the expectation of $X^N$. Note that the expectation is computed by using forward simulation up to time "time"; the value $E[X^N]$ is then the temporal average of $X^N(t)$ from t="time/2" to "time". (Hence, "time" should be manually chosen so as to minimize the variance).
End of explanation
"""
n=3
T,X,V,A,XVWABCD=ddpp.meanFieldExpansionTransient(order=2,time=10)
N=10
plt.figure()
plt.plot(T,X,'-')
plt.plot(T,X+V/N,'--')
plt.plot(T,X+V/N+A/N**2,':')
plt.legend(['Mean field approx.','','','$O(1/N)$-expansion','','','$O(1/N^2)$-expansion'])
""" Explanation:
Transient analysis
Here we do not compare with simulation but just show how the vectors V(t) and A(t) can be computed.
We observe that for this model the $O(1/N)$ and $O(1/N^2)$ expansions are very close. End of explanation """
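To make the dynamics above concrete, here is a minimal Gillespie-style sketch of how a trajectory of such a density dependent population process can be simulated: each transition $x \mapsto x + \ell/N$ fires at rate $N\beta_\ell(x)$. This is our own illustrative code under those standard assumptions, not the rmf_tool implementation; the name `simulate_ddpp` is ours:

```python
import numpy as np

def simulate_ddpp(x0, transitions, rates, N, t_end, seed=0):
    # transitions: jump vectors ell; rates: functions beta_ell(x)
    # Each jump x -> x + ell/N occurs at rate N * beta_ell(x)
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        r = np.maximum([N * beta(x) for beta in rates], 0.0)  # guard round-off
        total = r.sum()
        if total <= 0:
            break
        t += rng.exponential(1.0 / total)          # time to next event
        k = rng.choice(len(rates), p=r / total)    # which transition fires
        x = x + np.array(transitions[k]) / N
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# The SIR example from above
transitions = [[-1, 1, 0], [0, -1, 1], [1, 0, -1]]
rates = [lambda x: x[0] + 2 * x[0] * x[1], lambda x: x[1], lambda x: 3 * x[2]]
T, X = simulate_ddpp([.3, .2, .5], transitions, rates, N=100, t_end=5.0)
```

Since every jump vector sums to zero, the trajectory stays on the simplex, and each rate vanishes when its source state is empty, so the populations stay non-negative.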
taku-y/bmlingam
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
mit
%matplotlib inline
%autosave 0

import sys, os
sys.path.insert(0, os.path.expanduser('~/work/tmp/20160915-bmlingam/bmlingam'))

from copy import deepcopy
import hashlib

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import time

from bmlingam import do_mcmc_bmlingam, InferParams, MCMCParams, save_pklz, load_pklz, define_hparam_searchspace, find_best_model
from bmlingam.utils.gendata import GenDataParams, gen_artificial_data
""" Explanation:
Bayes factors when the causal effect is small
In this notebook, we check whether the Bayes factor can be used to detect cases where the causal effect is small, when the regression coefficient is generated from a uniform distribution U(-1.5, 1.5).
End of explanation
"""
conds = [
    {
        'totalnoise': totalnoise,
        'L_cov_21s': L_cov_21s,
        'n_samples': n_samples,
        'n_confs': n_confs,
        'data_noise_type': data_noise_type,
        'model_noise_type': model_noise_type,
        'b21_dist': b21_dist
    }
    for totalnoise in [0.5, 1.0]
    for L_cov_21s in [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]
    for n_samples in [100]
    for n_confs in [10] # [1, 3, 5, 10]
    for data_noise_type in ['laplace', 'uniform']
    for model_noise_type in ['gg']
    for b21_dist in ['uniform(-1.5, 1.5)']
]
print('{} conditions'.format(len(conds)))
""" Explanation:
Experimental conditions
The experimental conditions are as follows.
Artificial data parameters
Number of samples (n_samples): [100]
Total noise scale: $c=0.5, 1.0$.
Scale of each confounder: $c/\sqrt{Q}$
Observation noise distribution of the data (data_noise_type): ['laplace', 'uniform']
Number of confounders (n_confs or $Q$): [10]
Observation noise scale of the data: fixed at 3
Distribution of the regression coefficient: uniform(-1.5, 1.5)
Estimation hyperparameters
Correlation coefficient of the confounders (L_cov_21s): [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]
Model observation noise distribution (model_noise_type): ['gg']
End of explanation
"""
def gen_artificial_data_given_cond(ix_trial, cond):
    # Set the artificial-data generation parameters from the experimental condition
    n_confs = cond['n_confs']
    gen_data_params = deepcopy(gen_data_params_default)
    gen_data_params.n_samples = cond['n_samples']
    gen_data_params.conf_dist = [['all'] for _ in range(n_confs)]
    gen_data_params.e1_dist = [cond['data_noise_type']]
    gen_data_params.e2_dist = [cond['data_noise_type']]
    gen_data_params.b21_dist = cond['b21_dist']

    noise_scale = cond['totalnoise'] / np.sqrt(n_confs)
    gen_data_params.f1_coef = [noise_scale for _ in range(n_confs)]
    gen_data_params.f2_coef = [noise_scale for _ in range(n_confs)]

    # Generate the artificial data
    gen_data_params.seed = ix_trial
    data = gen_artificial_data(gen_data_params)

    return data

# Baseline values of the artificial-data generation parameters
gen_data_params_default = GenDataParams(
    n_samples=100,
    b21_dist='r2intervals',
    mu1_dist='randn',
    mu2_dist='randn',
    f1_scale=1.0,
    f2_scale=1.0,
    f1_coef=['r2intervals', 'r2intervals', 'r2intervals'],
    f2_coef=['r2intervals', 'r2intervals', 'r2intervals'],
    conf_dist=[['all'], ['all'], ['all']],
    e1_std=3.0,
    e2_std=3.0,
    e1_dist=['laplace'],
    e2_dist=['laplace'],
    seed=0
)
""" Explanation:
Generating the artificial data
We define a function that generates artificial data based on an experimental condition.
End of explanation
"""
data = gen_artificial_data_given_cond(0, conds[0])
xs = data['xs']
plt.figure(figsize=(3, 3))
plt.scatter(xs[:, 0], xs[:, 1])

data = gen_artificial_data_given_cond(0,
    {
        'totalnoise': 3 * np.sqrt(1),
        'n_samples': 10000,
        'n_confs': 1,
        'data_noise_type': 'laplace',
        'b21_dist': 'uniform(-1.5, 1.5)'
    }
)
xs = data['xs']
plt.figure(figsize=(3, 3))
plt.scatter(xs[:, 0], xs[:, 1])
""" Explanation:
An example run:
End of explanation
"""
n_trials_per_cond = 100
""" Explanation:
Definition of a trial
A trial is the process of running causal inference and evaluating its accuracy on a single generated artificial dataset. For each experimental condition, the trial is run 100 times.
End of explanation
"""
# Causal inference parameters
infer_params = InferParams(
    seed=0,
    standardize=True,
    subtract_mu_reg=False,
    fix_mu_zero=True,
    prior_var_mu='auto',
    prior_scale='uniform',
    max_c=1.0,
    n_mc_samples=10000,
    dist_noise='laplace',
    df_indvdl=8.0,
    prior_indvdls=['t'],
    cs=[0.4, 0.6, 0.8],
    scale_coeff=2. / 3.,
    L_cov_21s=[-0.8, -0.6, -0.4, 0.4, 0.6, 0.8],
    betas_indvdl=None, # [0.25, 0.5, 0.75, 1.],
    betas_noise=[0.25, 0.5, 1.0, 3.0],
    causalities=[[1, 2], [2, 1]],
    sampling_mode='cache_mp4'
)

# Regression-coefficient estimation parameters
mcmc_params = MCMCParams(
    n_burn=1000,
    n_mcmc_samples=1000
)
""" Explanation:
Baseline values of the experimental condition parameters
End of explanation
"""
def make_id(ix_trial, n_samples, n_confs, data_noise_type, model_noise_type,
            L_cov_21s, totalnoise, b21_dist):
    L_cov_21s_ = ' '.join([str(v) for v in L_cov_21s])
    return hashlib.md5(
        str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type,
             model_noise_type, totalnoise, b21_dist.replace(' ', ''))).encode('utf-8')
    ).hexdigest()

# Test
print(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)'))
print(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)')) # Whitespace is ignored
""" Explanation:
The program
Generating trial identifiers
We generate an identifier for each trial from the following information:
Trial index (ix_trial)
Number of samples (n_samples)
Number of confounders (n_confs)
Type of observation noise in the artificial data (data_noise_type)
Type of observation noise in the predictive model (model_noise_type)
Correlation coefficient of the confounders (L_cov_21s)
Total noise scale (totalnoise)
Distribution of the regression coefficient (b21_dist)
The trial identifier is used when storing estimation results in the dataframe.
End of explanation
"""
def add_result_to_df(df, result):
    if df is None:
        return pd.DataFrame({k: [v] for k, v in result.items()})
    else:
        return df.append(result, ignore_index=True)

# Test
result1 = {'col1': 10, 'col2': 20}
result2 = {'col1': 30, 'col2': -10}

df1 = add_result_to_df(None, result1)
print('--- df1 ---')
print(df1)

df2 = add_result_to_df(df1, result2)
print('--- df2 ---')
print(df2)
""" Explanation:
Adding trial results to the dataframe
A trial result is appended to the dataframe. If the argument df is None, a new dataframe is created.
End of explanation
"""
def df_exist_result_id(df, result_id):
    if df is not None:
        return result_id in np.array(df['result_id'])
    else:
        return False
""" Explanation:
Checking for a trial identifier in the dataframe
This is used to avoid recomputing results that have already been computed.
End of explanation
"""
def load_df(df_file):
    if os.path.exists(df_file):
        return load_pklz(df_file)
    else:
        return None

def save_df(df_file, df):
    save_pklz(df_file, df)
""" Explanation:
Getting the dataframe
We define functions that save and load the dataframe. If the file does not exist, None is returned.
End of explanation
"""
def _estimate_hparams(xs, infer_params):
    assert(type(infer_params) == InferParams)

    sampling_mode = infer_params.sampling_mode
    hparamss = define_hparam_searchspace(infer_params)
    results = find_best_model(xs, hparamss, sampling_mode)
    hparams_best = results[0]
    bf = results[2] - results[5] # Bayes factor

    return hparams_best, bf

def run_trial(ix_trial, cond):
    # Generate the artificial data
    data = gen_artificial_data_given_cond(ix_trial, cond)
    b_true = data['b']
    causality_true = data['causality_true']

    # Causal inference
    t = time.time()
    infer_params.L_cov_21s = cond['L_cov_21s']
    infer_params.dist_noise = cond['model_noise_type']
    hparams, bf = _estimate_hparams(data['xs'], infer_params)
    causality_est = hparams['causality']
    time_causal_inference = time.time() - t

    # Estimate the regression coefficient
    t = time.time()
    trace = do_mcmc_bmlingam(data['xs'], hparams, mcmc_params)
    b_post = np.mean(trace['b'])
    time_posterior_inference = time.time() - t

    return {
        'causality_true': causality_true,
        'regcoef_true': b_true,
        'n_samples': cond['n_samples'],
        'n_confs': cond['n_confs'],
        'data_noise_type': cond['data_noise_type'],
        'model_noise_type': cond['model_noise_type'],
        'causality_est': causality_est,
        'correct_rate': (1.0 if causality_est == causality_true else 0.0),
        'error_reg_coef': np.abs(b_post - b_true),
        'regcoef_est': b_post,
        'log_bf': 2 * bf, # This is 2*log(p(M) / p(M_rev)), so it is always positive.
        'time_causal_inference': time_causal_inference,
        'time_posterior_inference': time_posterior_inference,
        'L_cov_21s': str(cond['L_cov_21s']),
        'n_mc_samples': infer_params.n_mc_samples,
        'confs_absmean': np.mean(np.abs(data['confs'].ravel())),
        'totalnoise': cond['totalnoise']
    }
""" Explanation:
Running a trial
Given a trial index and an experimental condition, this runs one trial and returns the estimation results.
End of explanation
"""
def run_expr(conds):
    # Dataframe file name
    data_dir = '.'
    df_file = data_dir + '/20160902-eval-bml-results.pklz'

    # If the file exists, resume from where the previous run left off.
    df = load_df(df_file)

    # Loop over experimental conditions
    n_skip = 0
    for cond in conds:
        print(cond)

        # Loop over trials
        for ix_trial in range(n_trials_per_cond):
            # Identifier
            result_id = make_id(ix_trial, **cond)

            # Check whether the result is already stored in the dataframe.
            if df_exist_result_id(df, result_id):
                n_skip += 1
            else:
                # result is a dict containing the results of the trial.
                # The trial index ix_trial is used as the random seed.
                result = run_trial(ix_trial, cond)
                result.update({'result_id': result_id})
                df = add_result_to_df(df, result)
                save_df(df_file, df)

    print('Number of skipped trials = {}'.format(n_skip))

    return df
""" Explanation:
Main program
End of explanation
"""
df = run_expr(conds)
""" Explanation:
Running the main program
End of explanation
"""
import pandas as pd

# Dataframe file name
data_dir = '.'
df_file = data_dir + '/20160902-eval-bml-results.pklz'
df = load_pklz(df_file)

sg = df.groupby(['model_noise_type', 'data_noise_type', 'n_confs', 'totalnoise'])
sg1 = sg['correct_rate'].mean()
sg2 = sg['correct_rate'].count()
sg3 = sg['time_causal_inference'].mean()

pd.concat(
    {
        'correct_rate': sg1,
        'count': sg2,
        'time': sg3,
    },
    axis=1
)
""" Explanation:
Checking the results
End of explanation
"""
data = np.array(df[['regcoef_true', 'log_bf', 'totalnoise', 'correct_rate']])
ixs1 = np.where(data[:, 3] == 1.0)[0]
ixs2 = np.where(data[:, 3] == 0.0)[0]

plt.scatter(data[ixs1, 1], np.abs(data[ixs1, 0]), marker='o', s=20, c='r', label='Success')
plt.scatter(data[ixs2, 1], np.abs(data[ixs2, 0]), marker='*', s=70, c='b', label='Failure')
plt.ylabel('|b|')
plt.xlabel('2 * log(bayes_factor)')
plt.legend(fontsize=15, loc=4, shadow=True, frameon=True, framealpha=1.0)
""" Explanation:
Magnitude of the regression coefficient and the Bayes factor
We plot $2\log(BF)$ on the horizontal axis and $|b_{21}|$ (or $|b_{12}|$) on the vertical axis. When $2\log(BF)$ is 10 or more, the absolute value of the true regression coefficient is large and we can say there is a causal effect; below that, however, the regression coefficient can be either large or small, so it seems difficult to judge from the BF alone whether a causal effect is present. A comparison against a model with no causal effect is probably also needed.
End of explanation
"""
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']])
ixs1 = np.where(data[:, 2] == 1)[0]
ixs2 = np.where(data[:, 2] == 0)[0]
assert(len(ixs1) + len(ixs2) == len(data))

plt.figure(figsize=(5, 5))
plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct')
plt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect')
plt.plot([-3, 3], [-3, 3])
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.gca().set_aspect('equal')
plt.xlabel('Reg coef (true)')
plt.ylabel('Reg coef (posterior mean)')
""" Explanation:
Prediction accuracy of the regression coefficient
For the experiment in which the regression coefficient of the artificial data is generated from U(-1.5, 1.5), we plot the true regression coefficient on the horizontal axis and its posterior mean on the vertical axis. When the true value is small, the posterior mean is small regardless of whether the causal direction prediction is correct (red) or incorrect (blue). On the other hand, the regression coefficient tends to be estimated as small when the prediction is incorrect.
End of explanation
"""
data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']])
ixs1 = np.where(data[:, 2] == 1)[0]
ixs2 = np.where(data[:, 2] == 0)[0]
assert(len(ixs1) + len(ixs2) == len(data))

plt.figure(figsize=(5, 5))
plt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct')
plt.plot([-3, 3], [-3, 3])
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.gca().set_aspect('equal')
plt.xlabel('Reg coef (true)')
plt.ylabel('Reg coef (posterior mean)')
plt.title('Correct inference')
plt.savefig('20160905-eval-bml-correct.eps')

plt.figure(figsize=(5, 5))
plt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect')
plt.plot([-3, 3], [-3, 3])
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.gca().set_aspect('equal')
plt.xlabel('Reg coef (true)')
plt.ylabel('Reg coef (posterior mean)')
plt.title('Incorrect inference')
plt.savefig('20160905-eval-bml-incorrect.eps')
""" Explanation:
Export to EPS
End of explanation
"""
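The 2·log(BF) values analyzed above can be read off against the conventional Kass–Raftery scale (the "10 or more" threshold mentioned earlier corresponds to its "very strong" band). The following is an illustrative sketch with our own function names, not part of bmlingam; it assumes the Bayes factor is formed from two log marginal likelihoods:

```python
def twice_log_bayes_factor(log_evidence_best, log_evidence_alt):
    # 2 * log( p(D | M_best) / p(D | M_alt) ); positive when M_best wins
    return 2.0 * (log_evidence_best - log_evidence_alt)

def interpret(two_log_bf):
    # Conventional Kass & Raftery (1995) evidence bands for 2*log(BF)
    if two_log_bf < 2:
        return "not worth more than a bare mention"
    if two_log_bf < 6:
        return "positive"
    if two_log_bf < 10:
        return "strong"
    return "very strong"

print(interpret(twice_log_bayes_factor(-1000.0, -1006.0)))  # -> very strong
```

A difference of 6 nats in log evidence already lands in the "very strong" band, which is why most points with small |b| cluster at small 2·log(BF) in the scatter plot above.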
UV-CDAT/tutorials
graphics/ParallelCoordinates.ipynb
bsd-2-clause
import os          # for building file paths
import vcs         # For plots
import vcsaddons   # module containing pcoords
import cdms2       # for data
import glob        # to list files in directories
import pcmdi_metrics  # for special json loader class
""" Explanation:
import necessary modules
End of explanation
"""
import tempfile
import base64

class VCSAddonsNotebook(object):
    def __init__(self, x):
        self.x = x
    def _repr_png_(self):
        fnm = tempfile.mktemp()+".png"
        self.x.png(fnm)
        encoded = base64.b64encode(open(fnm, "rb").read())
        return encoded
    def __call__(self):
        return self
""" Explanation:
Work around to visualize plots in a Jupyter Notebook
This class allows us to use vcsaddons plots
End of explanation
"""
# Prepare list of json files
# Location on your computer
json_pth = "/git/pcmdi_metrics/test/graphics"
# Generate list of json files
json_files = glob.glob(
    os.path.join(
        json_pth,
        "json",
        "v2.0",
        "*.json"))
json_files += glob.glob(
    os.path.join(
        json_pth,
        "json",
        "v1.0",
        "*.json"))

# Read them in via pmp special json class
J = pcmdi_metrics.pcmdi.io.JSONs(json_files)

# Retrieve data we need for plot
# Annual mean RMS (XYT dimensions)
# All models and all variables
rms_xyt = J(statistic=["rms_xyt"],season=["ann"],region="global")(squeeze=1)
""" Explanation:
Sample Data
These files are in the test directory of the pcmdi_metrics repo at:
http://github.com/PCMDI/pcmdi_metrics.git
End of explanation
"""
rms_xyt.info()

# Ok now let's create a VCS pcoord graphic method
# initialize a canvas
x=vcs.init(geometry=(1200,600),bg=True)
import vcsaddons
gm = vcsaddons.createparallelcoordinates(x=x)
""" Explanation:
Let's take a look at the array generated
Note the axes are strings of the variables used and the models
The order of the axes is the order on the plot
End of explanation
"""
# Prepare the graphics
# Set variable name
rms_xyt.id = "RMS"

# Set units of each variable on axis
# This is a trick to have units listed on plot
rms_xyt.getAxis(-2).units = ["mm/day","mm/day","hPa","W/m2","W/m2","W/m2",
                             "K","K","K","m/s","m/s","m/s","m/s","m"]

# Sets title on the variable
rms_xyt.title = "Annual Mean Error"

# Prepare the canvas areas
t = vcs.createtemplate()

# Create a text orientation object for xlabels
to=x.createtextorientation()
to.angle=-45
to.halign="right"

# Tell template to use this orientation for x labels
t.xlabel1.textorientation = to.name

# Define area where plot will be drawn in x direction
t.reset('x',0.05,0.9,t.data.x1,t.data.x2)

ln = vcs.createline()
# Turn off box around legend
ln.color = [[0,0,0,0]]
t.legend.line = ln

# turn off box around data area
t.box1.priority=0

# Define box where legend will be drawn
t.legend.x1 = .91
t.legend.x2 = .99
# use x/y of data drawn for legend height
t.legend.y1 = t.data.y1
t.legend.y2 = t.data.y2

# Plot with default values of graphic method
# Bug: vcsaddons needs to return a display;
# as a result it does not show up in the notebook
x.clear()
show = VCSAddonsNotebook(x)
gm.plot(rms_xyt,template=t,bg=True)
show()
""" Explanation:
Preparing the plot
Data
'id' is used for the variable in the plot
the JSON class returns the var as "pmp"; here "RMS" is more appropriate
'title' is used to draw the plot title (location/font controlled by template)
Template
The template section prepares where data will be rendered on the plot, and the fonts used
fonts are controlled via textorientation and texttable VCS primary objects
Here we need to angle the xlabels a bit (45 degrees)
We also want to turn off the boxes around the legend and the data area.
End of explanation
"""
x.clear()
gm.linecolors = ["blue","red","grey"]
gm.linestyles=["solid","solid","dot"]
gm.linewidths=[5.,5.,1.]
gm.markercolors = ["blue","red","grey"] gm.markertypes=["triangle_up","star","dot"] gm.markersizes=[7,5,2] gm.plot(rms_xyt,template=t,bg=True) show() # change order and number of models and variables axes = rms_xyt.getAxisList() models = ['MIROC4h', 'HadGEM2-AO', 'GFDL-ESM2M', 'GFDL-ESM2G', 'GFDL-CM3', 'FGOALS-g2', 'CSIRO-Mk3-6-0', 'CESM1-WACCM', 'CESM1-FASTCHEM', 'CESM1-CAM5', 'CESM1-BGC', 'CCSM4', 'ACCESS1-3', 'ACCESS1-0', '0071-0100'] # invert them variables = ['prw', 'psl', 'rltcre', 'rlut', 'rstcre', 'ta-200', 'ta-850', 'tas', 'ua-850', 'va-850', 'zg-500'] rms_xyt = J(statistic=["rms_xyt"],season=["ann"],region="global",model=models,variable=variables)(squeeze=1) x.clear() gm.plot(rms_xyt,template=t,bg=True) show() """ Explanation: Control various aspects of the graphic method We want the first two model to be 'blue' and 'red' and a bit thicker All other plots will be 'grey' and 'dashed' End of explanation """
timothyb0912/pylogit
examples/.ipynb_checkpoints/Main PyLogit Example-checkpoint.ipynb
bsd-3-clause
from collections import OrderedDict    # For recording the model specification
import pandas as pd                    # For file input/output
import numpy as np                     # For vectorized math operations

import pylogit as pl                   # For MNL model estimation and
                                       # conversion from wide to long format
""" Explanation:
pyLogit Example
The purpose of this notebook is to demonstrate the key functionalities of pyLogit:
<ol>
<li> Converting data between 'wide' and 'long' formats. </li>
<li> Estimating conditional logit models. </li>
</ol>
The dataset being used for this example is the "Swissmetro" dataset used in the Python Biogeme examples. The data can be downloaded at <a href="http://biogeme.epfl.ch/examples_swissmetro.html">http://biogeme.epfl.ch/examples_swissmetro.html</a>, and a detailed explanation of the variables and data-collection procedure can be found at http://www.strc.ch/conferences/2001/bierlaire1.pdf.
Relevant information about this dataset is that it is from a stated preference survey about whether or not individuals would use a new underground Magnetic-Levitation train system called the Swissmetro. The overall set of possible choices in this dataset was "Train", "Swissmetro", and "Car." However, the choice set faced by each individual is <strong><u>not</u></strong> constant. An individual's choice set was partially based on the alternatives that he/she was capable of using at the moment. For instance, people who did not own cars did not receive a stated preference question where car was an alternative that they could choose. Note that because the choice set varies across choice situations, mlogit and statsmodels could not be used with this dataset. Also, each individual responded to multiple choice situations. Thus the choice observations are not truly independent of all other choice observations (they are correlated across choices made by the same individual). However, for the purposes of this example, the effect of repeat-observations on the typical i.i.d. assumptions will be ignored.
Based on the Swissmetro data, we will build a travel mode choice model for individuals who are commuting or going on a business trip.
End of explanation
"""
# Note that the .dat files used by python biogeme are tab delimited text files
wide_swiss_metro = pd.read_table("../data/swissmetro.dat", sep="\t")

# Select observations whose choice is known (i.e. CHOICE != 0)
# **AND** whose PURPOSE is either 1 (commute) or 3 (business)
include_criteria = (wide_swiss_metro.PURPOSE.isin([1, 3]) &
                    (wide_swiss_metro.CHOICE != 0))
# Note that the .copy() ensures that any later changes are made
# to a copy of the data and not to the original data
wide_swiss_metro = wide_swiss_metro.loc[include_criteria].copy()

# Look at the first 5 rows of the data
wide_swiss_metro.head().T
""" Explanation:
Load and filter the raw Swiss Metro data
End of explanation
"""
# Look at the columns of the swiss metro dataset
wide_swiss_metro.columns

# Create the list of individual specific variables
ind_variables = wide_swiss_metro.columns.tolist()[:15]

# Specify the variables that vary across individuals and some or all alternatives
# The keys are the column names that will be used in the long format dataframe.
# The values are dictionaries whose key-value pairs are the alternative id and
# the column name of the corresponding column that encodes that variable for
# the given alternative. Examples below.
alt_varying_variables = {u'travel_time': dict([(1, 'TRAIN_TT'),
                                               (2, 'SM_TT'),
                                               (3, 'CAR_TT')]),
                         u'travel_cost': dict([(1, 'TRAIN_CO'),
                                               (2, 'SM_CO'),
                                               (3, 'CAR_CO')]),
                         u'headway': dict([(1, 'TRAIN_HE'),
                                           (2, 'SM_HE')]),
                         u'seat_configuration': dict([(2, "SM_SEATS")])}

# Specify the availability variables
# Note that the keys of the dictionary are the alternative id's.
# The values are the columns denoting the availability for the
# given mode in the dataset.
availability_variables = {1: 'TRAIN_AV',
                          2: 'SM_AV',
                          3: 'CAR_AV'}

##########
# Determine the columns for: alternative ids, the observation ids and the choice
##########
# The 'custom_alt_id' is the name of a column to be created in the long-format data
# It will identify the alternative associated with each row.
custom_alt_id = "mode_id"

# Create a custom id column that ignores the fact that this is a
# panel/repeated-observations dataset. Note the +1 ensures the id's start at one.
obs_id_column = "custom_id"
wide_swiss_metro[obs_id_column] = np.arange(wide_swiss_metro.shape[0],
                                            dtype=int) + 1

# Create a variable recording the choice column
choice_column = "CHOICE"

# Perform the conversion to long-format
long_swiss_metro = pl.convert_wide_to_long(wide_swiss_metro,
                                           ind_variables,
                                           alt_varying_variables,
                                           availability_variables,
                                           obs_id_column,
                                           choice_column,
                                           new_alt_id_name=custom_alt_id)
# Look at the resulting long-format dataframe
long_swiss_metro.head(10).T
""" Explanation:
Convert the Swissmetro data to "Long Format"
pyLogit only estimates models using data that is in "long" format. Long format has 1 row per individual per available alternative, and wide format has 1 row per individual or observation. Long format is useful because it permits one to directly use matrix dot products to calculate the index, $V_{ij} = x_{ij} \beta$, for each individual $\left(i \right)$ for each alternative $\left(j \right)$. In applications where one creates one's own dataset, the dataset can usually be created in long format from the very beginning. However, in situations where a dataset is provided to you in wide format (as in the case of the Swiss Metro dataset), it will be necessary to convert the data from wide format to long format.
To convert the raw swiss metro data to long format, we need to specify: <ol> <li>the variables or columns that are specific to a given individual, regardless of what alternative is being considered (note: every row is being treated as a separate observation, even though each individual gave multiple responses in this stated preference survey)</li> <li>the variables that vary across some or all alternatives, for a given individual (e.g. travel time)</li> <li>the availability variables</li> <li>the <u>unique</u> observation id column. (Note this dataset has an observation id column, but for the purposes of this example we don't want to consider the repeated observations of each person as being related. We therefore want a identifying column that gives an id to every response of every individual instead of to every individual).</li> <li>the choice column</li> </ol> <br>The cells below will identify these various columns, give them names in the long-format data, and perform the necessary conversion. End of explanation """ ########## # Create scaled variables so the estimated coefficients are of similar magnitudes ########## # Scale the travel time column by 60 to convert raw units (minutes) to hours long_swiss_metro["travel_time_hrs"] = long_swiss_metro["travel_time"] / 60.0 # Scale the headway column by 60 to convert raw units (minutes) to hours long_swiss_metro["headway_hrs"] = long_swiss_metro["headway"] / 60.0 # Figure out who doesn't incur a marginal cost for the ticket # This can be because he/she owns an annial season pass (GA == 1) # or because his/her employer pays for the ticket (WHO == 2). # Note that all the other complexity in figuring out ticket costs # have been accounted for except the GA pass (the annual season # ticket). 
# Make sure this dummy variable is only equal to 1 for
# the rows with the Train or Swissmetro
long_swiss_metro["free_ticket"] = (((long_swiss_metro["GA"] == 1) |
                                    (long_swiss_metro["WHO"] == 2)) &
                                   long_swiss_metro[custom_alt_id].isin([1, 2])).astype(int)

# Scale the travel cost by 100 so estimated coefficients are of similar magnitude
# and account for ownership of a season pass
long_swiss_metro["travel_cost_hundreth"] = (long_swiss_metro["travel_cost"] *
                                            (long_swiss_metro["free_ticket"] == 0) /
                                            100.0)

##########
# Create various dummy variables to describe the choice context of a given
# individual for each choice task.
##########
# Create a dummy variable for whether a person has a single piece of luggage
long_swiss_metro["single_luggage_piece"] = (long_swiss_metro["LUGGAGE"] == 1).astype(int)

# Create a dummy variable for whether a person has multiple pieces of luggage
long_swiss_metro["multiple_luggage_pieces"] = (long_swiss_metro["LUGGAGE"] == 3).astype(int)

# Create a dummy variable indicating that a person is NOT first class
long_swiss_metro["regular_class"] = 1 - long_swiss_metro["FIRST"]

# Create a dummy variable indicating that the survey was taken aboard a train.
# Note that such passengers are a-priori imagined to be somewhat partial to train modes
long_swiss_metro["train_survey"] = 1 - long_swiss_metro["SURVEY"]
"""
Explanation: Perform desired variable creations and transformations
Before estimating a model, one needs to pre-compute all of the variables that one wants to use. This is different from the functionality of other packages such as mlogit or statsmodels that use formula strings to create new variables "on-the-fly." This is also somewhat different from Python Biogeme, where new variables can be defined in the script but not actually created by the user before model estimation. pyLogit does not perform variable creation. It only estimates models using variables that already exist.
Below, we pre-compute the variables needed for this example's model:
<ol>
<li> Travel time in hours instead of minutes. </li>
<li> Travel cost in units of 0.01 CHF (Swiss francs) instead of CHF, for ease of numeric optimization. </li>
<li> Travel cost interacted with a variable that identifies individuals who own a season pass (and therefore have no marginal cost of traveling on the trip) or whose employer will pay for their commute/business trip. </li>
<li> A dummy variable for traveling with a single piece of luggage. </li>
<li> A dummy variable for traveling with multiple pieces of luggage. </li>
<li> A dummy variable denoting whether an individual is traveling first class. </li>
<li> A dummy variable indicating whether an individual took their survey on-board a train (since it is a-priori expected that these individuals are already willing to take a train or train-like service such as Swissmetro).</li>
</ol>
End of explanation
"""

# NOTE: - Specification and variable names must be ordered dictionaries.
#       - Keys should be variables within the long format dataframe.
#         The sole exception to this is the "intercept" key.
#       - For the specification dictionary, the values should be lists
#         of integers or lists of lists of integers. Within a list,
#         or within the inner-most list, the integers should be the
#         alternative ID's of the alternatives whose utility specifications
#         the explanatory variable is entering. Lists of lists denote
#         alternatives that will share a common coefficient for the variable
#         in question.
basic_specification = OrderedDict()
basic_names = OrderedDict()

basic_specification["intercept"] = [1, 2]
basic_names["intercept"] = ['ASC Train', 'ASC Swissmetro']

basic_specification["travel_time_hrs"] = [[1, 2,], 3]
basic_names["travel_time_hrs"] = ['Travel Time, units:hrs (Train and Swissmetro)',
                                  'Travel Time, units:hrs (Car)']

basic_specification["travel_cost_hundreth"] = [1, 2, 3]
basic_names["travel_cost_hundreth"] = ['Travel Cost * (Annual Pass == 0), units: 0.01 CHF (Train)',
                                       'Travel Cost * (Annual Pass == 0), units: 0.01 CHF (Swissmetro)',
                                       'Travel Cost, units: 0.01 CHF (Car)']

basic_specification["headway_hrs"] = [1, 2]
basic_names["headway_hrs"] = ["Headway, units:hrs, (Train)",
                              "Headway, units:hrs, (Swissmetro)"]

basic_specification["seat_configuration"] = [2]
basic_names["seat_configuration"] = ['Airline Seat Configuration, base=No (Swissmetro)']

basic_specification["train_survey"] = [[1, 2]]
basic_names["train_survey"] = ["Surveyed on a Train, base=No, (Train and Swissmetro)"]

basic_specification["regular_class"] = [1]
basic_names["regular_class"] = ["First Class == False, (Swissmetro)"]

basic_specification["single_luggage_piece"] = [3]
basic_names["single_luggage_piece"] = ["Number of Luggage Pieces == 1, (Car)"]

basic_specification["multiple_luggage_pieces"] = [3]
basic_names["multiple_luggage_pieces"] = ["Number of Luggage Pieces > 1, (Car)"]
"""
Explanation: Create the model specification
The model specification being used in this example is the following:
$$
\begin{aligned}
V_{i, \textrm{Train}} &= \textrm{ASC Train} + \\
&\quad \beta_{\textrm{tt\_transit}} \textrm{Travel Time}_{\textrm{Train}} * \frac{1}{60} + \\
&\quad \beta_{\textrm{tc\_train}} \textrm{Travel Cost}_{\textrm{Train}} * \left( GA == 0 \right) * 0.01 + \\
&\quad \beta_{\textrm{headway\_train}} \textrm{Headway}_{\textrm{Train}} * \frac{1}{60} + \\
&\quad \beta_{\textrm{survey}} \left( \textrm{Train Survey} == 1 \right) \\
\\
V_{i, \textrm{Swissmetro}} &= \textrm{ASC Swissmetro} + \\
&\quad \beta_{\textrm{tt\_transit}} \textrm{Travel Time}_{\textrm{Swissmetro}} * \frac{1}{60} + \\
&\quad \beta_{\textrm{tc\_sm}} \textrm{Travel Cost}_{\textrm{Swissmetro}} * \left( GA == 0 \right) * 0.01 + \\
&\quad \beta_{\textrm{headway\_sm}} \textrm{Headway}_{\textrm{Swissmetro}} * \frac{1}{60} + \\
&\quad \beta_{\textrm{seat}} \left( \textrm{Seat Configuration} == 1 \right) + \\
&\quad \beta_{\textrm{survey}} \left( \textrm{Train Survey} == 1 \right) + \\
&\quad \beta_{\textrm{first\_class}} \left( \textrm{First Class} == 0 \right) \\
\\
V_{i, \textrm{Car}} &= \beta_{\textrm{tt\_car}} \textrm{Travel Time}_{\textrm{Car}} * \frac{1}{60} + \\
&\quad \beta_{\textrm{tc\_car}} \textrm{Travel Cost}_{\textrm{Car}} * 0.01 + \\
&\quad \beta_{\textrm{luggage}=1} \left( \textrm{Luggage} == 1 \right) + \\
&\quad \beta_{\textrm{luggage}>1} \left( \textrm{Luggage} > 1 \right)
\end{aligned}
$$
Note that packages such as mlogit and statsmodels do not, by default, handle coefficients that vary over some alternatives but not all, such as the travel time coefficient that is specified as being the same for "Train" and "Swissmetro" but different for "Car."
End of explanation
"""

# Estimate the multinomial logit model (MNL)
swissmetro_mnl = pl.create_choice_model(data=long_swiss_metro,
                                        alt_id_col=custom_alt_id,
                                        obs_id_col=obs_id_column,
                                        choice_col=choice_column,
                                        specification=basic_specification,
                                        model_type="MNL",
                                        names=basic_names)

# Specify the initial values and method for the optimization.
swissmetro_mnl.fit_mle(np.zeros(14))

# Look at the estimation results
swissmetro_mnl.get_statsmodels_summary()
"""
Explanation: Estimate the conditional logit model
End of explanation
"""

# Look at all the other results at the same time
swissmetro_mnl.print_summaries()

# Look at the general and goodness-of-fit statistics
swissmetro_mnl.fit_summary

# Look at the parameter estimation results, and round the results for easy viewing
np.round(swissmetro_mnl.summary, 3)
"""
Explanation: View results without using statsmodels summary table
You can view all of the results simply by using print_summaries(). This will simply print the various summary dataframes.
End of explanation
"""
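As a small numerical aside (not part of the pyLogit output above), the probabilities an MNL model implies can be computed directly from the systematic utilities via the logit formula $P_{ij} = e^{V_{ij}} / \sum_k e^{V_{ik}}$. The utility values below are invented purely for illustration; they are not estimates from the Swissmetro model:

```python
import numpy as np

# Illustrative sketch of the MNL formula P_ij = exp(V_ij) / sum_k exp(V_ik).
# The utility values are made up; they are not Swissmetro estimates.
V = np.array([[0.5, 1.2, -0.3],   # one row per observation,
              [2.0, 0.1,  0.1]])  # one column per alternative
expV = np.exp(V - V.max(axis=1, keepdims=True))  # stabilized exponentials
P = expV / expV.sum(axis=1, keepdims=True)
print(P.round(3))  # each row sums to 1
```

This is exactly the quantity that maximum likelihood estimation (fit_mle above) tunes the coefficients to match against the observed choices.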
ChadFulton/statsmodels
examples/notebooks/statespace_arma_0.ipynb
bsd-3-clause
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
"""
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
This notebook replicates the existing ARMA notebook using the statsmodels.tsa.statespace.SARIMAX class rather than the statsmodels.tsa.ARMA class.
End of explanation
"""

print(sm.datasets.sunspots.NOTE)

dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]

dta.plot(figsize=(12,4));

fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)

arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(arma_mod20.params)

arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
"""
Explanation: Sunspots Data
End of explanation
"""

sm.stats.durbin_watson(arma_mod30.resid)

fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)

resid = arma_mod30.resid
stats.normaltest(resid)

fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)

fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)

r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
"""
Explanation: Does our model obey the theory?
End of explanation """ predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True) fig, ax = plt.subplots(figsize=(12, 8)) dta.loc['1950':].plot(ax=ax) predict_sunspots.plot(ax=ax, style='r'); def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots) """ Explanation: This indicates a lack of fit. In-sample dynamic prediction. How good does our model do? End of explanation """
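As a self-contained aside (synthetic data rather than the sunspots series), the AR-estimation idea behind the models above can be sketched with plain numpy: simulate an AR(2) process with known coefficients and recover them by least squares on the two lagged values. All numbers here are invented for illustration.

```python
import numpy as np

# Synthetic sketch, independent of the sunspots data: simulate
# y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + e_t and recover the coefficients
# by ordinary least squares on the lagged values.
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + rng.standard_normal()

X = np.column_stack([y[1:-1], y[:-2]])       # lag-1 and lag-2 columns
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(coef.round(2))  # close to the true values [0.6, -0.2]
```

SARIMAX does this (and much more) by maximum likelihood in state-space form, but the least-squares version makes clear what "fitting an AR(p)" means.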
penguinmenac3/ml-notebooks
Machine Learning Basics with Sklearn.ipynb
gpl-3.0
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Machine Learning Basics with Sklearn
First some imports for the notebook and visualization.
End of explanation
"""

from sklearn.datasets import load_iris
iris = load_iris()
"""
Explanation: Choosing a dataset
First of all you need a dataset to work on. To keep things simple we will use the iris dataset provided with scikit-learn.
End of explanation
"""

test_idx = [0, 50, 100]

train_y = np.delete(iris.target, test_idx)
train_X = np.delete(iris.data, test_idx, axis=0)

test_y = iris.target[test_idx]
test_X = iris.data[test_idx]
"""
Explanation: Splitting the dataset
The dataset needs to be split into a training and a test dataset. This is done so we can first train our model and then test how good it is on data it has never seen before.
End of explanation
"""

from sklearn import tree
clf = tree.DecisionTreeClassifier()
clf = clf.fit(train_X, train_y)
"""
Explanation: Decision Tree Classifier
We will use a simple decision tree as the first model and train it.
End of explanation
"""

from sklearn.externals.six import StringIO
import pydot
import matplotlib.image as mpimg

dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data,
                     feature_names=iris.feature_names,
                     class_names=iris.target_names,
                     filled=True, rounded=True, impurity=False)
pydot_graph = pydot.graph_from_dot_data(dot_data.getvalue())
png_str = pydot_graph.create_png(prog='dot')

# treat the dot output string as an image file
sio = StringIO()
sio.write(png_str)
sio.seek(0)
img = mpimg.imread(sio)

# plot the image
f, axes = plt.subplots(1, 1, figsize=(12,12))
imgplot = axes.imshow(img, aspect='equal')
plt.show()
"""
Explanation: Visualize the decision tree
Since the decision tree is a simple graph, we can visualize it quite simply.
End of explanation
"""

from sklearn.metrics import accuracy_score
print(accuracy_score(test_y, clf.predict(test_X)))
"""
Explanation: Evaluating the model
After the model is trained it has to be evaluated.
End of explanation """ from sklearn.neighbors import KNeighborsClassifier clf = KNeighborsClassifier() clf = clf.fit(train_X,train_y) print(accuracy_score(test_y, clf.predict(test_X))) """ Explanation: KNN-Classifier Let's try another classifier. Initialize, train, print error. End of explanation """ from scipy.spatial import distance class ScrappyKNN(object): def fit(self, X_train, y_train): self.X_train = X_train self.y_train = y_train return self def predict(self, X_test): predictions = [] for row in X_test: label = self.closest(row) predictions.append(label) return predictions def closest(self, row): best_dist = distance.euclidean(row, self.X_train[0]) best_index = 0 for i in range(1, len(self.X_train)): dist = distance.euclidean(row, self.X_train[i]) if dist < best_dist: best_dist = dist best_index = i return self.y_train[best_index] clf = ScrappyKNN() clf = clf.fit(train_X,train_y) print(accuracy_score(test_y, clf.predict(test_X))) """ Explanation: Implementing your own KNN Let's implement a simple knn classifier with k=1. We have to implement the fit method and the predict method. Then initialize, train and print error. End of explanation """
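A possible next step, not from the original notebook: the 1-NN idea in ScrappyKNN extends naturally to arbitrary k by taking a majority vote among the k closest training points. The sketch below is a simplified stand-in for sklearn's KNeighborsClassifier, with a tiny invented dataset:

```python
from collections import Counter
import numpy as np

# Toy extension of the 1-NN classifier above to k > 1 via majority vote.
# The data points and labels are invented for illustration.
def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(np.asarray(X_train) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[:k]                # indices of the k closest
    votes = Counter(np.asarray(y_train)[nearest])  # count labels among them
    return votes.most_common(1)[0][0]

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]
print(knn_predict(X, y, [0.2, 0.2], k=3))  # → 0
print(knn_predict(X, y, [5.2, 5.1], k=3))  # → 1
```

Larger k smooths the decision boundary at the cost of blurring small clusters, which is the main tuning trade-off of k-NN.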
WNoxchi/Kaukasos
FADL2/darknet_loss_PR.ipynb
mit
%matplotlib inline %reload_ext autoreload %autoreload 2 from pathlib import Path from fastai.conv_learner import * # from fastai.models import darknet """ Explanation: PR: Adding LogSoftmax layer to Darknet for Cross Entropy Loss Wayne Nixalo - 2018/4/24 0. Proposed Change; Setup Dataset is the fast.ai ImageNet sampleset. Jupyter kernel restarted between ImageNet learner runs due to model size. End of explanation """ import torch import torch.nn as nn import torch.nn.functional as F from fastai.layers import * ### <<<------ class ConvBN(nn.Module): "convolutional layer then batchnorm" def __init__(self, ch_in, ch_out, kernel_size = 3, stride=1, padding=0): super().__init__() self.conv = nn.Conv2d(ch_in, ch_out, kernel_size=kernel_size, stride=stride, padding=padding, bias=False) self.bn = nn.BatchNorm2d(ch_out, momentum=0.01) self.relu = nn.LeakyReLU(0.1, inplace=True) def forward(self, x): return self.relu(self.bn(self.conv(x))) class DarknetBlock(nn.Module): def __init__(self, ch_in): super().__init__() ch_hid = ch_in//2 self.conv1 = ConvBN(ch_in, ch_hid, kernel_size=1, stride=1, padding=0) self.conv2 = ConvBN(ch_hid, ch_in, kernel_size=3, stride=1, padding=1) def forward(self, x): return self.conv2(self.conv1(x)) + x class Darknet(nn.Module): "Replicates the darknet classifier from the YOLOv3 paper (table 1)" def make_group_layer(self, ch_in, num_blocks, stride=1): layers = [ConvBN(ch_in,ch_in*2,stride=stride)] for i in range(num_blocks): layers.append(DarknetBlock(ch_in*2)) return layers def __init__(self, num_blocks, num_classes=1000, start_nf=32): super().__init__() nf = start_nf layers = [ConvBN(3, nf, kernel_size=3, stride=1, padding=1)] for i,nb in enumerate(num_blocks): layers += self.make_group_layer(nf, nb, stride=(1 if i==1 else 2)) nf *= 2 layers += [nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(nf, num_classes)] self.layers = nn.Sequential(*layers) def forward(self, x): return self.layers(x) ######################## ### Proposed Version ### class 
PR_Darknet(nn.Module): "Replicates the darknet classifier from the YOLOv3 paper (table 1)" def make_group_layer(self, ch_in, num_blocks, stride=1): layers = [ConvBN(ch_in,ch_in*2,stride=stride)] for i in range(num_blocks): layers.append(DarknetBlock(ch_in*2)) return layers def __init__(self, num_blocks, num_classes=1000, start_nf=32): super().__init__() nf = start_nf layers = [ConvBN(3, nf, kernel_size=3, stride=1, padding=1)] for i,nb in enumerate(num_blocks): layers += self.make_group_layer(nf, nb, stride=(1 if i==1 else 2)) nf *= 2 layers += [nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(nf, num_classes)] layers += [nn.LogSoftmax()] ### <<<------ self.layers = nn.Sequential(*layers) def forward(self, x): return self.layers(x) ### /Proposed Version ### ######################## def darknet_53(num_classes=1000): return Darknet([1,2,8,8,4], num_classes) def darknet_small(num_classes=1000): return Darknet([1,2,4,8,4], num_classes) def darknet_mini(num_classes=1000): return Darknet([1,2,4,4,2], num_classes, start_nf=24) def darknet_mini2(num_classes=1000): return Darknet([1,2,8,8,4], num_classes, start_nf=16) def darknet_mini3(num_classes=1000): return Darknet([1,2,4,4], num_classes) # demonstrator def PR_darknet_53(num_classes=1000): return PR_Darknet([1,2,8,8,4], num_classes) def display_head(fastai_learner, λ_name=None, show_nums=False): """displays final conv block and network head.""" # parse if λ_name == None: λ_name='DarknetBlock' fastai_learner = fastai_learner[0] if show_nums: fastai_learner = str(fastai_learner).split('\n') n = len(fastai_learner) else: n = len(str(fastai_learner).split('\n')) j = 1 # find final conv block for i in range(n): if λ_name in str(fastai_learner[-j]): break j += 1 # print head & 'neck' for i in range(j): print(fastai_learner[i-j]) # don't mind the λ's.. 
l's alone look too much like 1's # It's easy to switch keyboards on a Mac or Windows (ctrl/win-space) # fn NOTE: the `learner[0]` for Darknet is the same as `learner` # for other models; hence the if/else logic to keep printouts neat # show_nums displays layer numbers - kinda PATH = Path('data/imagenet') sz = 256 bs = 32 tfms = tfms_from_stats(imagenet_stats, sz) model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='train') """ Explanation: Current Darknet, and Proposed Changes: NOTE: from .layers import * changed to from fastai.layers import *, preventing ModuleNotFoundError. End of explanation """ f_model = darknet_53() learner = ConvLearner.from_model_data(f_model, model_data) """ Explanation: 1. fastai Darknet Loss Function Darknet without LogSoftmax layer and with NLL loss End of explanation """ learner.crit """ Explanation: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss: End of explanation """ display_head(learner) """ Explanation: There is no final activation layer. The criterion will be applied to the final layer's output: End of explanation """ learner.lr_find() learner.sched.plot() """ Explanation: In this case the Learning Rate Finder 'fails' due to very small - often negative - loss values. End of explanation """ learner.lr_find() learner.sched.plot() """ Explanation: Sometimes the LR Finder manages to produce a plot, but the results leave much to be desired: End of explanation """ f_model = darknet_53() learner = ConvLearner.from_model_data(f_model, model_data, crit=F.cross_entropy) learner.crit """ Explanation: Darknet with Cross Entropy loss End of explanation """ display_head(learner) learner.lr_find() learner.sched.plot() """ Explanation: There is no final activation layer. The criterion will be applied to the final layer's output: End of explanation """ f_model = PR_darknet_53() learner = ConvLearner.from_model_data(f_model, model_data) """ Explanation: This is the shape of plot we expect to see. 
Proposal: Darknet with LogSoftmax layer and NLL loss (Cross Entropy loss) End of explanation """ learner.crit """ Explanation: fastai.conv_learner logic sets criterion to torch.nn.functional.nll_loss: End of explanation """ display_head(learner) learner.lr_find() learner.sched.plot() """ Explanation: However cross_entropy is NLL(LogSoftmax). The final layer is a LogSoftmax activation. The NLL criterion applied to its output will produce a Cross Entropy loss function. End of explanation """ from fastai.conv_learner import * PATH = Path('data/cifar10') sz = 64 # darknet53 architecture can't handle 32x32 small input bs = 64 tfms = tfms_from_stats(imagenet_stats, sz) model_data = ImageClassifierData.from_paths(PATH, bs=bs, tfms=tfms, val_name='test') """ Explanation: 2. Learner Comparison: ResNet18 & DarkNet53 In working on this, I found some behavior that seemed odd, but may be normal. The CIFAR-10 dataset from fast.ai will be used here. End of explanation """ from fastai.models import darknet """ Explanation: A version of darknet.py with the proposed changes above is used. End of explanation """ from torchvision.models import resnet18 resnet18 """ Explanation: Comparing resnet18 from PyTorch to resnet18 from FastAI: End of explanation """ darknet.darknet_53 """ Explanation: The fastai library does not alter the resnet18 model it imports from PyTorch. For comparison, the darknet53 import from fastai looks like this: End of explanation """ type(resnet18(num_classes=10)) type(darknet.darknet_53(num_classes=10)) """ Explanation: By contrast, the types of the initialized models: End of explanation """ f_model = resnet18() display_head(str(f_model), λ_name='BasicBlock', show_nums=True) """ Explanation: The PyTorch ResNet18 model has no output activation layer. 
End of explanation """ learner = ConvLearner.pretrained(resnet18, model_data) display_head(learner, λ_name='BasicBlock', show_nums=True) """ Explanation: When a learner is intialized via ConvLearner.pretrained, the fastai library adds a classifier head to the model via the ConvnetBuilder class. NOTE that the definition of the model is passed in, and not a model object. End of explanation """ learner.crit """ Explanation: the fastai library adds the necessary LogSoftmax layer to the end of the model NOTE: default constructor for resnet18 & darknet is 1000 classes (ImageNet). fastai lib finds the correct num_classes from the ModelData object. That's why the resnet18 model above has 1000 output features, and the resnet18 learner below it has the correct 10. The criterion, the loss function, of the learner is still F.nll_loss: End of explanation """ learner = ConvLearner.from_model_data(resnet18(num_classes=10), model_data) display_head(learner, λ_name='BasicBlock', show_nums=True) """ Explanation: But since the final layer is an nn.LogSoftmax, the effective loss function is Cross Entropy. NOTE that this does not happen when the learner is initalized via .from_model_data: End of explanation """ learner = ConvLearner.pretrained(resnet18, model_data) learner = ConvLearner.pretrained(resnet18(num_classes=10), model_data) """ Explanation: 'Strange/Normal' behavior: ConvLearner.pretrained will only accept model definitions, not models themselves: End of explanation """ learner = ConvLearner.pretrained(darknet.darknet_53, model_data) learner = ConvLearner.pretrained(darknet.darknet_53(num_classes=10), model_data) """ Explanation: However the current version of Darknet is not accepted by ConvLearner.pretrained at all. This makes sense, given that the model is not yet pretrained, but also suggests further work is needed to integrate the model into the library. End of explanation """ # Use this version of `display_head` if the other is too finicky for you. 
# NOTE: fastai learners other than darknet will have to be entered as: # [str(learner_or_model).split('\n')] def display_head(fastai_learner, λ_name=None): """displays final conv block and network head.""" n = len(fastai_learner[0]) if λ_name == None: λ_name='DarknetBlock' j = 1 # find final conv block for i in range(n): if λ_name in str(fastai_learner[0][-j]): break j += 1 # print head & 'neck' for i in range(j): print(fastai_learner[0][i-j]) # display_head(learner, λ_name='BasicBlock') display_head(learner1) #darknet learner print('--------') display_head([str(learner2).split('\n')], λ_name='BasicBlock') #resnet learner print('--------') display_head([str(f_model).split('\n')], λ_name='BasicBlock') #resnet model """ Explanation: The from_model_data method works, as seen in section 1. Misc End of explanation """
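The loss relationship this PR relies on — cross entropy equals NLL composed with log-softmax — can be sketched without any framework. The numbers below are invented, and this is a plain-numpy illustration rather than fastai/PyTorch code:

```python
import numpy as np

# Plain-numpy sketch of why the LogSoftmax layer matters:
# cross entropy == NLL applied to log-softmax output, while NLL applied
# to raw logits yields meaningless (here negative) "losses" -- the kind
# of values that broke the LR finder earlier. Logits are invented.
def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def nll(log_probs, targets):
    return -log_probs[np.arange(len(targets)), targets].mean()

logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 3.0,  0.2]])
targets = np.array([0, 1])
print(nll(logits, targets))               # raw logits: negative, not a loss
print(nll(log_softmax(logits), targets))  # proper cross entropy, positive
```

Appending the nn.LogSoftmax layer, as PR_Darknet does, is what turns the library's default F.nll_loss criterion into a true cross-entropy objective.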
sdpython/ensae_teaching_cs
_doc/notebooks/td2a_eco2/td2a_eco_5d_Travailler_du_texte_les_expressions_regulieres_correction.ipynb
mit
from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 2A.eco - Regular expressions: what are they for? (correction)
Searching for a word in a text is an easy task; that is the purpose of the find method attached to strings. It is still sufficient when searching for a word in the singular or the plural, but it must be called at least twice to look for both forms. For more complicated expressions, it is advisable to use regular expressions. This is a feature found in many languages. It is a form of grammar that makes it possible to search for expressions.
End of explanation
"""

s = """date 0 : 14/9/2000
date 1 : 20/04/1971
date 2 : 14/09/1913
date 3 : 2/3/1978
date 4 : 1/7/1986
date 5 : 7/3/47
date 6 : 15/10/1914
date 7 : 08/03/1941
date 8 : 8/1/1980
date 9 : 30/6/1976"""

"""
Explanation: When filling out a form, we often see the format "MM/DD/YYYY", which specifies the form in which a date is expected to be written. Regular expressions also make it possible to define this format and to search a text for every string that conforms to it. The list below contains birth dates. We want to extract all the dates from this example, knowing that days and months contain one or two digits, and years two or four.
The first digit of the day is 0, 1, 2, or 3; this translates to [0-3]. The second digit is between 0 and 9, i.e. [0-9]. The day format is therefore written [0-3][0-9]. But the first digit of the day is optional, which is indicated with the symbol ?: [0-3]?[0-9]. Months follow the same principle: [0-1]?[0-9].
For the years, it is the first two digits that are optional; the symbol ? applies to those two digits, which is indicated with parentheses: ([0-2][0-9])?[0-9][0-9]. The final format of a date becomes:
The re module handles regular expressions. It treats the parts of the regular expression that are between parentheses differently from those that are not: this is a way of telling the re module that we are interested in the part of the expression marked by parentheses. Since the part we are interested in — a date — covers the whole regular expression, the whole expression must be wrapped in parentheses. The first step is to build the regular expression; the second is to find every place where a piece of the string s defined above matches the regular expression.
End of explanation
"""

import re
# first step: build the expression
expression = re.compile("([0-3]?[0-9]/[0-1]?[0-9]/([0-2][0-9])?[0-9][0-9])")
# second step: search
res = expression.findall(s)
print(res)

"""
Explanation: The result is a list of pairs in which each element corresponds to the parts between parentheses, which are called groups. When using regular expressions, we must first ask how to define what we are looking for, and then which functions to use to obtain the results of that search. The two sections that follow answer these questions.
Syntax
The syntax of regular expressions is described on the official Python website. The page Regular Expression Syntax describes how to use regular expressions; both pages are in English. Like any grammar, that of regular expressions is likely to evolve over successive versions of the Python language.
Character sets
During a search we are interested in characters and often in character classes: we look for a digit, a letter, a character in a given set, or a character that does not belong to a given set. Some sets are predefined; others must be defined using square brackets.
To define a set of characters, write that set between square brackets: [0123456789] denotes a digit. Since it is a sequence of consecutive characters, this can be shortened to [0-9]. To include the symbols -, +, it is enough to write: [-0-9+]. Remember to put the symbol - at the beginning so that it does not denote a range.
The character ^ inserted at the beginning of the group means that the character sought must not be one of those that follow. The following table describes the predefined sets and their equivalent in terms of character sets:
. denotes any non-special character whatsoever.
\d denotes any digit, equivalent to [0-9].
\D denotes any character other than a digit, equivalent to [^0-9].
\s denotes any whitespace or related character, equivalent to [ \t\n\r\f\v]. These characters are special; the most commonly used are \t, which is a tab, \n, which is a line feed, and \r, which is a carriage return.
\S denotes any non-whitespace character, equivalent to [^ \t\n\r\f\v].
\w denotes any letter or digit, equivalent to [a-zA-Z0-9_].
\W denotes any character other than a letter or digit, equivalent to [^a-zA-Z0-9_].
^ denotes the beginning of a word unless it is placed between square brackets.
$ denotes the end of a word unless it is placed between square brackets.
As with strings, since the character \ is a special character, it must be doubled: [\\].
The character \ is already a special character in Python strings, so it must be quadrupled to insert it into a regular expression.
The following expression filters all the images whose extension is png and which are stored in an image directory.
End of explanation
"""

import re
s = "something\\support\\vba\\image/vbatd1_4.png"
print(re.compile("[\\\\/]image[\\\\/].*[.]png").search(s))  # positive result
print(re.compile("[\\\\/]image[\\\\/].*[.]png").search(s))  # same result

"""
Explanation: Multipliers
Multipliers make it possible to define regular expressions such as: a word of six to eight letters, which is written [\w]{6,8}. The following table gives the list of the main multipliers:
* the preceding character set appears between 0 times and infinity
+ the preceding character set appears between 1 time and infinity
? the preceding character set appears between 0 and 1 times
{m,n} the preceding character set appears between m and n times; if m=n, this expression can be shortened to {n}.
(?!(...)) absence of the group denoted by the dots.
The regular expression engine always tries to match the longest possible piece of text with the regular expression.
End of explanation
"""

"<h1>mot</h1>"

"""
Explanation: <.*> matches <h1>, </h1>, or even <h1>mot</h1>. Consequently, the regular expression matches three different pieces. By default, it takes the longest one. To pick the shortest ones instead, the multipliers must be written like this: *?, +?
End of explanation
"""

import re
s = "<h1>mot</h1>"
print(re.compile("(<.*>)").match(s).groups())   # ('<h1>mot</h1>',)
print(re.compile("(<.*?>)").match(s).groups())  # ('<h1>',)
print(re.compile("(<.+?>)").match(s).groups())  # ('<h1>',)

"""
Explanation: Exercise 1
Find the dates present in the following sentence
End of explanation
"""

texte = """Je suis né le 28/12/1903 et je suis mort le 08/02/1957.
Ma seconde femme est morte le 10/11/1963.
J'ai écrit un livre intitulé
'Comprendre les fractions : les exemples en page 12/46/83'
"""

import re
expression = re.compile("[0-9]{2}/[0-9]{2}/[0-9]{4}")
cherche = expression.findall(texte)
print(cherche)

"""
Explanation: Then in this one:
End of explanation
"""

texte = """Je suis né le 28/12/1903 et je suis mort le 08/02/1957.
Je me suis marié le 8/5/45.
J'ai écrit un livre intitulé 'Comprendre les fractions : les exemples en page 12/46/83'
"""

expression = re.compile("[0-3]?[0-9]/[0-1]?[0-9]/[0-1]?[0-9]?[0-9]{2}")
cherche = expression.findall(texte)
print(cherche)
"""
Explanation: Then in this one:
End of explanation
"""
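One possible refinement of the pattern above — an addition for illustration, not part of the original exercise: a non-capturing group (?:...) keeps the optional century while letting findall return whole dates instead of tuples of groups. The sentence searched here is invented:

```python
import re

# Sketch: (?:...) groups without capturing, so findall returns the full
# match rather than (date, century) tuples. The text below is made up.
texte = "ne le 28/12/1903, marie le 8/5/45, mort le 08/02/1957"
pattern = re.compile(r"[0-3]?[0-9]/[0-1]?[0-9]/(?:[0-2][0-9])?[0-9][0-9]")
print(pattern.findall(texte))  # → ['28/12/1903', '8/5/45', '08/02/1957']
```

This avoids the list-of-pairs behavior noted earlier, where every capturing group adds an element to each findall result.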
liufuyang/deep_learning_tutorial
course-deeplearning.ai/course1-nn-and-deeplearning/Logistic+Regression+with+a+Neural+Network+mindset+v3.ipynb
mit
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

%matplotlib inline
""" Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
    - Initializing parameters
    - Calculating the cost function and its gradient
    - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation """
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
""" Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code. End of explanation """ # Example of a picture index = 25 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.") """ Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing). Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images. End of explanation """ train_set_y.shape ### START CODE HERE ### (≈ 3 lines of code) m_train = train_set_y.shape[1] m_test = test_set_y.shape[1] num_px = train_set_x_orig.shape[1] ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) """ Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. Exercise: Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image) Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). 
For instance, you can access m_train by writing train_set_x_orig.shape[0]. End of explanation """
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
""" Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%"> <tr> <td>**m_train**</td> <td> 209 </td> </tr> <tr> <td>**m_test**</td> <td> 50 </td> </tr> <tr> <td>**num_px**</td> <td> 64 </td> </tr> </table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $\times$ num_px $\times$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\times$ num_px $\times$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$\times$c$\times$d, a) is to use:
python X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation """
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
""" Explanation: Expected Output: <table style="width:35%"> <tr> <td>**train_set_x_flatten shape**</td> <td> (12288, 209)</td> </tr> <tr> <td>**train_set_y shape**</td> <td>(1, 209)</td> </tr> <tr> <td>**test_set_x_flatten shape**</td> <td>(12288, 50)</td> </tr> <tr> <td>**test_set_y shape**</td> <td>(1, 50)</td> </tr> <tr> <td>**sanity check after reshaping**</td> <td>[17 31 56 22 33]</td> </tr> </table> To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255. One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). <!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !--> Let's standardize our dataset. End of explanation """ # GRADED FUNCTION: sigmoid def sigmoid(z): """ Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z) """ ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1.0 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2])))) """ Explanation: <font color='blue'> What you need to remember: Common steps for pre-processing a new dataset are: - Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...) 
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1) - "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images. You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network! <img src="images/LogReg_kiank.png" style="width:650px;height:400px;"> Mathematical expression of the algorithm: For one example $x^{(i)}$: $$z^{(i)} = w^T x^{(i)} + b \tag{1}$$ $$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$ The cost is then computed by summing over all training examples: $$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$ Key steps: In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm ## The main steps for building a Neural Network are: 1. Define the model structure (such as number of input features) 2. Initialize the model's parameters 3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent) You often build 1-3 separately and integrate them into one function we call model(). 4.1 - Helper functions Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp(). 
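One identity is worth noting at this point (an added aside, not part of the original assignment), because it is what makes the gradient formulas further down so compact: the sigmoid's derivative can be expressed in terms of the function itself,

```latex
\sigma'(z) = \frac{d}{dz}\,\frac{1}{1+e^{-z}}
           = \frac{e^{-z}}{(1+e^{-z})^2}
           = \sigma(z)\bigl(1-\sigma(z)\bigr)
```

so the backward pass can reuse the activations already computed in the forward pass instead of re-evaluating any exponentials.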
End of explanation """ # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ ### START CODE HERE ### (≈ 1 line of code) w = np.zeros((dim, 1)) b = 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) """ Explanation: Expected Output: <table> <tr> <td>**sigmoid([0, 2])**</td> <td> [ 0.5 0.88079708]</td> </tr> </table> 4.2 - Initializing parameters Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation. End of explanation """ # GRADED FUNCTION: propagate def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. 
np.log(), np.dot() """ m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) # compute activation cost = - 1.0 / m * np.sum(Y * np.log(A) + (1.0 - Y) * np.log(1-A)) # compute cost ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1.0 / m * np.dot(X, (A - Y).T) db = 1.0 / m * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) """ Explanation: Expected Output: <table style="width:15%"> <tr> <td> ** w ** </td> <td> [[ 0.] [ 0.]] </td> </tr> <tr> <td> ** b ** </td> <td> 0 </td> </tr> </table> For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagation Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters. Exercise: Implement a function propagate() that computes the cost function and its gradient. 
Hints: Forward Propagation: - You get X - You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$ - You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$ Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$ $$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ End of explanation """ # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """ This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b. 
""" costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) """ Explanation: Expected Output: <table style="width:50%"> <tr> <td> ** dw ** </td> <td> [[ 0.99993216] [ 1.99980262]]</td> </tr> <tr> <td> ** db ** </td> <td> 0.499935230625 </td> </tr> <tr> <td> ** cost ** </td> <td> 6.000064773192205</td> </tr> </table> d) Optimization You have initialized your parameters. You are also able to compute a cost function and its gradient. Now, you want to update the parameters using gradient descent. Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate. 
End of explanation """
# GRADED FUNCTION: predict
def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    m = X.shape[1]
    Y_prediction = np.zeros((1,m))
    w = w.reshape(X.shape[0], 1)
    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
    ### END CODE HERE ###
    for i in range(A.shape[1]):
        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        Y_prediction[0, i] = A[0,i] > 0.5
        ### END CODE HERE ###
    assert(Y_prediction.shape == (1, m))
    return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
""" Explanation: Expected Output:
<table style="width:40%"> <tr> <td> **w** </td> <td>[[ 0.1124579 ] [ 0.23106775]] </td> </tr> <tr> <td> **b** </td> <td> 1.55930492484 </td> </tr> <tr> <td> **dw** </td> <td> [[ 0.90158428] [ 1.76250842]] </td> </tr> <tr> <td> **db** </td> <td> 0.430462071679 </td> </tr> </table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There are two steps to computing predictions:
1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation """ # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False): """ Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model. """ ### START CODE HERE ### # initialize parameters with zeros (≈ 1 line of code) w, b = initialize_with_zeros(X_train.shape[0]) # Gradient descent (≈ 1 line of code) parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost = print_cost) # Retrieve parameters w and b from dictionary "parameters" w = parameters["w"] b = parameters["b"] # Predict test/train set examples (≈ 2 lines of code) Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d """ Explanation: Expected Output: <table style="width:30%"> <tr> <td> **predictions** </td> <td> [[ 1. 
1.]] </td> </tr> </table> <font color='blue'> What to remember: You've implemented several functions that: - Initialize (w,b) - Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent - Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order. Exercise: Implement the model function. Use the following notation: - Y_prediction for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize() End of explanation """ d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) """ Explanation: Run the following cell to train your model. End of explanation """ # Example of a picture that was wrongly classified. index = 1 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.") """ Explanation: Expected Output: <table style="width:40%"> <tr> <td> **Train Accuracy** </td> <td> 99.04306220095694 % </td> </tr> <tr> <td>**Test Accuracy** </td> <td> 70.0 % </td> </tr> </table> Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test error is 68%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week! Also, you see that the model is clearly overfitting the training data. 
Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set. End of explanation """ # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() """ Explanation: Let's also plot the cost function and the gradients. End of explanation """ learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show() """ Explanation: Interpretation: You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate Reminder: In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. 
If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate. Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens. End of explanation """ ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "my_image.jpg" # change this to the name of your image file ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") """ Explanation: Interpretation: - Different learning rates give different costs and thus different predictions results. - If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy. - In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. 
You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! End of explanation """
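A closing aside on the earlier predict() exercise (this cell is an addition, not part of the graded assignment): the thresholding loop that the hint said could be vectorized can indeed be replaced by a single NumPy comparison over the whole activation row. The sample w and b values from the expected-output table above are reused here purely as illustrative inputs:

```python
import numpy as np

def predict_vectorized(w, b, X):
    # Same contract as predict() above, but without the Python-level loop:
    # compare the entire activation row against 0.5 at once.
    A = 1.0 / (1.0 + np.exp(-(np.dot(w.T, X) + b)))
    return (A > 0.5).astype(float)

w = np.array([[0.1124579], [0.23106775]])
b = 1.55930492484
X = np.array([[1., -1.1, -3.2], [1.2, 2., 0.1]])
print(predict_vectorized(w, b, X))
# → [[1. 1. 1.]]
```

Broadcasting the comparison keeps the output shape (1, m) and avoids the per-example loop entirely.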
b4be1/ball_catcher
src3d/help/demo.ipynb
cc0-1.0
from casadi import *
from casadi.tools import *  # for dotdraw
from matplotlib.pyplot import *
%matplotlib inline
x = SX.sym("x")  # scalar symbolic primitives
y = SX.sym("y")
z = x*sin(x+y)   # common mathematical operators
print z
dotdraw(z,direction="BT")
J = jacobian(z,x)
print J
dotdraw(J,direction="BT")
""" Explanation: CasADi demo
What is CasADi?
- A tool for quick & efficient implementation of algorithms for dynamic optimization
- Open source, LGPL-licensed, <a href="http://casadi.org">casadi.org</a>
- C++ / C++11
- Interfaces to Python, Haskell, (Matlab?)
- Numerical backends: <a href="https://projects.coin-or.org/Ipopt">IPOPT</a>, <a href="http://computation.llnl.gov/casc/sundials/main.html">Sundials</a>, ...
- Developers in group of Moritz Diehl: Joel Andersson, Joris Gillis, Greg Horn
Outline of demo
- Scalar expression (SX) graphs
- Functions of SX graphs
- Matrices of scalar expressions
- Automatic differentiation (AD)
- Integrators
- Matrix expression (MX) graphs
- Functions of MX graphs
- Solving an optimal control problem
Scalar expression (SX) graphs
End of explanation """
print x*y/x-y
H = hessian(z,x)
print H
""" Explanation: Note 1: subexpressions are shared. Graph $\leftrightarrow$ Tree
Different from Maple, Matlab symbolic, sympy, ...
A (very) little bit of Computer Algebra End of explanation """ f = SXFunction('f',[x,y],[z]) f.init() print f """ Explanation: Functions of SX graphs Sort graph into algorithm End of explanation """ f.setInput(1.2,0) f.setInput(3.4,1) f.evaluate() print f.getOutput(0) print f([1.2,3.4]) print f([1.2,x+y]) f.generate("f") print file("f.c").read() """ Explanation: Note 2: re-use of tape variables: live-variables End of explanation """ A = SX.sym("A",3,3) B = SX.sym("B",3) print A print solve(A,B) print trace(A) # Trace print mul(A,B) # Matrix multiplication print norm_F(A) # Frobenius norm print A[2,:] # Slicing """ Explanation: Matrices of scalar expressions End of explanation """ print A.shape, z.shape I = SX.eye(3) print I Ak = kron(I,A) print Ak """ Explanation: Rule 1: Everything is a matrix End of explanation """ with nice_stdout(): Ak.sparsity().spy() with nice_stdout(): A.sparsity().spy() with nice_stdout(): z.sparsity().spy() """ Explanation: Rule 1: Everything is a sparse matrix End of explanation """ t = SX.sym("t") # time u = SX.sym("u") # control x = SX.sym("[p,q,c]") # state p,q,c = x ode = vertcat([(1 - q**2)*p - q + u, p, p**2+q**2+u**2]) print ode, ode.shape J = jacobian(ode,x) print J f = SXFunction('f',[t,u,x],[ode]) f.init() ffwd = f.derivative(1,0) ffwd.init() fadj = f.derivative(0,1) fadj.init() # side-by-side printing print '{:*^24} || {:*^30} || {:*^30}'.format("f","ffwd","fadj") for l in zip(str(f).split("\n"),str(ffwd).split("\n"),str(fadj).split("\n")): print '{:<24} || {:<30} || {:<30}'.format(*l) """ Explanation: Automatic differentiation (AD) Consider an ode: \begin{equation} \dot{p} = (1 - q^2)p-q+u \end{equation} \begin{equation} \dot{q} = p \end{equation} \begin{equation} \dot{c} = p^2+q^2+u^2 \end{equation} End of explanation """ print I for i in range(3): print ffwd([ t,u,x, 0,0,I[:,i] ]) [1] print J """ Explanation: Performing forward sweeps gives the columns of J End of explanation """ for i in range(3): print fadj([ t,u,x, I[:,i] 
]) [3] """ Explanation: Performing adjoint sweeps gives the rows of J End of explanation """ f = SXFunction('f',daeIn(x=x,t=t,p=u),daeOut(ode=ode)) f.init() print str(f)[:327] """ Explanation: Often, you can do better than slicing with unit vectors Note 3: CasADi does graph coloring for efficient sparse jacobians Integrators $\dot{x}=f(x,u,t)$ with $x = [p,q,c]^T$ End of explanation """ tf = 10.0 N = 20 dt = tf/N Phi = Integrator('integrator',"cvodes",f) Phi.setOption("name","Phi") Phi.setOption("tf",dt) Phi.init() x0 = vertcat([0,1,0]) Phi.setInput(x0,"x0") Phi.evaluate() print Phi.getOutput() x = x0 xs = [x] for i in range(N): Phi.setInput(x,"x0") Phi.evaluate() x = Phi.getOutput() xs.append(x) plot(horzcat(xs).T) legend(["p","q","c"]) """ Explanation: Construct an integrating block $x_{k+1} = \Phi(f;\Delta t;x_k,u_k)$ End of explanation """ n = 3 A = SX.sym("A",n,n) B = SX.sym("B",n,n) C = mul(A,B) print C dotdraw(C,direction='BT') """ Explanation: Rule 2: Everything is a Function (see http://docs.casadi.org) Matrix expression (MX) graphs Note 4: this is what makes CasADi stand out among AD tools Recall End of explanation """ A = MX.sym("A",n,n) B = MX.sym("B",n,n) C = mul(A,B) print C dotdraw(C,direction='BT') """ Explanation: What if you don't want to expand into scalar operations? ( avoid $O(n^3)$ storage) End of explanation """ C = solve(A,B) print C dotdraw(C,direction='BT') X0 = MX.sym("x",3) XF = Phi({'x0': X0})['xf'] expr = sin(XF)+X0 dotdraw(expr,direction='BT') """ Explanation: What if you cannot expand into matrix operations? 
( numerical algorithm ) End of explanation """ F = MXFunction('F',[X0],[ expr ]) F.init() print F F.setInput(x0) F.evaluate() print F.getOutput() J = F.jacobian() J.init() print J([ x0 ]) """ Explanation: Functions of MX graphs End of explanation """ X = struct_symMX([ ( entry("x", repeat=N+1, struct=struct(["p","q","c"]) ), entry("u", repeat=N) ) ]) """ Explanation: This shows how an integrator-call can be embedded in matrix graph. More possibilities: external compiled library, a call to Matlab/Scipy Solving an optimal control problem \begin{equation} \begin{array}{cl} \underset{p(.),q(.),u(.)}{\text{minimize}} & \displaystyle \int_{0}^{T}{ p(t)^2 + q(t)^2 + u(t)^2 dt} \\ \text{subject to} & \dot{p} = (1 - q^2)p-q+u \\ & \dot{q} = p \\ & p(0) = 0, q(0) = 1 \\ &-1 \le u(t) \le 1 \end{array} \end{equation} Remember, $\dot{x}=f(x,u,t)$ with $x = [p,q,c]^T$ \begin{equation} \begin{array}{cl} \underset{x(.),u(.)}{\text{minimize}} & c(T) \\ \text{subject to} & \dot{x} = f(x,u) \\ & p(0) = 0, q(0) = 1, c(0)= 0 \\ &-1 \le u(t) \le 1 \end{array} \end{equation} Discretization with multiple shooting \begin{equation} \begin{array}{cl} \underset{x_{\bullet},u_{\bullet}}{\text{minimize}} & c_N \\ \text{subject to} & x_{k+1} - \Phi(x_k,u_k) = 0 , \quad \quad k = 0,1,\ldots, (N-1) \\ & p_0 = 0, q_0 = 1, c_0 = 0 \\ &-1 \le u_k \le 1 , \quad \quad k = 0,1,\ldots, (N-1) \end{array} \end{equation} Cast as NLP \begin{equation} \begin{array}{cl} \underset{X}{\text{minimize}} & F(X,P) \\ \text{subject to} & \text{lbx} \le X \le \text{ubx} \\ & \text{lbg} \le G(X,P) \le \text{ubg} \\ \end{array} \end{equation} End of explanation """ print X.shape print (N+1)*3+N """ Explanation: X is a symbolic matrix primitive, but with fancier indexing End of explanation """ Phi({'x0': X["x",0], 'p': X["u",0]})['xf'] """ Explanation: Demo: $\Phi(x_0,u_0)$ End of explanation """ g = [] # List of constraint expressions for k in range(N): Xf = Phi({'x0': X["x",0], 'p': X["u",0]})['xf'] g.append( 
X["x",k+1]-Xf ) obj = X["x",N,"c"] # c_N nlp = MXFunction( 'nlp', nlpIn(x=X), nlpOut(g=vertcat(g),f=obj) ) nlp.init() print nlp """ Explanation: $ x_{k+1} - \Phi(x_k,u_k) = 0 , \quad \quad k = 0,1,\ldots, (N-1)$ End of explanation """ jacG = nlp.jacobian("x","g") jacG.init() print jacG.getOutput().shape with nice_stdout(): jacG.getOutput()[:20,:20].sparsity().spy() """ Explanation: Block structure in the constraint Jacobian End of explanation """ solver = NlpSolver('solver',"ipopt",nlp) solver.init() solver.setInput(0,"lbg") # Equality constraints for shooting constraints solver.setInput(0,"ubg") # 0 <= g <= 0 lbx = X(-inf) ubx = X(inf) lbx["u",:] = -1; ubx["u",:] = 1 # -1 <= u(t) <= 1 lbx["x",0] = ubx["x",0] = x0 # Initial condition solver.setInput(lbx,"lbx") solver.setInput(ubx,"ubx") with nice_stdout(): solver.evaluate() print solver.getOutput() sol = X(solver.getOutput()) plot(horzcat(sol["x",:]).T) step(range(N),sol["u",:]) """ Explanation: Recall \begin{equation} \begin{array}{cl} \underset{X}{\text{minimize}} & F(X,P) \\ \text{subject to} & \text{lbx} \le X \le \text{ubx} \\ & \text{lbg} \le G(X,P) \le \text{ubg} \\ \end{array} \end{equation} End of explanation """ from IPython.display import YouTubeVideo YouTubeVideo('tmjIBpb43j0') YouTubeVideo('SW6ZJzcMWAk') """ Explanation: Wrapping up Showcase: kite-power optimization by Greg Horn, using CasADi backend End of explanation """
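The optimal control demo above builds its shooting map $x_{k+1} = \Phi(x_k,u_k)$ with CVODES through CasADi's Integrator class. As a rough, self-contained illustration of the same idea, the sketch below integrates the Van der Pol dynamics from the problem statement (with the running cost carried as the extra state $c$) using plain forward Euler; the step sizes, substep count, and the zero control are assumptions for illustration, not part of the original demo.

```python
import math

def phi(x, u, dt=0.5, substeps=50):
    """One shooting block x_{k+1} = Phi(x_k, u_k), approximated here
    with fixed-step forward Euler instead of CVODES."""
    p, q, c = x
    h = dt / substeps
    for _ in range(substeps):
        dp = (1.0 - q * q) * p - q + u   # Van der Pol dynamics
        dq = p
        dc = p * p + q * q + u * u       # running cost integrated as a state
        p, q, c = p + h * dp, q + h * dq, c + h * dc
    return [p, q, c]

# Simulate N = 20 blocks from x0 = [0, 1, 0] with the control held at zero.
x = [0.0, 1.0, 0.0]
traj = [x]
for k in range(20):
    x = phi(x, 0.0)
    traj.append(x)
print(len(traj), traj[-1])
```

With a control sequence taken from the NLP solution instead of zeros, the same loop reproduces the multiple-shooting trajectory block by block.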
DJCordhose/ai
notebooks/ml/4-tf-keras-nn.ipynb
mit
import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import pandas as pd print(pd.__version__) import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) """ Explanation: Neural Networks with TensorFlow and Keras End of explanation """ df = pd.read_csv('./insurance-customers-1500.csv', sep=';') y=df['group'] df.drop('group', axis='columns', inplace=True) X = df.as_matrix() df.describe() """ Explanation: First Step: Load Data and disassemble for our purposes We need a few more data point samples for this approach End of explanation """ # ignore this, it is just technical code # should come from a lib, consider it to appear magically # http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD']) cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00']) cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD']) font_size=25 def meshGrid(x_data, y_data): h = 1 # step size in the mesh x_min, x_max = x_data.min() - 1, x_data.max() + 1 y_min, y_max = y_data.min() - 1, y_data.max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) return (xx,yy) def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fixed=None, fname=None, print=False): xx,yy = meshGrid(x_data, y_data) plt.figure(figsize=(20,10)) if clf and mesh: grid_X = np.array(np.c_[yy.ravel(), xx.ravel()]) if fixed: fill_values = np.full((len(grid_X), 1), fixed) grid_X = np.append(grid_X, fill_values, axis=1) Z = clf.predict(grid_X) Z = np.argmax(Z, axis=1) Z = Z.reshape(xx.shape) plt.pcolormesh(xx, yy, Z, cmap=cmap_light) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) if print: plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k') else: plt.scatter(x_data, y_data, c=colors, 
cmap=cmap_bold, s=80, marker='o', edgecolors='k') plt.xlabel(x_label, fontsize=font_size) plt.ylabel(y_label, fontsize=font_size) plt.title(title, fontsize=font_size) if fname: plt.savefig(fname) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y) X_train.shape, y_train.shape, X_test.shape, y_test.shape # tiny little pieces of feature engeneering num_categories = 3 y_train_categorical = tf.keras.utils.to_categorical(y_train, num_categories) y_test_categorical = tf.keras.utils.to_categorical(y_test, num_categories) inputs = tf.keras.Input(name='input', shape=(3, )) x = tf.keras.layers.Dense(100, name='hidden1', activation='relu')(inputs) x = tf.keras.layers.Dense(100, name='hidden2', activation='relu')(x) predictions = tf.keras.layers.Dense(3, name='softmax', activation='softmax')(x) model = tf.keras.models.Model(inputs=inputs, outputs=predictions) # loss function: http://cs231n.github.io/linear-classify/#softmax model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() %time model.fit(X_train, y_train_categorical, epochs=500, batch_size=100) train_loss, train_accuracy = model.evaluate(X_train, y_train_categorical, batch_size=100) train_accuracy test_loss, test_accuracy = model.evaluate(X_test, y_test_categorical, batch_size=100) test_accuracy model.save('insurance.hdf5') """ Explanation: Second Step: Deep Learning as Alchemy End of explanation """ kms_per_year = 20 plotPrediction(model, X_test[:, 1], X_test[:, 0], 'Age', 'Max Speed', y_test, fixed = kms_per_year, title="Test Data Max Speed vs Age with Prediction, 20 km/year") kms_per_year = 50 plotPrediction(model, X_test[:, 1], X_test[:, 0], 'Age', 'Max Speed', y_test, fixed = kms_per_year, title="Test Data Max Speed vs Age with Prediction, 50 km/year") kms_per_year = 5 plotPrediction(model, X_test[:, 1], X_test[:, 0], 'Age', 'Max Speed', y_test, fixed = kms_per_year, 
title="Test Data Max Speed vs Age with Prediction, 5 km/year") """ Explanation: Look at all the different shapes for different kilometers per year now we have three dimensions, so we need to set one to a certain number End of explanation """
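The "feature engineering" step in this notebook relies on tf.keras.utils.to_categorical to one-hot encode the three customer groups. Its effect is easy to sketch in plain Python; this toy re-implementation is for illustration only and ignores the edge cases the real helper handles.

```python
def to_categorical(labels, num_classes):
    """Minimal one-hot encoding, mirroring what
    tf.keras.utils.to_categorical does for integer labels."""
    out = []
    for y in labels:
        row = [0.0] * num_classes   # one row per sample, all zeros...
        row[int(y)] = 1.0           # ...except a 1.0 at the class index
        out.append(row)
    return out

print(to_categorical([0, 2, 1], 3))
```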
Upward-Spiral-Science/team1
code/Spike Imaging.ipynb
apache-2.0
# Spike images from mpl_toolkits.mplot3d import axes3d import numpy as np import urllib2 import scipy.stats as stats import matplotlib.pyplot as plt from image_builder import get_image np.set_printoptions(precision=3, suppress=True) url = ('https://raw.githubusercontent.com/Upward-Spiral-Science' '/data/master/syn-density/output.csv') data = urllib2.urlopen(url) csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels) # chopping data based on thresholds on x and y coordinates x_bounds = (409, 3529) y_bounds = (1564, 3124) def check_in_bounds(row, x_bounds, y_bounds): if row[0] < x_bounds[0] or row[0] > x_bounds[1]: return False if row[1] < y_bounds[0] or row[1] > y_bounds[1]: return False if row[3] == 0: return False return True indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds)) data_thresholded = csv[indices_in_bound] n = data_thresholded.shape[0] def synapses_over_unmasked(row): s = (row[4]/row[3])*(64**3) return [row[0], row[1], row[2], s] syn_unmasked = np.apply_along_axis(synapses_over_unmasked, 1, data_thresholded) syn_normalized = syn_unmasked """ Explanation: Imaging the Spike End of explanation """ a = np.apply_along_axis(lambda x:x[4]/x[3], 1, data_thresholded) spike = a[np.logical_and(a <= 0.0015, a >= 0.0012)] n, bins, _ = plt.hist(spike, 2000) bin_max = np.where(n == n.max()) bin_width = bins[1]-bins[0] syn_normalized[:,3] = syn_normalized[:,3]/(64**3) spike = syn_normalized[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)] spike_thres = data_thresholded[np.logical_and(syn_normalized[:,3] <= 0.00131489435301+bin_width, syn_normalized[:,3] >= 0.00131489435301-bin_width)] len_spike = len(spike_thres) # Compare some of the bins represented the spike xs = np.unique(spike_thres[:,0]) ys = np.unique(spike_thres[:,1]) name = 'spike' get_image((0,10),(0,10),xs,ys,name) """ Explanation: We're going to extract images of representing 
the bins in the spike End of explanation """ %matplotlib inline unique_x = np.unique(spike_thres[:,0]) unique_y = np.unique(spike_thres[:,1]) unique_z = np.unique(spike_thres[:,2]) x_sum = [0] * len(unique_x) for i in range(len(unique_x)): x_sum[i] = sum(spike_thres[spike_thres[:,0]==unique_x[i]][:,4]) y_sum = [0] * len(unique_y) for i in range(len(unique_y)): y_sum[i] = sum(spike_thres[spike_thres[:,1]==unique_y[i]][:,4]) z_sum = [0] * len(unique_z) for i in range(len(unique_z)): z_sum[i] = sum(spike_thres[spike_thres[:,2]==unique_z[i]][:,4]) plt.figure() plt.figure(figsize=(28,7)) plt.subplot(131) plt.bar(unique_x, x_sum, 1) plt.xlim(450, 3600) plt.ylabel('density in synapses/voxel',fontsize=20) plt.xlabel('x-coordinate',fontsize=20) plt.title('Total Density across Each X-Layer',fontsize=20) plt.subplot(132) plt.bar(unique_y, y_sum, 1) plt.xlim(1570, 3190) plt.ylabel('density in synapses/voxel',fontsize=20) plt.xlabel('y-coordinate',fontsize=20) plt.title('Total Density across Each Y-Layer',fontsize=20) plt.subplot(133) plt.bar(unique_z, z_sum, 1) plt.ylabel('density in synapses/voxel',fontsize=20) plt.xlabel('z-coordinate',fontsize=20) plt.title('Total Density across Each Z-Layer',fontsize=20) """ Explanation: <img src='spike0_0.bmp' style="width: 800px;"/> Distributions of Synapses across x, y, z in spike End of explanation """
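The thresholding at the top of this notebook (check_in_bounds plus the unmasked-voxel normalization by 64^3) can be illustrated without downloading the real CSV. The sketch below applies the same bounds and normalization to a few made-up rows; the sample values are assumptions.

```python
def in_bounds(row, x_bounds, y_bounds):
    # row = [x, y, z, unmasked, synapses]; drop rows outside the window
    # or with zero unmasked voxels (mirrors check_in_bounds above).
    x, y, _, unmasked, _ = row
    return (x_bounds[0] <= x <= x_bounds[1]
            and y_bounds[0] <= y <= y_bounds[1]
            and unmasked != 0)

rows = [
    [500, 2000, 55, 64**3 // 2, 120],   # kept
    [100, 2000, 55, 64**3, 300],        # dropped: x below x_bounds
    [500, 2000, 55, 0, 10],             # dropped: unmasked == 0
]
kept = [r for r in rows if in_bounds(r, (409, 3529), (1564, 3124))]
# synapses per unmasked voxel, rescaled to the full 64^3 bin volume
density = [r[4] / r[3] * 64**3 for r in kept]
print(len(kept), density)
```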
wso2/product-apim
modules/recommendation-engine/repository/resources/Word2vec_Model/.ipynb_checkpoints/Build_Word2vec_model-checkpoint.ipynb
apache-2.0
model_1 = gensim.models.Word2Vec (dataset, size=300, window=10, min_count=5, workers=10) model_1.train(dataset,total_examples=len(dataset),epochs=15) """ Explanation: The 'Dataset.txt' file consists of API descriptions of over 15,000 APIs. Using the 'Dataset_PW.txt' file, a dataset which consists of sentences, is created. End of explanation """ model_1.save("word2vec_model1.model") """ Explanation: Using gensim, a word2vec model is built and trained using the above dataset. End of explanation """
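The cells above assume dataset is already a list of tokenized sentences, the shape gensim's Word2Vec expects. A minimal sketch of producing that shape from raw text is below; the regular expressions are illustrative assumptions, not the actual preprocessing used to build 'Dataset_PW.txt'.

```python
import re

def sentences_to_tokens(text):
    """Toy pre-processing into a list of token lists: lowercase,
    split on sentence-ending punctuation, keep alphanumeric tokens."""
    dataset = []
    for sent in re.split(r'[.!?]', text.lower()):
        tokens = re.findall(r'[a-z0-9]+', sent)
        if tokens:                # skip empty trailing splits
            dataset.append(tokens)
    return dataset

dataset = sentences_to_tokens("This API manages orders. It also tracks payments!")
print(dataset)
```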
moble/PostNewtonian
PNTerms/Precession.ipynb
mit
Precession_ellHat = PNCollection() Precession_chiVec1 = PNCollection() Precession_chiVec2 = PNCollection() """ Explanation: The following PNCollection objects will contain all the terms describing precession. End of explanation """ Precession_ellHat.AddDerivedVariable('gamma_PN_coeff', v**2) Precession_ellHat.AddDerivedConstant('gamma_PN_0', 1) # gamma_PN_1 is 0 Precession_ellHat.AddDerivedConstant('gamma_PN_2', -nu/3 + 1) Precession_ellHat.AddDerivedVariable('gamma_PN_3', (5*S_ell/3 + Sigma_ell*delta)/M**2) Precession_ellHat.AddDerivedConstant('gamma_PN_4', -65*nu/12 + 1) Precession_ellHat.AddDerivedVariable('gamma_PN_5', ((frac(8,9)*nu + frac(10,3))*S_ell + 2*Sigma_ell*delta)/M**2) Precession_ellHat.AddDerivedConstant('gamma_PN_6', nu**3/81 + 229*nu**2/36 - 41*pi**2*nu/192 - 2203*nu/2520 + 1) Precession_ellHat.AddDerivedVariable('gamma_PN_7', ((-6*nu**2 - 127*nu/12 + 5)*S_ell - 8*Sigma_ell*delta*nu**2/3 + (-61*nu/6 + 3)*Sigma_ell*delta)/M**2) Precession_ellHat.AddDerivedVariable('a_ell_coeff', v**7/M**3) Precession_ellHat.AddDerivedVariable('a_ell_0', 7*S_n + 3*Sigma_n*delta) # gamma_PN_1 is 0 Precession_ellHat.AddDerivedVariable('a_ell_2', (-29*nu/3-10)*S_n + (-9*nu/2-6)*delta*Sigma_n) # gamma_PN_3 is 0 Precession_ellHat.AddDerivedVariable('a_ell_4', (frac(52,9)*nu**2 + frac(59,4)*nu + frac(3,2))*S_n + (frac(17,6)*nu**2 + frac(73,8)*nu + frac(3,2))*delta*Sigma_n) def Precession_ellHatExpression(PNOrder=frac(7,2)): OmegaVec_ellHat = (gamma_PN_coeff.substitution*a_ell_coeff.substitution/v**3)\ *horner(sum([key*(v**n) for n in range(2*PNOrder+1) for key,val in Precession_ellHat.items() if val==('gamma_PN_{0}'.format(n))]))\ *horner(sum([key*(v**n) for n in range(2*PNOrder+1) for key,val in Precession_ellHat.items() if val==('a_ell_{0}'.format(n))])) return OmegaVec_ellHat # Precession_ellHatExpression() """ Explanation: Precession of orbital angular velocity $\vec{\Omega}_{\hat{\ell}}$ Bohé et al. 
(2013) say that the precession of the orbital angular velocity is along $\hat{n}$, with magnitude (in their notation) $a_{\ell}/r\omega = \gamma\, a_{\ell} / v^3$. NOTE: There is a 3pN gauge term in $\gamma_{\text{PN}}$ that I have simply dropped here. It is $\ln(r/r_0')$. The following two cells are Eqs. (4.3) and (4.4) of Bohé et al. (2013), respectively. End of explanation """ Precession_chiVec1.AddDerivedVariable('Omega1_coeff', v**5/M) Precession_chiVec1.AddDerivedVariable('OmegaVec1_SO_0', (frac(3,4) + frac(1,2)*nu - frac(3,4)*delta)*ellHat, datatype=ellHat.datatype) Precession_chiVec1.AddDerivedVariable('OmegaVec1_SO_2', (frac(9,16) + frac(5,4)*nu - frac(1,24)*nu**2 + delta*(-frac(9,16)+frac(5,8)*nu))*ellHat, datatype=ellHat.datatype) Precession_chiVec1.AddDerivedVariable('OmegaVec1_SO_4', (frac(27,32) + frac(3,16)*nu - frac(105,32)*nu**2 - frac(1,48)*nu**3 + delta*(-frac(27,32) + frac(39,8)*nu - frac(5,32)*nu**2))*ellHat, datatype=ellHat.datatype) """ Explanation: Precession of spins $\vec{\Omega}_{1,2}$ Equation (4.5) of Bohé et al. (2013) gives spin-orbit terms: End of explanation """ Precession_chiVec1.AddDerivedVariable('OmegaVec1_SS_1', M2**2*(-chiVec2+3*chi2_n*nHat)/M**2, datatype=nHat.datatype) #print("WARNING: OmegaVec1_SS_1 in Precession.ipynb is disabled temporarily") """ Explanation: In his Eqs. (2.4), Kidder (1995) summarized certain spin-spin terms: End of explanation """ Precession_chiVec1.AddDerivedVariable('OmegaVec1_QM_1', 3*nu*chi1_n*nHat, datatype=nHat.datatype) #print("WARNING: OmegaVec1_QM_1 in Precession.ipynb is disabled temporarily") """ Explanation: NOTE: Is Etienne's notation consistent with others? It seems like when he introduces other people's terms, he mixes $\hat{n}$ and $\hat{\ell}$. Finally, in his Eq. 
(2.7) Racine (2008) added a quadrupole-monopole term along $\hat{n}$: End of explanation """ for key,val in Precession_chiVec1.items(): try: tmp = key.substitution.subs({delta: -delta, M1:'swap1', M2:M1, chi1_n:'swap2', chi2_n:chi1_n, chiVec1:'swap3', chiVec2:chiVec1}).subs({'swap1':M2,'swap2':chi2_n,'swap3':chiVec2}) Precession_chiVec2.AddDerivedVariable(val.replace('OmegaVec1', 'OmegaVec2').replace('Omega1', 'Omega2'), tmp, datatype=key.datatype) except AttributeError: Precession_chiVec2.AddDerivedVariable(val.replace('OmegaVec1', 'OmegaVec2').replace('Omega1', 'Omega2'), key, datatype=key.datatype) """ Explanation: For the precession vector of the other spin, rather than re-entering the same things with 1 and 2 swapped, we just let python do it: End of explanation """ def Precession_chiVec1Expression(PNOrder=frac(7,2)): return Omega1_coeff*collect(expand(sum([key.substitution*(v**n) for n in range(2*PNOrder+1) for key,val in Precession_chiVec1.items() if val.endswith('_{0}'.format(n))])), [ellHat,nHat,lambdaHat,chiVec1,chiVec2], horner) def Precession_chiVec2Expression(PNOrder=frac(7,2)): return Omega2_coeff*collect(expand(sum([key.substitution*(v**n) for n in range(2*PNOrder+1) for key,val in Precession_chiVec2.items() if val.endswith('_{0}'.format(n))])), [ellHat,nHat,lambdaHat,chiVec1,chiVec2], horner) # Precession_chiVec1Expression() """ Explanation: Finally, we define functions to put them together: End of explanation """
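The 1↔2 swap above routes each sympy substitution through a temporary placeholder ('swap1', 'swap2', ...) so the simultaneous exchange does not clobber itself. The same pattern can be illustrated on plain strings; this toy version ignores the delta sign flip and assumes the labels do not overlap as substrings.

```python
def swap_labels(expr, pairs):
    """Swap each (a, b) pair simultaneously by first parking every 'a'
    in a placeholder, mirroring the 'swap1'/'swap2' trick used above."""
    for i, (a, b) in enumerate(pairs):
        expr = expr.replace(a, f"@swap{i}@")   # a -> placeholder
    for i, (a, b) in enumerate(pairs):
        expr = expr.replace(b, a)              # b -> a
        expr = expr.replace(f"@swap{i}@", b)   # placeholder -> b
    return expr

print(swap_labels("M1*chi1_n + M2*chi2_n", [("M1", "M2"), ("chi1_n", "chi2_n")]))
```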
SheffieldML/notebook
GPy/coregionalized_regression_tutorial.ipynb
bsd-3-clause
%pylab inline
import pylab as pb
pylab.ion()
import GPy
"""
Explanation: Coregionalized Regression Model (vector-valued regression)
updated: 17th June 2015 by Ricardo Andrade-Pacheco
This tutorial will focus on the use and kernel selection of the $\color{firebrick}{\textbf{coregionalized regression}}$ model in GPy.
Setup
The first thing to do is to set the plots to be interactive and to import GPy.
End of explanation
"""
#These functions generate data corresponding to two outputs
f_output1 = lambda x: 4. * np.cos(x/5.) - .4*x - 35. + np.random.rand(x.size)[:,None] * 2.
f_output2 = lambda x: 6. * np.cos(x/5.) + .2*x + 35. + np.random.rand(x.size)[:,None] * 8.
#{X,Y} training set for each output
X1 = np.random.rand(100)[:,None]; X1=X1*75
X2 = np.random.rand(100)[:,None]; X2=X2*70 + 30
Y1 = f_output1(X1)
Y2 = f_output2(X2)
#{X,Y} test set for each output
Xt1 = np.random.rand(100)[:,None]*100
Xt2 = np.random.rand(100)[:,None]*100
Yt1 = f_output1(Xt1)
Yt2 = f_output2(Xt2)
"""
Explanation: For this example we will generate an artificial dataset. 
End of explanation """ xlim = (0,100); ylim = (0,50) fig = pb.figure(figsize=(12,8)) ax1 = fig.add_subplot(211) ax1.set_xlim(xlim) ax1.set_title('Output 1') ax1.plot(X1[:,:1],Y1,'kx',mew=1.5,label='Train set') ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5,label='Test set') ax1.legend() ax2 = fig.add_subplot(212) ax2.set_xlim(xlim) ax2.set_title('Output 2') ax2.plot(X2[:,:1],Y2,'kx',mew=1.5,label='Train set') ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5,label='Test set') ax2.legend() """ Explanation: Our two datasets look like this: End of explanation """ def plot_2outputs(m,xlim,ylim): fig = pb.figure(figsize=(12,8)) #Output 1 ax1 = fig.add_subplot(211) ax1.set_xlim(xlim) ax1.set_title('Output 1') m.plot(plot_limits=xlim,fixed_inputs=[(1,0)],which_data_rows=slice(0,100),ax=ax1) ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5) #Output 2 ax2 = fig.add_subplot(212) ax2.set_xlim(xlim) ax2.set_title('Output 2') m.plot(plot_limits=xlim,fixed_inputs=[(1,1)],which_data_rows=slice(100,200),ax=ax2) ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5) """ Explanation: We will also define a function that will be used later for plotting our results. End of explanation """ import GPy K=GPy.kern.RBF(1) B = GPy.kern.Coregionalize(input_dim=1,output_dim=2) multkernel = K.prod(B,name='B.K') print multkernel """ Explanation: Covariance kernel The coregionalized regression model relies on the use of $\color{firebrick}{\textbf{multiple output kernels}}$ or $\color{firebrick}{\textbf{vector-valued kernels}}$ (Álvarez, Rosasco and Lawrence, 2012), of the following form: $ \begin{align} {\bf B}\otimes{\bf K} = \left(\begin{array}{ccc} B_{1,1}\times{\bf K}({\bf X}{1},{\bf X}{1}) & \ldots & B_{1,D}\times{\bf K}({\bf X}{1},{\bf X}{D})\ \vdots & \ddots & \vdots\ B_{D,1}\times{\bf K}({\bf X}{D},{\bf X}{1}) & \ldots & B_{D,D}\times{\bf K}({\bf X}{D},{\bf X}{D}) \end{array}\right) \end{align} $. 
In the expression above, ${\bf K}$ is a kernel function, ${\bf B}$ is a regarded as the coregionalization matrix, and ${\bf X}_i$ represents the inputs corresponding to the $i$-th output. Notice that if $B_{i,j} = 0$ for $i \neq j$, then all the outputs are being considered as independent of each other. To ensure that the multiple output kernel is a valid kernel, we need the $\bf K$ and ${\bf B}$ to be to be valid. If $\bf K$ is already a valid kernel, we just need to ensure that ${\bf B}$ is positive definite. The last is achieved by defining ${\bf B} = {\bf W}{\bf W}^\top + {\boldsymbol \kappa}{\bf I}$, for some matrix $\bf W$ and vector ${\boldsymbol \kappa}$. In GPy, a multiple output kernel is defined in the following way: End of explanation """ #Components of B print 'W matrix\n',B.W print '\nkappa vector\n',B.kappa print '\nB matrix\n',B.B """ Explanation: The components of the kernel can be accessed as follows: End of explanation """ icm = GPy.util.multioutput.ICM(input_dim=1,num_outputs=2,kernel=GPy.kern.RBF(1)) print icm """ Explanation: We have built a function called $\color{firebrick}{\textbf{ICM}}$ that deals with the steps of defining two kernels and multiplying them together. End of explanation """ K = GPy.kern.Matern32(1) icm = GPy.util.multioutput.ICM(input_dim=1,num_outputs=2,kernel=K) m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=icm) m['.*Mat32.var'].constrain_fixed(1.) #For this kernel, B.kappa encodes the variance now. m.optimize() print m plot_2outputs(m,xlim=(0,100),ylim=(-20,60)) """ Explanation: Now we will show how to add different kernels together to model the data in our example. Using the GPy's Coregionalized Regression Model Once we have defined an appropiate kernel for our model, its use is straightforward. In the next example we will use a $\color{firebrick}{\textbf{Matern-3/2 kernel}}$ as $\bf K$. 
End of explanation
"""
K = GPy.kern.Matern32(1)
m1 = GPy.models.GPRegression(X1,Y1,kernel=K.copy())
m1.optimize()
m2 = GPy.models.GPRegression(X2,Y2,kernel=K.copy())
m2.optimize()
fig = pb.figure(figsize=(12,8))
#Output 1
ax1 = fig.add_subplot(211)
m1.plot(plot_limits=xlim,ax=ax1)
ax1.plot(Xt1[:,:1],Yt1,'rx',mew=1.5)
ax1.set_title('Output 1')
#Output 2
ax2 = fig.add_subplot(212)
m2.plot(plot_limits=xlim,ax=ax2)
ax2.plot(Xt2[:,:1],Yt2,'rx',mew=1.5)
ax2.set_title('Output 2')
"""
Explanation: Notice that there are two parameters for the $\color{firebrick}{\textbf{noise variance}}$. Each one corresponds to the noise of one output. But what is the advantage of this model? Well, the fit of a non-coregionalized model (i.e., two independent models) would look like this:
End of explanation
"""
There is no need to assume any sort of correlation between both means, so we can define ${\bf W} = {\bf 0}$. End of explanation """ K1 = GPy.kern.Bias(1) K2 = GPy.kern.Linear(1) lcm = GPy.util.multioutput.LCM(input_dim=1,num_outputs=2,kernels_list=[K1,K2]) m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=lcm) m['.*bias.var'].constrain_fixed(1.) m['.*W'].constrain_fixed(0) m['.*linear.var'].constrain_fixed(1.) m.optimize() plot_2outputs(m,xlim=(-20,120),ylim=(0,60)) """ Explanation: At the moment, our model is only able to explain the mean of the data. However we can notice that there is a deacreasing trend in the first output and and increasent trend in the second one. In this case we can model such a trend with a $\color{firebrick}{\textbf{linear kernel}}$. Since the linear kernel only fits a line with constant slope along the output space, there is no need to assume any correlation between outputs. We could define our new multiple output kernel as follows: ${\bf K}{ICM} = {\bf B} \otimes ( {\bf K}{Bias} + {\bf K}_{Linear} )$. However, we can also define a more general kernel of the following form: ${\bf K}{LCM} = {\bf B}_1 \otimes {\bf K}{Bias} + {\bf B}2 \otimes {\bf K}{Linear}$. GPy has also a function which saves some steps in the definition of $\color{firebrick}{\textbf{LCM}}$ kernels. End of explanation """ K1 = GPy.kern.Bias(1) K2 = GPy.kern.Linear(1) K3 = GPy.kern.Matern32(1) lcm = GPy.util.multioutput.LCM(input_dim=1,num_outputs=2,kernels_list=[K1,K2,K3]) m = GPy.models.GPCoregionalizedRegression([X1,X2],[Y1,Y2],kernel=lcm) m['.*ICM.*var'].unconstrain() m['.*ICM0.*var'].constrain_fixed(1.) m['.*ICM0.*W'].constrain_fixed(0) m['.*ICM1.*var'].constrain_fixed(1.) m['.*ICM1.*W'].constrain_fixed(0) m.optimize() plot_2outputs(m,xlim=(0,100),ylim=(-20,60)) """ Explanation: Now we will model the variation along the trend defined by the linear component. We will do this with a Matern-3/2 kernel. 
End of explanation
"""
newX = np.arange(100,110)[:,None]
newX = np.hstack([newX,np.ones_like(newX)])
print newX
"""
Explanation: Prediction at new input values
Behind the scenes, this model is using an extended input space with an additional dimension that points at the output each data point belongs to. To make use of the prediction function of GPy, this model needs the input array to have the extended format. For example, if we want to make predictions in the region 100 to 110 for the second output, we need to define the new inputs as follows:
End of explanation
"""
noise_dict = {'output_index':newX[:,1:].astype(int)}
"""
Explanation: $\color{firebrick}{\textbf{Note:}}$ remember that Python starts counting from zero, so input 1 is actually the second input. We also need to pass another output to the predict function. This is an array that tells which $\color{firebrick}{\textbf{noise model}}$ is associated to each point to be predicted. This is a dictionary constructed as follows:
End of explanation
"""
m.predict(newX,Y_metadata=noise_dict)
"""
Explanation: The astype(int) function is to ensure that the values of the dictionary are integers, otherwise Python complains when using them as indices. The prediction command can then be called this way:
End of explanation
"""
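The validity argument earlier in this tutorial — ${\bf B} = {\bf W}{\bf W}^\top + {\boldsymbol \kappa}{\bf I}$ is positive semi-definite by construction, so ${\bf B}\otimes{\bf K}$ stays a valid multi-output kernel — is easy to check numerically. The sketch below builds B with plain Python lists for a 2-output, rank-1 case (a numpy one-liner, W @ W.T + np.diag(kappa), would do the same); the W and kappa values are arbitrary assumptions.

```python
def coregionalization_matrix(W, kappa):
    """B = W W^T + diag(kappa), built with plain lists.
    W is a D x R list of lists, kappa a length-D list."""
    D = len(W)
    B = [[sum(W[i][r] * W[j][r] for r in range(len(W[i])))
          for j in range(D)] for i in range(D)]
    for i in range(D):
        B[i][i] += kappa[i]        # add the per-output kappa on the diagonal
    return B

B = coregionalization_matrix([[0.5], [1.0]], [0.1, 0.2])
print(B)
# For a 2x2 matrix, positive determinant + positive trace implies PSD.
print(B[0][0] * B[1][1] - B[0][1] * B[1][0] > 0)
```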
neildhir/DCBO
notebooks/nonstat_scm.ipynb
mit
%load_ext autoreload %autoreload 2 import sys sys.path.append("../src/") sys.path.append("..") from src.examples.example_setups import setup_nonstat_scm from src.utils.sem_utils.toy_sems import NonStationaryDependentSEM as NonStatSEM from src.utils.sem_utils.sem_estimate import build_sem_hat from src.experimental.experiments import run_methods_replicates from numpy.random import seed seed(seed=0) """ Explanation: Non-stationary SCM and DAG (NONSTAT.) from figure 3(c) in paper End of explanation """ T = 3 # In this example, as in the paper, we consider three time-slices init_sem, sem, dag_view, G, exploration_sets, intervention_domain, true_objective_values = setup_nonstat_scm(T=T) dag_view """ Explanation: Problem setup End of explanation """ # Simple networkx syntax is used to manipulate an object G.add_edge('X_0','Z_1') G.add_edge('Z_1','Y_2') # We print out the edge set just to make sure they have been properly added G.edges """ Explanation: Unlike the DAG used for the STAT. example (figure 1 in paper) we need to modify the structure of this DAG to represent the non-stationary evolution of the SEM used in the example. End of explanation """ type(G) """ Explanation: The above DAG is the graphical structure we will be working with and is faithful to the one used in figure one in the paper. dag is a networkx object i.e. End of explanation """ print(dag_view) """ Explanation: and is the one we will use for exploring the optimization methods further down. 
dag_view is simply a convenient way to visualise the structure at hand and is just a string that we visualise using pygraphviz:
End of explanation
"""
print(dag_view)
"""
Explanation: OBS. that the dag_view will not show our added edges as the object was generated before we modified the topology
Notice that we make use of two types of structural equations models: init_sem and sem. The former concerns interactions with the first time-slice in the DBN, which has no incoming edges from the previous time-slices, and is only active at $t=0$. For all other time-slices i.e. when $t>0$ the sem model is used.
Other setup parameters
End of explanation
"""
# Contains the exploration sets we will be investigating
print("Exploration sets:", exploration_sets)
# The intervention domains for the manipulative variables
print("Intervention domains:", intervention_domain)
# The true outcome values of Y given an optimal intervention on the three time-slices
print("True optimal outcome values:", [r"y^*_{} = {}".format(t,val.round(3)) for t,val in enumerate(true_objective_values)])
# Number of trials
N = 10
"""
Explanation: Explore optimization methods
Unlike the demo in stat_scm.ipynb here we are going to demonstrate the replicate method as used to generate the results in the paper. 
In all these examples we do not employ any interventional data, just observational. End of explanation """ from src.experimental.analyse_results import get_relevant_results, elaborate from src.utils.plotting import plot_expected_opt_curve_paper from matplotlib.pyplot import rc # Since we didn't save the results we cannot use the pickled file so we have to convert results to the correct format data = get_relevant_results(results=results,replicates=R) exp_optimal_outcome_values_during_trials, exp_per_trial_cost = elaborate(number_of_interventions=None, n_replicates=R, data=data, best_objective_values=true_objective_values, T=T) """ Explanation: Analyse results and plot End of explanation """ plot_params = { "linewidth": 3, "linewidth_opt": 4, "alpha": 0.1, "xlim_max": N, "ncols": 5, "loc_legend": "lower right", "size_ticks": 20, "size_labels": 20, "xlabel": r'$\texttt{cost}(\mathbf{X}_{s,t}, \mathbf{x}_{s,t})$', "labels": {'DCBO': 'DCBO', 'CBO': 'CBO', 'ABO': 'ABO', 'BO': 'BO', 'True': r'$\mathbb{E} \left [Y_t \mid \textrm{do}(\mathbf{X}_{s,t}^\star = \mathbf{x}_{s,t}^\star) \right]$'}, "colors": {'DCBO': 'blue', 'CBO': 'green', 'ABO': 'orange', 'BO': 'red', 'True': 'black'}, "line_styles": {'DCBO': '-', 'CBO': '--', 'ABO': 'dashdot', 'BO': '-', 'True': ':'}, "width":10 } rc('text', usetex=True) rc('text.latex', preamble=r'\usepackage{amssymb}') rc('font', family='serif') rc('font', size=20) # Here we demonstrate how to use the plot functionality as used in the paper. Each frame corresponds to one time-slice. As we have only run each model for ten trials, we do not expect stellar results (see final frame). plot_expected_opt_curve_paper(T, true_objective_values, exp_per_trial_cost, exp_optimal_outcome_values_during_trials, plot_params, fig_size = (15,4)) """ Explanation: Plot results End of explanation """
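Exploration sets like the ones printed earlier in this notebook are, in general, subsets of the manipulative variables. A hypothetical helper that enumerates every non-empty subset with itertools is sketched below; it is illustrative only and not the construction used by setup_nonstat_scm.

```python
from itertools import combinations

def powerset_exploration_sets(variables):
    """All non-empty subsets of the manipulative variables -- one common
    way exploration sets like [('X',), ('Z',), ('X', 'Z')] are built."""
    sets = []
    for r in range(1, len(variables) + 1):
        sets.extend(combinations(variables, r))   # all size-r subsets
    return sets

print(powerset_exploration_sets(['X', 'Z']))
```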
preigemufc/1.2016.1.notebooks
Integração Numérica.ipynb
gpl-3.0
# First of all, we import the numpy math package,
# which lets us manipulate matrices and vectors.
import numpy as np

# We declare a function that will hold all the code for
# Euler integration, which we can easily invoke whenever
# we want to numerically integrate a differential equation.
def solveE(f, x_t0, time, dt):
    'Numerically integrates a system of differential equations\
 using the Euler method. As arguments it receives a\
 function "f", an initial value "x_t0", an integration\
 interval "time" and a step size "dt". It returns two\
 vectors: "t", the time axis, and "x", the system trajectory'
    # The description above is a way of documenting our function.
    # Below, we determine the number of steps. In Python 3,
    # the division operator "/" returns a float (rational number),
    # but our number of steps needs to be of type int.
    steps = int(time/dt)
    # Below we create a list that goes from zero to the end
    # of the integration interval ("time") in steps of size
    # dt. It represents the time axis "t".
    t = np.arange(0,time,dt)
    x = [0]*(steps)
    # This is a way of declaring and initializing a list of
    # zeros with room to store all the steps.
    # The list "x" will store all the values x(t).
    # Below we store the initial condition.
    x[0] = x_t0
    for t_n in range(1,steps):
        # Our loop will have "steps" steps and "t" will vary
        # from 1 to "steps". The value of "t" will be used to
        # access and store values inside the list "x".
        x[t_n] = x[t_n - 1] + f(x[t_n - 1])*dt
    # Our function returns the time axis "t" and the
    # estimated trajectory "x" over that interval.
    # To be able to plot graphs, we need to convert "x"
    # from a Python list to a Numpy array.
    return t, x
"""
Explanation: Numerical Integration
Derivative review
Suppose our system is described by a function $x(t)$. 
Our system is described by a function of time because it is a dynamical system, that is, it evolves from a previous state to a following state according to time. However, when we use the theory of dynamical systems as a tool, we rarely describe our systems through "explicit" functions. This is because it is very hard for us to know the global behaviour of the system before studying it. For example, if we are studying the growth of a colony of bacteria of the species E. coli, a priori we do not know how specific parameters of growth rate, metabolic efficiency, culture medium and so on affect that growth, especially when compared with those of other species, such as B. subtilis.
Therefore, we often describe our system through differential equations, that is, derivatives. Recall that a derivative can be seen as an "instantaneous velocity" or a rate of change. We may not know the global behaviour $x(t)$ of our system (for example, how our bacterial colony develops over all time), but it is much more practical for us as researchers to determine how the system behaves at the specific instant in which we can observe it. We can then try to describe how we expect the system to behave at each instant we observe it, and that is why we use derivatives.
Another way to interpret the derivative is as an approximation of a function. In figure 1 below, this approximation process is illustrated. In blue we have the function $x(t)$ that we want to approximate, which is nonlinear. In red we try to approximate the function $x(t)$ over the interval from $t_0$ to $t_1$ through its derivative ($\frac{dx}{dt}$), which is a linear function whose graph is a straight line with slope $\theta$. 
<img src="https://raw.githubusercontent.com/preigemufc/1.2016.1.notebooks/master/eulers_method.png" alt="Euler's method" style="width:500px;height:400px;">
Figure 1. Scheme for approximating a nonlinear function $x(t)$ through its derivative $\frac{dx}{dt}$, which in turn can be approximated by $\frac{\Delta x}{\Delta t}$. Note that, by geometry, the ratio $\frac{\Delta x}{\Delta t}$ is given by the tangent of the angle $\theta$ described by the straight line over the approximation interval.
Observe that the approximation is not perfect, since the red line does not give exactly the same values as the blue curve inside the interval $[t_0,t_1]$, except at the points $t_0$ and $t_1$ themselves, where the graphs agree perfectly. Recall that, by the definition of the derivative, to have a perfect approximation at the instant $t$ we must approximate the curve $x(t)$ within an interval $\Delta t = t_1 - t_0$ as small as possible, and we represent this process of shrinking the interval $t_1 - t_0$ through an operator called the limit.
$$ \frac{dx}{dt} = \lim_{\Delta t \rightarrow 0} \frac{\Delta x}{\Delta t} = \lim_{t_0 \rightarrow t_1} \frac{x(t_1) - x(t_0)}{t_1 - t_0} $$
Euler integration
With this knowledge in hand, we can use the derivative $\frac{dx}{dt}$ to approximate the function $x(t)$ at each point of the orbit of our system. Recall the concept of the orbit (or trajectory) of a dynamical system.
Definition. The orbit of a dynamical system is the set of points it visits at each time interval.
To observe the orbit of our system, we need to choose an initial value $x(t_0)$. Next, we compute the position of the system at the following instant by $x(t_1) = x(x(t_0))$. Then we repeat the procedure, $x(t_2) = x(x(t_1))$, and so on, up to the instant $t_n$. The position at instant $t_n$ is given by
\begin{equation} \begin{array}{c} \underbrace{x(x(...(x(t_0))))} \ n \textrm{ times} \end{array} = \begin{array}{c} \underbrace{x \circ x \circ ... 
\circ x(x(t_0))} \ n \textrm{ times} \end{array} = x^n(t_0) \end{equation}
We will use information about the derivative of $x(t)$ to estimate the value of $x(t_n)$ at each interval $t_n$, and thus approximate the global behaviour of our system starting from an initial condition $x(t_0)$.
We will approximate the derivative $\frac{dx}{dt}$ by $\tan(\theta_n)$ inside the interval $\Delta t_n = t_n - t_{n - 1}$. We can then look at the definition of the derivative above and see how to estimate the value of $x(t_n)$ at instant $t_n$ from $x(t_{n - 1})$:
$$ \frac{dx}{dt} \approx \tan(\theta_n) = \frac{\Delta x}{\Delta t_n} = \frac{x(t_n) - x(t_{n - 1})}{t_n - t_{n - 1}} $$
That is,
\begin{eqnarray} \tan(\theta_n) = \frac{\Delta x}{\Delta t_n} \ \Delta x = \Delta t_n \tan(\theta_n) \ x(t_n) - x(t_{n - 1}) = \Delta t_n \tan(\theta_n) \ x(t_n) = x(t_{n - 1}) + \Delta t_n \tan(\theta_n) \end{eqnarray}
And therefore,
Euler's numerical integration formula
$$ x(t_n) \approx x(t_{n - 1}) + \Delta t_n \frac{dx(t_{n-1})}{dt} $$
In this way we have a method to estimate the state $x(t)$ of our system at each instant $t$. If we have the derivative of our state function $\frac{dx}{dt} = f(x)$ as a function of the state variable $x(t)$ itself, we can substitute $f(x)$, a regular interval $\Delta t$ and an initial condition $x(t_0)$ into the integration formula and estimate $x(t_n)$ at each interval.
But how do we translate this into Python code? Below we write a line of code that directly translates the Euler integration formula into a Python statement.
x[t_n] = x[t_n - 1] + f(x[t_n - 1])*dt
Note. In the code we write $\Delta t$ as dt. Do not confuse this dt in the code with the $dt$ of the derivative of $x(t)$. 
The next step is to insert this line of code inside a loop that repeats the process, starting from an initial value, over a time interval set by us and with a fixed size dt for each step. For this, we need to determine the number of steps we will compute over the whole integration interval, so we must create variables to hold these values. Finally, it is most convenient for us to place all this code inside a Python function of our own, because then we can easily reuse it for any differential equation we intend to study.
End of explanation
"""
help(solveE)
"""
Explanation: Writing a description to document our functions right below their declaration lets us extract information from them using the help() command.
End of explanation
"""
def g_x(x, r = .5, k = 10):
    'Determines the growth rate of a population "x"\
    according to the logistic model.'
    
    return -r*(x**2)/k + r*x
"""
Explanation: Integrating systems of differential equations
Logistic growth
Which system of differential equations shall we use to test our function? We can use a population model, in this case, logistic growth. Suppose that we have a colony of bacteria whose growth can be estimated through the following differential equation:
\begin{eqnarray} \frac{dx}{dt} = g(x) = xr\left(1 - \frac{x}{K}\right) \ g(x) = -r\frac{x^2}{K} + rx \end{eqnarray}
Where
- r: per capita population growth rate
- K: carrying capacity of the environment (the maximum population that the resources of the environment can support)
Observe that when $x = K$ the environment can no longer support growth and the population stops growing, $\frac{dx}{dt} = 0$.
Let us write code to represent $g(x)$.
End of explanation
"""
g_x(10)
"""
Explanation: We set the parameters r and k as default arguments of our function g_x so that these values need not be given when we call the function. 
For example:
End of explanation
"""
g_x(10,.5,20)
"""
Explanation: But we can call our function g_x with up to 3 arguments, if we want to change the values of $r$ and $K$:
End of explanation
"""
x_0 = 1
time = 15
dt = .01

# since the function returns 2 results, we need to store
# them in 2 variables, or else in a list
t1,x1 = solveE(g_x,x_0,time,dt)
"""
Explanation: Now let us test our code for integrating differential equations on the logistic growth equation:
End of explanation
"""
# display graphs inside this document
%matplotlib inline

# import the pylab graphics package (Matplotlib)
import pylab as py

# plot the graph
py.plot(t1,x1)
"""
Explanation: The commands below plot a graph of our system.
End of explanation
"""
def v_xy(v, a = 1.5, b = 1, c = 3, d = 1):
    'Determines the evolution of a Lotka-Volterra type\
    predator-prey system at the instant "t". Returns a\
    vector "vn" with "vn[0]" being the prey and "vn[1]"\
    the predators.'
    
    x = v[0]
    y = v[1]
    
    dx = a*x - b*x*y
    dy = -c*y + d*x*y
    vn = np.array([dx,dy])
    
    return vn
"""
Explanation: The Lotka-Volterra predator-prey model
What happens when we have a system with more than one state variable? What happens, for example, when we have two populations evolving and interacting, instead of just one?
Well, to study this we need more than one differential equation. However, the methods we used for one equation extend to systems with any number of equations! We do not need to change our solveE function, and we can extend our analysis of the derivative of $x(t)$ to systems of equations involving several derivatives.
To demonstrate the function we wrote in Python for numerically integrating systems of differential equations, let us try to integrate a system with two differential equations. 
To model a biological system in which two populations interact ecologically through predation, Lotka and Volterra each independently developed the following model:
\begin{eqnarray} \frac{dx}{dt} = ax - bxy \ \frac{dy}{dt} = -cy + dxy \end{eqnarray}
Where
$x$: prey population
$y$: predator population
$a$: birth rate of the prey
$b$: rate of predation of $x$ by $y$
$c$: death rate of the predators
$d$: efficiency of conversion of prey biomass into predator biomass
Let us write a Python function that represents the Lotka-Volterra model. To do so, as the first argument of our function we have to pass a list v with 2 values, one for each state variable. In the body of the function we extract these values and give them the names x and y purely for clarity.
End of explanation
"""
[1,2]*3
"""
Explanation: Important remark. In the second-to-last line above we take the list vn that we want to return and convert it into a Numpy vector. We do this so that, during the integration step carried out by the function solveE, we can multiply this vector by the number dt. For example, if you take a Python list and try to multiply it by a number, the following happens:
End of explanation
"""
np.array([1,2])*3
"""
Explanation: We need to convert this list into a Numpy vector if we do not want our code to raise some strange error:
End of explanation
"""
v_0 = [10,1]
time = 20
dt = .01

t2,v2 = solveE(v_xy,v_0,time,dt)
"""
Explanation: Below we integrate the system using our function. Note that as the initial condition we pass a list with 2 values, one being the initial condition for the prey $x(t_0)$ and the other being the initial condition for the predators $y(t_0)$.
End of explanation
"""
py.plot(t2,np.array(v2).T[0])
py.plot(t2,np.array(v2).T[1])
"""
Explanation: Next, we plot the result. 
Because the function v_xy returns a list, the process of extracting the data gets a bit more complicated, and we will omit the explanation for this syntax.
End of explanation
"""
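A useful sanity check on Euler's method is that its global error shrinks roughly in proportion to the step size dt. The sketch below re-implements the integrator in a self-contained form and compares it against the known closed-form solution of the logistic equation, $x(t) = K / (1 + \frac{K - x_0}{x_0} e^{-rt})$, for two step sizes:

```python
import numpy as np

# Self-contained re-implementation of the Euler loop used by solveE.
def euler(f, x_t0, time, dt):
    steps = int(time / dt)
    x = [0] * steps
    x[0] = x_t0
    for n in range(1, steps):
        x[n] = x[n - 1] + f(x[n - 1]) * dt
    return np.arange(0, time, dt), np.array(x)

r, K, x0 = 0.5, 10.0, 1.0
g = lambda x: -r * (x ** 2) / K + r * x           # logistic model
exact = lambda t: K / (1 + (K - x0) / x0 * np.exp(-r * t))

t_coarse, x_coarse = euler(g, x0, 15, 0.1)
t_fine, x_fine = euler(g, x0, 15, 0.05)

err_coarse = np.max(np.abs(x_coarse - exact(t_coarse)))
err_fine = np.max(np.abs(x_fine - exact(t_fine)))
print(err_coarse, err_fine)  # halving dt roughly halves the error
```

This first-order convergence is exactly what the derivation above predicts: the local approximation error comes from replacing the curve by its tangent line over each interval of width dt.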
mne-tools/mne-tools.github.io
0.14/_downloads/plot_objects_from_arrays.ipynb
bsd-3-clause
# Author: Jaakko Leppakangas <jaeilepp@student.jyu.fi>
#
# License: BSD (3-clause)

import numpy as np

import neo

import mne

print(__doc__)
"""
Explanation: Creating MNE objects from data arrays
In this simple example, the creation of MNE objects from numpy arrays is demonstrated. In the last example case, a NEO file format is used as a source for the data.
End of explanation
"""
sfreq = 1000  # Sampling frequency
times = np.arange(0, 10, 0.001)  # Use 10000 samples (10s)

sin = np.sin(times * 10)  # Multiplied by 10 for shorter cycles
cos = np.cos(times * 10)
sinX2 = sin * 2
cosX2 = cos * 2

# Numpy array of size 4 X 10000.
data = np.array([sin, cos, sinX2, cosX2])

# Definition of channel types and names.
ch_types = ['mag', 'mag', 'grad', 'grad']
ch_names = ['sin', 'cos', 'sinX2', 'cosX2']
"""
Explanation: Create arbitrary data
End of explanation
"""
# It is also possible to use info from another raw object.
info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types)
"""
Explanation: Create an :class:`info <mne.Info>` object.
End of explanation
"""
raw = mne.io.RawArray(data, info)

# Scaling of the figure.
# For actual EEG/MEG data different scaling factors should be used.
scalings = {'mag': 2, 'grad': 2}

raw.plot(n_channels=4, scalings=scalings, title='Data from arrays',
         show=True, block=True)

# It is also possible to auto-compute scalings
scalings = 'auto'  # Could also pass a dictionary with some value == 'auto'
raw.plot(n_channels=4, scalings=scalings, title='Auto-scaled Data from arrays',
         show=True, block=True)
"""
Explanation: Create a dummy :class:`mne.io.RawArray` object
End of explanation
"""
event_id = 1  # This is used to identify the events.
# First column is for the sample number.
events = np.array([[200, 0, event_id],
                   [1200, 0, event_id],
                   [2000, 0, event_id]])  # List of three arbitrary events

# Here a data set of 700 ms epochs from 2 channels is
# created from sin and cos data.
# Any data in shape (n_epochs, n_channels, n_times) can be used. 
epochs_data = np.array([[sin[:700], cos[:700]], [sin[1000:1700], cos[1000:1700]], [sin[1800:2500], cos[1800:2500]]]) ch_names = ['sin', 'cos'] ch_types = ['mag', 'mag'] info = mne.create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types) epochs = mne.EpochsArray(epochs_data, info=info, events=events, event_id={'arbitrary': 1}) picks = mne.pick_types(info, meg=True, eeg=False, misc=False) epochs.plot(picks=picks, scalings='auto', show=True, block=True) """ Explanation: EpochsArray End of explanation """ nave = len(epochs_data) # Number of averaged epochs evoked_data = np.mean(epochs_data, axis=0) evokeds = mne.EvokedArray(evoked_data, info=info, tmin=-0.2, comment='Arbitrary', nave=nave) evokeds.plot(picks=picks, show=True, units={'mag': '-'}, titles={'mag': 'sin and cos averaged'}) """ Explanation: EvokedArray End of explanation """ # The events are spaced evenly every 1 second. duration = 1. # create a fixed size events array # start=0 and stop=None by default events = mne.make_fixed_length_events(raw, event_id, duration=duration) print(events) # for fixed size events no start time before and after event tmin = 0. tmax = 0.99 # inclusive tmax, 1 second epochs # create :class:`Epochs <mne.Epochs>` object epochs = mne.Epochs(raw, events=events, event_id=event_id, tmin=tmin, tmax=tmax, baseline=None, verbose=True) epochs.plot(scalings='auto', block=True) """ Explanation: Create epochs by windowing the raw data. End of explanation """ duration = 0.5 events = mne.make_fixed_length_events(raw, event_id, duration=duration) print(events) epochs = mne.Epochs(raw, events=events, tmin=tmin, tmax=tmax, baseline=None, verbose=True) epochs.plot(scalings='auto', block=True) """ Explanation: Create overlapping epochs using :func:mne.make_fixed_length_events (50 % overlap). This also roughly doubles the amount of events compared to the previous event list. End of explanation """ # The example here uses the ExampleIO object for creating fake data. 
# For actual data and different file formats, consult the NEO documentation. reader = neo.io.ExampleIO('fakedata.nof') bl = reader.read(cascade=True, lazy=False)[0] # Get data from first (and only) segment seg = bl.segments[0] title = seg.file_origin ch_names = list() data = list() for ai, asig in enumerate(seg.analogsignals): # Since the data does not contain channel names, channel indices are used. ch_names.append('Neo %02d' % (ai + 1,)) # We need the ravel() here because Neo < 0.5 gave 1D, Neo 0.5 gives # 2D (but still a single channel). data.append(asig.rescale('V').magnitude.ravel()) data = np.array(data, float) sfreq = int(seg.analogsignals[0].sampling_rate.magnitude) # By default, the channel types are assumed to be 'misc'. info = mne.create_info(ch_names=ch_names, sfreq=sfreq) raw = mne.io.RawArray(data, info) raw.plot(n_channels=4, scalings={'misc': 1}, title='Data from NEO', show=True, block=True, clipping='clamp') """ Explanation: Extracting data from NEO file End of explanation """
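The fixed-length epoching used above can be illustrated with plain numpy. This is a sketch of the windowing idea only, not MNE's actual implementation: pick evenly spaced start samples (what `mne.make_fixed_length_events` computes), then slice the signal into equal windows.

```python
import numpy as np

sfreq = 1000
data = np.sin(np.arange(0, 10, 1.0 / sfreq) * 10)  # one channel, 10 s

duration = 1.0
step = int(duration * sfreq)

# Non-overlapping windows: one event every `duration` seconds.
starts = np.arange(0, data.size - step + 1, step)
epochs = np.stack([data[s:s + step] for s in starts])
print(epochs.shape)  # (10, 1000)

# 50 % overlap roughly doubles the number of events, as in the example.
starts_50 = np.arange(0, data.size - step + 1, step // 2)
print(starts_50.size)  # 19
```

The same slicing generalises to multi-channel data by keeping a leading channel axis, which gives the (n_epochs, n_channels, n_times) shape expected by EpochsArray.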
martinjrobins/hobo
examples/optimisation/transformed-parameters.ipynb
bsd-3-clause
import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy as toy

# Set some random seed so this notebook can be reproduced
np.random.seed(10)

# Load a logistic forward model
model = toy.LogisticModel()
"""
Explanation: Optimisation in a transformed parameter space
This example shows you how to run an optimisation in a transformed parameter space, using a pints.Transformation object.
Parameter transformations can often significantly improve the performance and robustness of an optimisation (see e.g. [1]). In addition, some methods have requirements (e.g. that all parameters are unconstrained, or that all parameters have similar magnitudes) that prevent them from being used on certain models in their untransformed form.
[1] Whittaker, DG, Clerx, M, Lei, CL, Christini, DJ, Mirams, GR. Calibration of ionic and cellular cardiac electrophysiology models. WIREs Syst Biol Med. 2020; 12:e1482. https://doi.org/10.1002/wsbm.1482
We start by loading a pints.ForwardModel implementation, in this case a logistic model.
End of explanation
"""
# Create some toy data
real_parameters = [0.015, 400]  # [r, K]
times = np.linspace(0, 1000, 1000)
values = model.simulate(real_parameters, times)

# Add noise
values += np.random.normal(0, 10, values.shape)

# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)

# Select a score function
score = pints.SumOfSquaresError(problem)
"""
Explanation: We then define some parameters and set up the problem for the optimisation. The parameter vector for the toy logistic model is $\theta_\text{original} = [r, K]$, where $r$ is the growth rate and $K$ is called the carrying capacity. 
End of explanation
"""
x0 = [0.5, 0.1]  # [r, K]
sigma0 = [0.01, 2.0]
"""
Explanation: In this example, we will pick some difficult starting points for the optimisation:
End of explanation
"""
found_parameters, found_value = pints.optimise(
    score, x0, sigma0,
    method=pints.NelderMead,
    transformation=None,
)

# Show score of true solution
print('Score at true solution: ')
print(score(real_parameters))

# Compare parameters with original
print('Found solution:          True parameters:' )
for k, x in enumerate(found_parameters):
    print(pints.strfloat(x) + '    ' + pints.strfloat(real_parameters[k]))

# Show quality of fit
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values, alpha=0.25, label='Noisy data')
plt.plot(times, problem.evaluate(found_parameters), label='Fit without transformation')
plt.legend()
plt.show()
"""
Explanation: Now we run a Nelder-Mead optimisation without doing any parameter transformation to check its performance.
End of explanation
"""
# No transformation: [r] -> [r]
transform_r = pints.IdentityTransformation(n_parameters=1)

# Log-transformation: [K] -> [log(K)]
transform_K = pints.LogTransformation(n_parameters=1)

# The full transformation: [r, K] -> [r, log(K)]
transformation = pints.ComposedTransformation(transform_r, transform_K)
"""
Explanation: As we can see, the optimiser made some initial improvements, but then got stuck somewhere in $[r, K]$ space, and failed to converge to the true parameters. We can improve its performance by defining a parameter transformation so that it searches in $\theta = [r, \log(K)]$ space instead. To do this, we'll create a pints.Transformation object, that leaves $r$ alone, but applies a log-transformation to $K$. 
This is implemented by defining an IdentityTransformation for $r$, a LogTransformation for $K$, and then creating a ComposedTransformation for the full parameter vector $\theta$:
End of explanation
"""
found_parameters_trans, found_value_trans = pints.optimise(
    score, x0, sigma0,
    method=pints.NelderMead,
    transformation=transformation,  # Pass the transformation to the optimiser
)

# Show score of true solution
print('Score at true solution: ')
print(score(real_parameters))

# Compare parameters with original
print('Found solution:          True parameters:' )
for k, x in enumerate(found_parameters_trans):
    print(pints.strfloat(x) + '    ' + pints.strfloat(real_parameters[k]))

# Show quality of fit
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values, alpha=0.25, label='Noisy data')
plt.plot(times, problem.evaluate(found_parameters), label='Fit without transformation')
plt.plot(times, problem.evaluate(found_parameters_trans), label='Fit with transformation')
plt.legend()
plt.show()
"""
Explanation: The resulting Transformation object can be passed to the optimise method, as shown below, but can also be used in combination with Controller classes such as the pints.OptimisationController or pints.MCMCController.
End of explanation
"""
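What the composed [r, K] → [r, log(K)] transformation does to the search space can be sketched in plain numpy. The function names below are illustrative, not part of the pints API — the point is that the two parameters end up on comparable scales, and that the mapping round-trips exactly so results found in search space can be mapped back to model parameters:

```python
import numpy as np

def to_search(p):
    # model space [r, K] -> search space [r, log(K)]
    r, K = p
    return np.array([r, np.log(K)])

def to_model(q):
    # search space [r, log(K)] -> model space [r, K]
    r, log_K = q
    return np.array([r, np.exp(log_K)])

real_parameters = np.array([0.015, 400.0])
q = to_search(real_parameters)
print(q)  # second entry is log(400), about 5.99: magnitudes now comparable
```

Starting search steps of similar size in each coordinate (like sigma0 above) then make sense for both parameters at once, which is one reason the transformed optimisation converges where the raw one got stuck.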
uliang/First-steps-with-the-Python-language
Day 1 - Unit 1.1.ipynb
mit
# change this cell into a Markdown cell. Then type something here and execute it (Shift-Enter)
"""
Explanation: 1. Your first steps with Python
1.1 Introduction
Python is a general purpose programming language. It is used extensively for scientific computing, data analytics and visualization, web development and software development. It has a wide user base and excellent library support.
There are many ways to use and interact with the Python language. The first way is to access it directly from the command prompt by calling python <script>.py. This runs a script written in Python and does whatever you have programmed the computer to do.
But scripts have to be written, so how do we actually write Python scripts? Actually, Python scripts are just plain text files. So you could just open a .txt file and write a script, saving the file with a .py extension. The downsides of this approach are obvious to anyone working with Windows. Usually, Python source code is written - not with Microsoft Word - but with an Integrated Development Environment. An IDE combines a text editor with a running Python console to test code and actually do work with Python without switching from one program to another. If you learnt the C or C++ languages, you will be familiar with Vim. Other popular IDE's for Python are PyCharm, Spyder and the Jupyter Notebook.
In this course, we will use the Jupyter Notebook as our IDE because of its ease of use and its ability to execute code cell by cell. It integrates with markdown so that you can annotate and document your code on the fly! All in all, it is an excellent tool for teaching and learning Python before one migrates to more advanced tools like Spyder for serious scripting and development work.
1.2 Your best friends.
In order to get the most from Python, your best source of reference is the Python documentation. Getting good at Python is a matter of using it regularly and familiarizing yourself with the keywords, constructs and commonly used idioms. 
Learn to use Shift-Tab when coding. This activates a hovering tooltip that provides documentation for keywords, functions and even variables that you have declared in your environment. This convenient tooltip can be expanded into a pop-up window on your browser for easy reference. Use this often to reference function signatures, documentation and general help.
Jupyter notebook comes with Tab completion. This quality-of-life feature assists you in typing code by listing possible autocompletion options so that you don't have to type everything out! Use Tab completion as often as you can. This makes coding faster and less tedious. Tab completion also allows you to check out various methods on classes, which comes in handy when learning a library for the first time (like matplotlib or seaborn).
Finally, ask Google. Once you have acquired enough "vocabulary", you can begin to query Google with your problem. And more often than not, someone has experienced the same conundrum and left a message on Stackexchange. Browsing the solutions listed there is a powerful way to learn programming skills.
1.3 The learning objectives for this unit
The learning objectives of this first unit are:
Getting around the Jupyter notebook.
Learning how to print("Hello world!")
Using and coding with basic Python objects: int, str, float and bool.
Using the type function.
What are variables and valid variable names.
Using the list object and list methods.
Learning how to access items in a list. Slicing and indexing.
2. Getting around the Jupyter notebook
2.1 Cells and colors, just remember, green is for go
All code is written in cells. Cells are where code blocks go. You execute a cell by pressing Shift-Enter or pressing the "play" button. Or you could just click on the drop down menu and select "Run cell", but who would want to do that!
In general, cells have two uses: one for writing "live" Python code which can be executed, and one more for writing documentation using markdown. 
To toggle between the two cell types, press Escape to exit from "edit" mode. The edges of the cell should turn blue. Now you are in "command" mode. Escape actually activates "command" mode. Enter activates "edit" mode.
With the cell border coloured blue, press M to enter into markdown mode. You should see the In [ ]: prompt disappear. Press Enter to change the border to green. This means you can now "edit" markdown.
How does one change from markdown to a live coding cell? In "command" mode (remember, blue border) press Y. Now the cell is "hot". When you Shift-Enter, you will execute code. If you happen to write markdown when in a "coding" cell, the Python kernel will shout at you (meaning it raises an error message).
2.1.1 Practice makes perfect
Now it's time for you to try. In the cell below, try switching to Markdown. Press Enter to activate "edit" mode and type some text in the cell. Press Shift-Enter and you should see the output rendered as HTML. Note that this is not coding yet
End of explanation
Remember to enclose your name in " " or ' '
End of explanation
"""
# print your name in this cell.

"""
Explanation: 2.3 Commenting
Commenting is a way to annotate and document code. There are two ways to do this: inline, using the # character, or by using ''' <documentation block> ''', the latter being multi-line and hence used mainly for documenting functions or classes. Comments enclosed using ''' ''' style commenting are actually registered in Jupyter notebook and can be accessed from the Shift-Tab tooltip!
One should use # style commenting very sparingly. By right, code should be clear enough that # inline comments are not needed. However, # has a very important function. It is used for debugging and trouble-shooting. This is because commented code sections are never executed when you execute a cell (Shift-Enter)
3. Python's building blocks
Python is an Object Oriented Programming language. That means that all of Python is made out of objects, which are instances of classes. The main point here is that I am going to introduce 4 basic objects of Python which form the backbone of any program or script.
Integers or int.
Strings or str. You've met one of these: "Hello world!". For those who know about character encoding, it is highly encouraged to code Python with UTF-8 encoding.
Float or float. Basically the computer version of real numbers.
Booleans or bool. In Python, true and false are indicated by the reserved keywords True and False. Take note of the capitalized first letter.
3.1 Numbers
You can't call yourself a scientific computing language without the ability to deal with numbers. The basic arithmetic operations for numbers are exactly as you expect them to be
End of explanation
"""
# Addition
5+3

# Subtraction
8-9

# Multiplication
3*12

# Division
48/12
"""
Explanation: Note the floating point answer. In previous versions of Python, / meant floor division. 
This is no longer the case in Python 3
End of explanation
"""
# Exponentiation. Limited precision though!
16**0.5

# Residue class modulo n
5%2
"""
Explanation: In the above, 5%2 means return the remainder after 5 is divided by 2 (which is indeed 1).
3.1.1 Precedence
A note on arithmetic precedence. As one expects, () have the highest precedence, followed by * and /. Addition and subtraction have the lowest precedence.
End of explanation
"""
# Guess the output before executing this cell. Come on, don't cheat!
6%(1+3)
"""
Explanation: It is interesting to note that the % operator is not distributive.
3.1.2 Variables
In general, one does not have to declare variables in Python before using them. We merely need to assign numbers to variables. In the computer, this means that a certain place in memory has been allocated to store that particular number.
Assignment to variables is executed by the = operator. Equality comparison, on the other hand, is done by the binary == operator.
Python is case sensitive. So a variable name A is different from a. Variables cannot begin with numbers and cannot have empty spaces between them. So my variable is not a valid variable name. Usually what is done is to write my_variable
After assigning numbers to variables, the variable can be used to represent the number in any arithmetic operation.
End of explanation
"""
# Assignment
x=1
y=2

x+y

x/y
"""
Explanation: Notice that after assignment, I can access the variables in a different cell. However, if you reassign a variable to a different number, the old values for that variable are overwritten.
End of explanation
"""
x=5

x+y-2
"""
Explanation: Now try clicking back to the cell x+y and re-executing it. What do you think the answer will be? Even though that cell was above our reassignment cell, nevertheless re-executing that cell means executing that block of code with the latest values for that variable. 
In order to help us keep track of the order of execution, each cell has a counter next to it. Notice the In [n]. Higher values of n indicate more recent executions.
Variables can also be reassigned
End of explanation
"""
# For example
x = x+1
print(x)
"""
Explanation: So what happened here? Well, if we recall, x was originally assigned 5. Therefore x+1 would give us 6. This value is then reassigned to the exact same location in memory represented by the variable x. So now that piece of memory contains the value 6. We then use the print function to display the content of x.
As this is an often-used pattern, Python has a convenience syntax for this kind of assignment
End of explanation
"""
# reset x to 5
x=5

x += 1
print(x)

x = 5
#What do you think the values of x will be for x -= 1, x *= 2 or x /= 2?
# Test it out in the space below

print(x)
"""
Explanation: 3.1.3 Floating point precision
All of the above applies equally to floating point numbers (or real numbers). However, we must be mindful of floating point precision.
End of explanation
"""
0.1+0.2
"""
Explanation: The following excerpt from the Python documentation explains what is happening quite clearly.
To be fair, even our decimal system is inadequate to represent rational numbers like 1/3, 1/11 and so on.
3.2 Strings
Strings are basically text. These are enclosed in ' ' or " ". The reason for having two ways of denoting strings is that we may need to nest a string within a string, as in 'The quick brown fox "jumped" over the lazy old dog'. This is especially useful when setting up database queries and the like.
End of explanation
"""
# Noting the difference between printing quoted variables (strings) and printing the variable itself.
x = 5
print(x)
print('x')
"""
Explanation: In the second print function, the text 'x' is printed, while in the first print function it is the contents of x which is printed to the console.
3.2.1 String formatting
Strings can be assigned to variables just like numbers. And these can be recalled in a print function.
End of explanation
"""
my_name = 'Tang U-Liang'
print(my_name)

# String formatting: Using the %
age = 35
print('Hello doctor, my name is %s. I am %d years old. I weigh %.1f kg' % (my_name, age, 70.25))

# or using .format method
print("Hi, I'm {name}. 
Please register {name} for this conference".format(name=my_name)) """ Explanation: In the second print function, the text 'x' is printed, while in the first print function, it is the contents of x which is printed to the console. 3.2.1 String formatting Strings can be assigned to variables just like numbers. And these can be recalled in a print function. End of explanation """ fruit = 'Apple' drink = 'juice' print(fruit+drink) # concatenation #Don't like the lack of spacing between words? print(fruit+' '+drink) """ Explanation: When using % to indicate string substitution, take note of the common formatting "placeholders": %s to substitute strings. %d for printing integer substitutions %.1f means to print a floating point number up to 1 decimal place. Note that the value is rounded, not simply truncated, to fit the stated precision. The utility of the .format method arises when the same string needs to be printed in various places in a larger body of text. This avoids duplicating code. Also, did you notice I used double quotation marks? Why? More about string formats can be found in this excellent blog post 3.2.2 Weaving strings into one beautiful tapestry of text Besides the .format and % operations on text, we can concatenate strings using the + operator. However, strings cannot be changed once declared and assigned to variables. This property is called immutability End of explanation """ print(fruit[0]) print(fruit[1]) """ Explanation: Use [] to access specific letters in the string. Python uses 0 indexing. So the first letter is accessed by my_string[0] while my_string[1] accesses the second letter. End of explanation """ favourite_drink = fruit+' '+drink print("Printing the first to 3rd letter.") print(favourite_drink[0:3]) print("\nNow I want to print the second to seventh letter:") print(favourite_drink[1:7]) """ Explanation: Slicing is a way of getting specific subsets of the string. If you let $x_n$ denote the $n+1$-th letter (note zero indexing) in a string (and by letter this includes whitespace characters as well!) 
then writing my_string[i:j] returns a subset $$x_i, x_{i+1}, \ldots, x_{j-1}$$ of letters in a string. That means the slice [i:j] takes all subsets of letters starting from index i and stops one index before the index indicated by j. The 0 indexing and stopping point convention frequently trips up first-time users. So take special note of this convention. 0 indexing is used throughout Python, especially in matplotlib and pandas. End of explanation """ print(favourite_drink[0:7:2]) # Here's a trick, try this out print(favourite_drink[3:0:-1]) """ Explanation: Notice the use of \n in the second print function. This is called a newline character which does exactly what its name says. Also in the third print function notice the separation between e and j. It is actually not separated. The sixth letter is a whitespace character ' '. Slicing also utilizes arithmetic progressions to return even more specific subsets of strings. So [i:j:k] means that the slice will return $$ x_{i}, x_{i+k}, x_{i+2k}, \ldots, x_{i+mk}$$ where $m$ is the largest (resp. smallest) integer such that $i+mk \leq j-1$ (resp. $i+mk \geq j+1$ if $i\geq j$) End of explanation """ # Write your answer here and check it with the output below """ Explanation: So what happened above? Well [3:0:-1] means that starting from the 4-th letter $x_3$, which is 'l', it returns a substring including $x_{2}, x_{1}$ as well. Note that the progression does not include $x_0 =$ 'A' because the stopping point is non-inclusive of j. The slice [:j] or [i:] means take substrings starting from the beginning up to the $j$-th letter (i.e. the $x_{j-1}$ letter) and substring starting from the $i+1$-th (i.e. the $x_{i}$) letter to the end of the string. 3.2.3 A mini challenge Print the string favourite_drink in reverse order. How would you do it? End of explanation """ x = 5.0 type(x) type(favourite_drink) type(True) type(500) """ Explanation: Answer: eciuj elppA 3.3 The type function All objects in python are instances of classes. 
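For reference, one way to produce the reversed answer shown above is a slice over the whole string with a step of -1 (a sketch only — the notebook deliberately leaves the answer cell blank):

```python
favourite_drink = 'Apple juice'
# Omitting both endpoints and stepping by -1 walks the whole string backwards.
print(favourite_drink[::-1])  # eciuj elppA
```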
It is sometimes useful to find out what type of object we are looking at, especially if it has been assigned to a variable. For this we use the type function. End of explanation """ # Here's a list called staff containing his name, his age and current remuneration staff = ['Andy', 28, 980.15] """ Explanation: 4. list, here's where the magic begins Lists are the fundamental data structure in Python. These are analogous to arrays in C or Java. If you use R, lists are analogous to vectors (and not R list) Declaring a list is as simple as using square brackets [ ] to enclose a list of objects (or variables) separated by commas. End of explanation """ len(staff) """ Explanation: 4.1 Properties of list objects and indexing One of the fundamental properties we can ask about lists is how many objects they contain. We use the len (short for length) function to do that. End of explanation """ staff[0] """ Explanation: Perhaps you want to recover that staff's name. It's in the first position of the list. End of explanation """ # type your answer here and run the cell """ Explanation: Notice that Python still outputs to console even though we did not use the print function. Actually the print function prints a particularly "nice" string representation of the object, which is why Andy is printed without the quotation marks if print was used. Can you find me Andy's age now? End of explanation """ staff[1:3] """ Explanation: The same slicing rules for strings apply to lists as well. If we wanted Andy's age and wage, we would type staff[1:3] End of explanation """ nested_list = ['apples', 'banana', [1.50, 0.40]] """ Explanation: This returns us a sub-list containing Andy's age and remuneration. 4.2 Nested lists Lists can also contain other lists. This ability to have a nested structure in lists gives them flexibility. End of explanation """ # Accessing items from within a nested list structure. print(nested_list[2]) # Assigning nested_list[2] to a variable. 
The variable price represents a list price = nested_list[2] print(type(price)) # Getting the smaller of the two floats print(nested_list[2][1]) """ Explanation: Notice that if I type nested_list[2], Python will return me the list [1.50, .40]. This can be accessed again using indexing (or slicing notation) [ ]. End of explanation """ # append staff.append('Finance') print(staff) # pop away the information about his salary andys_salary = staff.pop(2) print(andys_salary) print(staff) # oops, made a mistake, I want to reinsert information about his salary staff.insert(3, andys_salary) print(staff) contacts = [99993535, "andy@company.com"] staff = staff+contacts # reassignment of the concatenated list back to staff print(staff) """ Explanation: 4.3 List methods Right now, let us look at four very useful list methods. Methods are basically operations which modify lists. These are: pop which allows us to remove an item in a list. So for example if $x_0, x_1, \ldots, x_n$ are items in a list, calling my_list.pop(r) will modify the list so that it contains only $$x_0, \ldots, x_{r-1}, x_{r+1},\ldots, x_n$$ while returning the element $x_r$. append which adds items to the end of the list. Let's say $x_{n+1}$ is the new object you wish to append to the end of the list. Calling the method my_list.append(x_n+1) will modify the list inplace so that the list will now contain $$x_0, \ldots, x_n, x_{n+1}$$ Note that append does not return any output! insert which as the name suggests, allows us to add items to a list in a particular index location When using this, type my_list.insert(r, x_{n+1}) with the second argument to the method the object you wish to insert and r the position (still 0 indexed) where this object ought to go in that list. This method modifies the list inplace and does not return any output. 
After calling the insert method, the list now contains $$x_0,\ldots, x_{r-1}, x_{n+1}, x_{r}, \ldots, x_n$$ This means that my_list[r] = $x_{n+1}$ while my_list[r+1] = $x_{r}$ + is used to concatenate two lists. If you have two lists and want to join them together producing a union of two (or more lists), use this binary operator. This works by returning a union of two lists. So $$[ x_1,\ldots, x_n] + [y_1,\ldots, y_m]$$ is the list containing $$ x_1,\ldots, x_n,y_1, \ldots, y_m$$ This change is not permanent unless you assign the result of the operation to another variable. End of explanation """ staff = ['Andy', 28, 'Finance', 980.15, 99993535, 'andy@company.com'] staff # type your answer here print(staff) """ Explanation: 4.3.1 Your first programming challenge Move information for Andy's email to the second position (i.e. index 1) in the list staff in one line of code End of explanation """
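One possible one-line answer to the challenge above (a sketch — the notebook leaves the solution to the reader): since pop returns the removed item, it can be fed straight into insert.

```python
staff = ['Andy', 28, 'Finance', 980.15, 99993535, 'andy@company.com']

# Remove the email from position 5 and re-insert it at index 1, in one line.
staff.insert(1, staff.pop(5))
print(staff)  # ['Andy', 'andy@company.com', 28, 'Finance', 980.15, 99993535]
```

Because pop runs first, the list momentarily shrinks before insert puts the item back in its new position.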
tensorflow/docs-l10n
site/zh-cn/tutorials/load_data/text.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2018 The TensorFlow Authors. End of explanation """ import tensorflow as tf import tensorflow_datasets as tfds import os """ Explanation: 使用 tf.data 加载文本数据 <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/text"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />在 TensorFlow.org 上查看</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />在 Google Colab 上运行</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />查看 GitHub 上的资源</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />下载 notebook</a> </td> </table> Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的 官方英文文档。如果您有改进此翻译的建议, 请提交 pull request 到 tensorflow/docs GitHub 仓库。要志愿地撰写或者审核译文,请加入 docs-zh-cn@tensorflow.org Google Group。 本教程为你提供了一个如何使用 tf.data.TextLineDataset 来加载文本文件的示例。TextLineDataset 通常被用来以文本文件构建数据集(原文件中的一行为一个样本) 。这适用于大多数的基于行的文本数据(例如,诗歌或错误日志) 
。下面我们将使用相同作品(荷马的伊利亚特)三个不同版本的英文翻译,然后训练一个模型来通过单行文本确定译者。 环境搭建 End of explanation """ DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/' FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt'] for name in FILE_NAMES: text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL+name) parent_dir = os.path.dirname(text_dir) parent_dir """ Explanation: 三个版本的翻译分别来自于: William Cowper — text Edward, Earl of Derby — text Samuel Butler — text 本教程中使用的文本文件已经进行过一些典型的预处理,主要包括删除了文档页眉和页脚,行号,章节标题。请下载这些已经被局部改动过的文件。 End of explanation """ def labeler(example, index): return example, tf.cast(index, tf.int64) labeled_data_sets = [] for i, file_name in enumerate(FILE_NAMES): lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name)) labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i)) labeled_data_sets.append(labeled_dataset) """ Explanation: 将文本加载到数据集中 迭代整个文件,将整个文件加载到自己的数据集中。 每个样本都需要单独标记,所以请使用 tf.data.Dataset.map 来为每个样本设定标签。这将迭代数据集中的每一个样本并且返回( example, label )对。 End of explanation """ BUFFER_SIZE = 50000 BATCH_SIZE = 64 TAKE_SIZE = 5000 all_labeled_data = labeled_data_sets[0] for labeled_dataset in labeled_data_sets[1:]: all_labeled_data = all_labeled_data.concatenate(labeled_dataset) all_labeled_data = all_labeled_data.shuffle( BUFFER_SIZE, reshuffle_each_iteration=False) """ Explanation: 将这些标记的数据集合并到一个数据集中,然后对其进行随机化操作。 End of explanation """ for ex in all_labeled_data.take(5): print(ex) """ Explanation: 你可以使用 tf.data.Dataset.take 与 print 来查看 (example, label) 对的外观。numpy 属性显示每个 Tensor 的值。 End of explanation """ tokenizer = tfds.features.text.Tokenizer() vocabulary_set = set() for text_tensor, _ in all_labeled_data: some_tokens = tokenizer.tokenize(text_tensor.numpy()) vocabulary_set.update(some_tokens) vocab_size = len(vocabulary_set) vocab_size """ Explanation: 将文本编码成数字 机器学习基于的是数字而非文本,所以字符串需要被转化成数字列表。 为了达到此目的,我们需要构建文本与整数的一一映射。 建立词汇表 首先,通过将文本标记为单独的单词集合来构建词汇表。在 TensorFlow 和 Python 中均有很多方法来达成这一目的。在本教程中: 迭代每个样本的 numpy 值。 使用 
tfds.features.text.Tokenizer 来将其分割成 token。 将这些 token 放入一个 Python 集合中,借此来清除重复项。 获取该词汇表的大小以便于以后使用。 End of explanation """ encoder = tfds.features.text.TokenTextEncoder(vocabulary_set) """ Explanation: 样本编码 通过传递 vocabulary_set 到 tfds.features.text.TokenTextEncoder 来构建一个编码器。编码器的 encode 方法传入一行文本,返回一个整数列表。 End of explanation """ example_text = next(iter(all_labeled_data))[0].numpy() print(example_text) encoded_example = encoder.encode(example_text) print(encoded_example) """ Explanation: 你可以尝试运行这一行代码并查看输出的样式。 End of explanation """ def encode(text_tensor, label): encoded_text = encoder.encode(text_tensor.numpy()) return encoded_text, label def encode_map_fn(text, label): # py_func doesn't set the shape of the returned tensors. encoded_text, label = tf.py_function(encode, inp=[text, label], Tout=(tf.int64, tf.int64)) # `tf.data.Datasets` work best if all components have a shape set # so set the shapes manually: encoded_text.set_shape([None]) label.set_shape([]) return encoded_text, label all_encoded_data = all_labeled_data.map(encode_map_fn) """ Explanation: 现在,在数据集上运行编码器(通过将编码器打包到 tf.py_function 并且传参至数据集的 map 方法的方式来运行)。 End of explanation """ train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE) train_data = train_data.padded_batch(BATCH_SIZE) test_data = all_encoded_data.take(TAKE_SIZE) test_data = test_data.padded_batch(BATCH_SIZE) """ Explanation: 将数据集分割为测试集和训练集且进行分支 使用 tf.data.Dataset.take 和 tf.data.Dataset.skip 来建立一个小一些的测试数据集和稍大一些的训练数据集。 在数据集被传入模型之前,数据集需要被分批。最典型的是,每个分支中的样本大小与格式需要一致。但是数据集中样本并不全是相同大小的(每行文本字数并不相同)。因此,使用 tf.data.Dataset.padded_batch(而不是 batch )将样本填充到相同的大小。 End of explanation """ sample_text, sample_labels = next(iter(test_data)) sample_text[0], sample_labels[0] """ Explanation: 现在,test_data 和 train_data 不是( example, label )对的集合,而是批次的集合。每个批次都是一对(多样本, 多标签 ),表示为数组。 End of explanation """ vocab_size += 1 """ Explanation: 由于我们引入了一个新的 token 来编码(填充零),因此词汇表大小增加了一个。 End of explanation """ model = tf.keras.Sequential() """ Explanation: 建立模型 End of 
explanation """ model.add(tf.keras.layers.Embedding(vocab_size, 64)) """ Explanation: 第一层将整数表示转换为密集矢量嵌入。更多内容请查阅 Word Embeddings 教程。 End of explanation """ model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))) """ Explanation: 下一层是 LSTM 层,它允许模型利用上下文中理解单词含义。 LSTM 上的双向包装器有助于模型理解当前数据点与其之前和之后的数据点的关系。 End of explanation """ # 一个或多个紧密连接的层 # 编辑 `for` 行的列表去检测层的大小 for units in [64, 64]: model.add(tf.keras.layers.Dense(units, activation='relu')) # 输出层。第一个参数是标签个数。 model.add(tf.keras.layers.Dense(3, activation='softmax')) """ Explanation: 最后,我们将获得一个或多个紧密连接的层,其中最后一层是输出层。输出层输出样本属于各个标签的概率,最后具有最高概率的分类标签即为最终预测结果。 End of explanation """ model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) """ Explanation: 最后,编译这个模型。对于一个 softmax 分类模型来说,通常使用 sparse_categorical_crossentropy 作为其损失函数。你可以尝试其他的优化器,但是 adam 是最常用的。 End of explanation """ model.fit(train_data, epochs=3, validation_data=test_data) eval_loss, eval_acc = model.evaluate(test_data) print('\nEval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc)) """ Explanation: 训练模型 利用提供的数据训练出的模型有着不错的精度(大约 83% )。 End of explanation """
AnyBody-Research-Group/AnyPyTools
docs/Tutorial/02_Generating_macros.ipynb
mit
from anypytools.macro_commands import (MacroCommand, Load, SetValue, SetValue_random, Dump, SaveDesign, LoadDesign, SaveValues, LoadValues, UpdateValues, OperationRun) """ Explanation: Creating AnyScript Macros AnyPyTools can create AnyScript macros automatically. Doing so simplifies the process of writing complex macros and makes it easier to do things like parameter studies, Monte Carlo simulation, etc. There is a class for every macro command: End of explanation """ macrolist = [ Load('Knee.any', defs={'SUBJECT':'"S02"', 'TRIAL':'"T04"'}), OperationRun('Main.MyStudy.InverseDynamics'), Dump('Main.MyStudy.Output.MaxMuscleActivity'), ] macrolist """ Explanation: A quick example The following shows how to generate a simple macro. End of explanation """ from anypytools import AnyPyProcess app = AnyPyProcess() app.start_macro(macrolist); """ Explanation: Each macro object will generate the macro commands with the correct syntax. The macro can be launched using the start_macro() method of the AnyPyProcess object. End of explanation """ from anypytools import AnyMacro macrolist = [ Load('Knee.any' ), OperationRun('Main.MyStudy.InverseDynamics'), ] mg = AnyMacro(macrolist) mg """ Explanation: Overview of macro commands The macro_commands module has classes for generating many of the standard AnyScript macro commands. Load(mainfile, defines, paths): load command OperationRun(var): select operation and run Dump(var): classoperation "Dump" LoadDesign(var, filename): classoperation "Load design" SaveDesign(var, filename): classoperation "Save design" LoadValues(filename): classoperation Main "Load Values" SaveValues(filename): classoperation Main "Save Values" UpdateValues(): classoperation "Update Values" SetValue(var,value): classoperation "Set Value" MacroCommand(macro_string): Add arbitrary macro string Creating many macros The macro in the previous example would have been easy to write manually. However, in some cases we want to create many macros. 
Then it is a big advantage to generate them programmatically. To generate many macros we need an extra class AnyMacro to wrap our macro list. End of explanation """ mg = AnyMacro(macrolist, number_of_macros = 5) mg """ Explanation: By default AnyMacro just behaves as a container for our macro. But it has additional attributes that specify how many macros we want. End of explanation """ mg.create_macros(2) """ Explanation: This can also be overridden when calling its create_macros() function End of explanation """ from anypytools import AnyPyProcess app = AnyPyProcess() output = app.start_macro(mg.create_macros(100)) """ Explanation: This list of macros can also be passed to the 'start_macro' function to be executed in parallel. End of explanation """ parameter_list = [2.2, 2.5, 2.7, 2.9, 3.1] mg = AnyMacro(SetValue('Main.MyParameter', parameter_list )) mg.create_macros(5) """ Explanation: Running many macros is only really useful if the macros are different. Some macro classes, like SetValue(), accept lists of values, which they distribute across the generated macros. Imagine a list of 5 parameters. We want to create five macros that use these values: End of explanation """ patella_tendon_lengths = [ 0.02 + i*0.01 for i in range(7) ] print(patella_tendon_lengths) """ Explanation: A simple parameter study Let us combine the previous steps to create a parameter study. We will continue with the simplified knee model where we left off in the previous tutorial. The parameter study will vary the patella tendon length from 2.0cm to 8.0cm, and observe the effect on maximum muscle activity. First we create a list of patella tendon lengths. 
End of explanation """ macro = [ Load('Knee.any'), SetValue('Main.MyModel.PatellaLigament.DriverPos', patella_tendon_lengths ), OperationRun('Main.MyStudy.InverseDynamics'), Dump('Main.MyStudy.Output.Abscissa.t'), Dump('Main.MyStudy.Output.MaxMuscleActivity'), Dump('Main.MyModel.PatellaLigament.DriverPos'), ] parameter_study_macro = AnyMacro(macro, number_of_macros= len(patella_tendon_lengths) ) """ Explanation: This list of values is added to the macros with the SetValue class. End of explanation """ output = app.start_macro(parameter_study_macro) %matplotlib inline import matplotlib.pyplot as plt for data in output: max_activity = data['Main.MyStudy.Output.MaxMuscleActivity'] time = data['Main.MyStudy.Output.Abscissa.t'] patella_ligament_length = data['Main.MyModel.PatellaLigament.DriverPos'][0] plt.plot(time, max_activity, label='{:.1f} cm'.format(100* patella_ligament_length) ) plt.title('Effect of changing patella tendon length') plt.xlabel('Time steps') plt.ylabel('Max muscle activity') plt.legend(bbox_to_anchor=(1.05, 1), loc=2); """ Explanation: We can now run the model and analyze the resulting maximum muscle activity by plotting the data in the output variable: End of explanation """
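The way a list of values is spread over the generated macros can be mimicked in plain Python — a conceptual sketch, not AnyPyTools' actual implementation, and the exact "Set Value" classoperation syntax shown in the strings is an assumption for illustration:

```python
parameter_list = [2.2, 2.5, 2.7, 2.9, 3.1]

def create_macros(values):
    # One single-command macro (a list of macro strings) per value in the list.
    return [
        ['classoperation Main.MyParameter "Set Value" --value="{}"'.format(v)]
        for v in values
    ]

macros = create_macros(parameter_list)
print(len(macros))  # 5
```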
WomensCodingCircle/CodingCirclePython
Lesson02_Conditionals/.ipynb_checkpoints/Conditional Execution-checkpoint.ipynb
mit
cleaned_room = True took_out_trash = False print(cleaned_room) print(type(took_out_trash)) """ Explanation: Conditional Execution Boolean Expressions We introduce a new type, the boolean. A boolean can have one of two values: True or False End of explanation """ print(5 == 6) print("Hello" != "Goodbye") # You can compare to variables too x = 5 print(5 >= x) print(x is True) """ Explanation: You can compare values together and get a boolean result Operator Meaning x == y x equal to y x != y x not equal to y x > y x greater than y x < y x less than y x >= y x greater than or equal to y x <= y x less than or equal to y x is y x is the same as y x is not y x is not the same as y By using the operators in an expression the result evaluates to a boolean. x and y can be any type of value End of explanation """ # cleaned_room is true if cleaned_room: print("Good girl! You can watch TV.") # took_out_trash if false if took_out_trash: print("Thank you!") print(took_out_trash) """ Explanation: TRY IT See if 5.0000001 is greater than 5 Conditional Execution We can write programs that change their behavior depending on the conditions. We use an if statement to run a block of code if a condition is true. It won't run if the condition is false. if (condition): code_to_execute # if condition is true In python indentation matters. The code to execute must be indented (4 spaces is best, though I like tabs) more than the if condition. End of explanation """ # cleaned_room is true if cleaned_room: print("Good job! You can watch TV.") print("Or play outside") # took_out_trash is false if took_out_trash: print("Thank you!") print("You are a good helper") print("It is time for lunch") """ Explanation: You can include more than one statement in the block of code in the if statement. You can tell python that this code should be part of the if statement by indenting it. 
This is called a 'block' of code if (condition): statement1 statement2 statement3 You can tell python that the statement is not part of the if block by dedenting it to the original level if (condition): statement1 statement2 statement3 # statement3 will run even if condition is false End of explanation """ candies_taken = 4 if (candies_taken < 3): print('Enjoy!') else: print('Put some back') """ Explanation: In alternative execution, there are two possibilities. One that happens if the condition is true, and one that happens if it is false. It is not possible to have both execute. You use if/else syntax if (condition): code_runs_if_true else: code_runs_if_false Again, note the colons and spacing. These are necessary in python. End of explanation """ did_homework = True took_out_trash = True cleaned_room = False allowance = 0 if (cleaned_room): allowance = 10 elif (took_out_trash): allowance = 5 elif (did_homework): allowance = 4 else: allowance = 2 print(allowance) """ Explanation: Chained conditionals allow you to check several conditions. Only one block of code will ever run, though. To run a chained conditional, you use if/elif/else syntax. You can use as many elifs as you want. if (condition1): run_this_code1 elif (condition2): run_this_code2 elif (condition3): run_this_code3 else: run_this_code4 You are not required to have an else block. if (condition1): run_this_code1 elif (condition2): run_this_code2 Each condition is checked in order. If the first is false, the next is checked, and so on. If one of them is true, the corresponding branch executes, and the statement ends. Even if more than one condition is true, only the first true branch executes. End of explanation """ print(True and True) print(False or True) print(not False) """ Explanation: TRY IT Check if did_homework is true, if so, print out "You can play a video game", otherwise print out "Go get your backpack" Logical Operators Logical operators allow you to combine two or more booleans. 
They are and, or, not End of explanation """ cleaned_room = True took_out_trash = False if (cleaned_room and took_out_trash): print("Let's go to Chuck-E-Cheese's.") else: print("Get to work!") if (not did_homework): print("You're going to get a bad grade.") """ Explanation: You can use the logical operators in if statements End of explanation """ allowance = 1 if (allowance > 2): if (allowance >= 8): print("Buy toys!") else: print("Buy candy!") else: print("Save it until I have enough to buy something good.") """ Explanation: TRY IT Check if the room is clean or the trash is taken out and if so print "Here is your allowance" Nested Conditionals You can nest conditional branches inside another. You just indent each level more. if (condition): run_this else: if (condition2): run_this2 else: run_this3 Avoid nesting too deep, it becomes difficult to read. End of explanation """ try: print("Before") y = 5/0 print("After") except: print("I'm sorry, the universe doesn't work that way...") """ Explanation: Catching exceptions using try and except You can put code into a try/except block. If the code has an error in the try block, it will stop running and go to the except block. If there is no error, the try block completes and the except block never runs. try: code except: code_runs_if_error End of explanation """ inp = input('Enter Fahrenheit Temperature:') try: fahr = float(inp) cel = (fahr - 32.0) * 5.0 / 9.0 print(cel) except: print('Please enter a number') """ Explanation: This can be useful when evaluating a user's input, to make sure it is what you expected. End of explanation """ if ((1 < 2) or (5/0)): print("How did we do that?") """ Explanation: TRY IT Try converting the string 'hi' into an integer. If there is an error, print "What did you think would happen?" Short-circuit evaluation of logical expressions Python (and most other languages) are very lazy about logical expressions. 
As soon as it knows the value of the whole expression, it stops evaluating the expression. if (condition1 and condition2): run_code In the above example, if condition1 is false then condition2 is never evaluated. End of explanation """
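Short-circuiting is handy for guarding expressions that would otherwise raise an error; a small sketch with made-up values:

```python
denominator = 0
# The right-hand side never runs because the left side is already False,
# so no ZeroDivisionError is raised.
guarded = denominator != 0 and 10 / denominator > 1
print(guarded)   # False

denominator = 5
guarded2 = denominator != 0 and 10 / denominator > 1
print(guarded2)  # True: this time both sides are evaluated
```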
icoxfog417/enigma_abroad
pola/machine/topic_model_evaluation.ipynb
mit
# enable showing matplotlib image inline %matplotlib inline # autoreload module %load_ext autoreload %autoreload 2 PROJECT_ROOT = "/" def load_local_package(): import os import sys root = os.path.join(os.getcwd(), "../../") sys.path.append(root) # load project root return root PROJECT_ROOT = load_local_package() """ Explanation: Evaluate the Topic Model Preparation End of explanation """ def load_corpus(file_name, test_size=0.3): import os from pola.machine.topic_model import GTopicModel path = os.path.join(PROJECT_ROOT, "./data/" + file_name) m = GTopicModel.load(path) return m model = load_corpus("cityspots_doc_edited_model.gensim") """ Explanation: Load Topic Model End of explanation """ # show its perplexity print("topic count is {0}. perplexity is {1}".format(model.topic_count, model.perplexity())) """ Explanation: Evaluate Topic Model End of explanation """ def plot_distance_matrix(m): import numpy as np import matplotlib.pylab as plt # make distance matrix mt = [] for i in range(m.topic_count): d = m.calc_distances(i) d = sorted(d, key=lambda _d: _d[0]) d.insert(i, (i, 0)) d = [_d[1] for _d in d] mt.append(d) mt = np.array(mt) # plot matrix fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.set_aspect("equal") plt.imshow(mt, interpolation="nearest", cmap=plt.cm.ocean) plt.yticks(range(mt.shape[0])) plt.xticks(range(mt.shape[1])) plt.colorbar() plt.show() plot_distance_matrix(model) """ Explanation: Visualize Topic Model Check the distance between each topic If we have succeeded in categorizing the documents well, then the topics should be far apart from one another. 
End of explanation """ def show_document_topics(m, sample_size=200, width=1): import random import numpy as np import matplotlib.pylab as plt # make document/topics matrix document_topics = [] samples = random.sample(range(len(m.doc.archives)), sample_size) for i in samples: ts = m.get_document_topics(i) document_topics.append([v[1] for v in ts]) document_topics = np.array(document_topics) # draw cumulative bar chart fig = plt.figure(figsize=(20, 3)) N, K = document_topics.shape indices = np.arange(N) height = np.zeros(N) bar = [] for k in range(K): color = plt.cm.coolwarm(k / K, 1) p = plt.bar(indices, document_topics[:, k], width, bottom=None if k == 0 else height, color=color) height += document_topics[:, k] bar.append(p) plt.ylim((0, 1)) plt.xlim((0, document_topics.shape[0])) topic_labels = ['Topic #{}'.format(k) for k in range(K)] plt.legend([b[0] for b in bar], topic_labels) plt.show(bar) show_document_topics(model) """ Explanation: Check the topics in documents If we have succeeded in categorizing the documents well, each document should have one dominant topic. 
End of explanation """ def visualize_topic(m, word_count=10, fontsize_base=10): import matplotlib.pylab as plt from matplotlib.font_manager import FontProperties font = lambda s: FontProperties(fname=r'C:\Windows\Fonts\meiryo.ttc', size=s) # get words in topic topic_words = [] for t in range(m.topic_count): words = m.get_topic_words(t, topn=word_count) topic_words.append(words) # plot words fig = plt.figure(figsize=(8, 5)) for i, ws in enumerate(topic_words): sub = fig.add_subplot(1, m.topic_count, i + 1) plt.ylim(0, word_count + 0.5) plt.xticks([]) plt.yticks([]) plt.title("Topic #{}".format(i)) for j, (share, word) in enumerate(ws): size = fontsize_base + (fontsize_base * share * 2) w = "%s(%1.3f)" % (word, share) plt.text(0.3, word_count-j-0.5, w, ha="left", fontproperties=font(size)) plt.tight_layout() plt.show() def show_city_spots(m, show_count=5): import os from application.models.spot import CitySpots path = os.path.join(PROJECT_ROOT, "./data/spots.json") cityspots = CitySpots.load(path) print(len(cityspots)) for t in range(m.topic_count): indices = m.get_topic_documents(t) print("show topic {0} city spots".format(t)) for i, r in indices[:show_count]: print("({0}): {1}".format(r, cityspots[i].city.name)) for s in cityspots[i].spots: print("\t {0}".format(s.title)) visualize_topic(model) show_city_spots(model) """ Explanation: Visualize words in topics To consider about the name of topic, show the words in topics. End of explanation """
WillenZh/deep-learning-project
language-translation/dlnd_language_translation.ipynb
mit
""" DON'T MODIFY ANYTHING IN THIS CELL """ import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) """ Explanation: 语言翻译 在此项目中,你将了解神经网络机器翻译这一领域。你将用由英语和法语语句组成的数据集,训练一个序列到序列模型(sequence to sequence model),该模型能够将新的英语句子翻译成法语。 获取数据 因为将整个英语语言内容翻译成法语需要大量训练时间,所以我们提供了一小部分的英语语料库。 End of explanation """ view_sentence_range = (0, 10) """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) """ Explanation: 探索数据 研究 view_sentence_range,查看并熟悉该数据的不同部分。 End of explanation """ def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): """ Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. 
:param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_text_to_ids(text_to_ids) """ Explanation: Implement Preprocessing Functions Text to Word Ids As with previous RNNs, you must first turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words into ids. However, you need to add the <EOS> word id at the end of each sentence in target_text. This will help the neural network predict where the sentence should end. You can get the <EOS> word id with the following code: python target_vocab_to_int['<EOS>'] You can get the other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ helper.preprocess_and_save_data(source_path, target_path, text_to_ids) """ Explanation: Preprocess All the Data and Save It Run the code cell below to preprocess all the data and save it to file. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import numpy as np import helper (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() """ Explanation: Checkpoint This is your first checkpoint. If you ever decide to come back to this notebook or have to restart it, you can start from here. The preprocessed data has been saved to disk. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__) print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) """ Explanation: Check the TensorFlow Version and Access to a GPU This check makes sure you are using the correct version of TensorFlow and have access to a GPU. End of explanation """ def model_inputs(): """ Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability) """ # TODO: Implement Function return None, None, None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_model_inputs(model_inputs) """ Explanation: Build the Neural Network You'll build the components necessary for a sequence-to-sequence model by implementing the following functions: model_inputs process_decoding_input encoding_layer decoding_layer_train decoding_layer_infer decoding_layer seq2seq_model Input Implement the model_inputs() function to create TF placeholders for the neural network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter (rank 2). Targets placeholder (rank 2). Learning rate placeholder (rank 0). Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter (rank 0). Return the placeholders in the following tuple: (input, targets, learning rate, keep probability) End of explanation """ def process_decoding_input(target_data, target_vocab_to_int, batch_size): """ Preprocess target data for decoding :param target_data: Target Placeholder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_process_decoding_input(process_decoding_input) """ Explanation: Process Decoding Input Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and put the GO ID at the beginning of each batch. End of explanation """ def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob): """ Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :return: RNN state """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_encoding_layer(encoding_layer) """ Explanation: Encoding Implement encoding_layer() to create an encoder RNN layer using tf.nn.dynamic_rnn(). End of explanation """ def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder
RNN Cell :param dec_embed_input: Decoder embedded input :param sequence_length: Sequence Length :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Train Logits """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_train(decoding_layer_train) """ Explanation: Decoding - Training Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs. End of explanation """ def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob): """ Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param maximum_length: The maximum allowed time steps to decode :param vocab_size: Size of vocabulary :param decoding_scope: TensorFlow Variable Scope for decoding :param output_fn: Function to apply the output layer :param keep_prob: Dropout keep probability :return: Inference Logits """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer_infer(decoding_layer_infer) """ Explanation: Decoding - Inference Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder(). End of explanation """ def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob): """ Create decoding layer :param dec_embed_input: Decoder embedded input :param dec_embeddings: Decoder embeddings :param encoder_state: The encoded state :param
vocab_size: Size of vocabulary :param sequence_length: Sequence Length :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param keep_prob: Dropout keep probability :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function return None, None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_decoding_layer(decoding_layer) """ Explanation: Build the Decoding Layer Implement decoding_layer() to create a decoder RNN layer. Create the decoder RNN cell using rnn_size and num_layers. Create an output function using a lambda to transform its input, the logits, into class logits. Use the decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits. Use the decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation """ def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): """ Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param sequence_length: Sequence Length :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Encoder embedding size :param dec_embedding_size: Decoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training Logits, Inference Logits) """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """
tests.test_seq2seq_model(seq2seq_model) """ Explanation: Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using the process_decoding_input(target_data, target_vocab_to_int, batch_size) function. Apply embedding to the target data for the decoder. Decode the encoded input using decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob). End of explanation """ # Number of Epochs epochs = None # Batch Size batch_size = None # RNN Size rnn_size = None # Number of Layers num_layers = None # Embedding Size encoding_embedding_size = None decoding_embedding_size = None # Learning Rate learning_rate = None # Dropout Keep Probability keep_probability = None """ Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_source_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob = model_inputs() sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model( tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) tf.identity(inference_logits, 'logits') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( train_logits, targets, tf.ones([input_shape[0],
sequence_length])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) """ Explanation: Build the Graph Build the graph using the neural network you implemented. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import time def get_accuracy(target, logits): """ Calculate accuracy """ max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1]), (0,0)], 'constant') return np.mean(np.equal(target, np.argmax(logits, 2))) train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = helper.pad_sentence_batch(source_int_text[:batch_size]) valid_target = helper.pad_sentence_batch(target_int_text[:batch_size]) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch) in enumerate( helper.batch_data(train_source, train_target, batch_size)): start_time = time.time() _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, sequence_length: target_batch.shape[1], keep_prob: keep_probability}) batch_train_logits = sess.run( inference_logits, {input_data: source_batch, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_source, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits) end_time = time.time() print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') """ Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a low loss, check the forums to see whether other people have run into the same problem. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ # Save parameters for checkpoint helper.save_params(save_path) """ Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation """ """ DON'T MODIFY ANYTHING IN THIS CELL """ import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() """ Explanation: Checkpoint End of explanation """ def sentence_to_seq(sentence, vocab_to_int): """ Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids """ # TODO: Implement Function return None """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_sentence_to_seq(sentence_to_seq) """ Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the <UNK> word id End of explanation """ translate_sentence = 'he saw a old yellow truck .'
""" DON'T MODIFY ANYTHING IN THIS CELL """ translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('logits:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)])) print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])) """ Explanation: Translate Translate translate_sentence from English to French. End of explanation """
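The sentence_to_seq() preprocessing used above can be sketched in a few lines. This is an illustrative sketch only, not the project's graded solution, and the toy vocab_to_int dictionary below is made up for the example:

```python
# Minimal sketch of sentence_to_seq(): lowercase the sentence, map each
# word to its id, and fall back to the <UNK> id for out-of-vocabulary words.
def sentence_to_seq(sentence, vocab_to_int):
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]

# toy vocabulary, for illustration only
vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4}
print(sentence_to_seq('He saw a YELLOW truck', vocab_to_int))  # -> [1, 2, 3, 0, 4]
```

Note how the unknown word 'yellow' is replaced by the `<UNK>` id rather than raising a KeyError.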
GregDMeyer/dynamite
examples/2-Subspaces.ipynb
mit
from dynamite.operators import sigmax, sigmaz, index_sum, op_sum # the None default argument will be important later def build_hamiltonian(L): interaction = op_sum(index_sum(sigmax(0)*sigmax(i), size=L) for i in range(1,L)) uniform_field = 0.5*index_sum(sigmaz(), size=L) return interaction + uniform_field # look at an example build_hamiltonian(20) """ Explanation: Using subspaces Often times, we encounter operators that have some conservation law. dynamite can take advantage of conservation laws in the $\sigma_z$ product state basis by working in a restricted subspace. For this demonstration, we'll use the following long-range XX+Z model: End of explanation """ %matplotlib inline build_hamiltonian(8).spy() """ Explanation: If we look at the nonzero structure of the matrix, it's not at all clear that it's block diagonal: End of explanation """ H = build_hamiltonian(20) print('full space dimension: ', H.dim) from dynamite.subspaces import Parity H.subspace = Parity('even') print('parity subspace dimension:', H.dim) """ Explanation: This is a graphical representation of the matrix, where each black dot represents a nonzero element. It turns out it is block-diagonal, we just need to reorder the rows. In fact, we can see by inspection of the Hamiltonian's definition that the X terms are always two-body, meaning that parity is conserved in the Z product state basis. We can easily apply this subspace in dynamite: End of explanation """ from dynamite.states import State ket = State(L=20, subspace=Parity('even')) print('vector length:', ket.vec.size) """ Explanation: As expected, the dimension was cut in half! The same subspace can be applied to states, and even globally: End of explanation """ from dynamite import config config.L = 20 config.subspace = Parity('even') # now we never have to specify the subspace! 
and we only need to give # build_hamiltonian the value of L so it knows the longest long-range interaction H = build_hamiltonian(config.L) ket = State() print('H size:', H.L) print('H subspace:', H.subspace) print('ket subspace:', ket.subspace) """ Explanation: Let's set everything globally so we don't have to keep writing lengths and subspaces everywhere. End of explanation """ from dynamite.operators import sigmay def build_XXYY(L=None): return index_sum(sigmax(0)*sigmax(1) + sigmay(0)*sigmay(1), size=L) # our operator size is still set from config build_XXYY() config.L = 8 build_XXYY().spy() """ Explanation: The Auto subspace In some cases, it might not be clear if the Hamiltonian is block diagonal. In other cases, the subspace might just might be something that is not built in to dynamite. Conservation of total magnetization is a good example. Let's take the XXYY model: End of explanation """ from dynamite.subspaces import Auto H = build_XXYY() # we want the subspace conserved by Hamiltonian H, that contains # the state with four up spins followed by four down spins subspace = Auto(H, 'UUUUDDDD') H.subspace = subspace H.spy() """ Explanation: How can we take advantage of conservation of total magnetization? With the Auto subspace: End of explanation """ from math import factorial def choose(n, k): return factorial(n) // (factorial(k)*factorial(n-k)) print('subspace dimension:', subspace.get_dimension()) print('8 choose 4: ', choose(8, 4)) """ Explanation: As expected, the dimension has been reduced significantly! In fact, it has been reduced to 8 choose 4, which is what we would expect for total spin conservation: End of explanation """ # only three down spins subspace = Auto(H, 'UUUUUDDD') print('subspace dimension:', subspace.get_dimension()) print('8 choose 3: ', choose(8, 3)) """ Explanation: Or we can do a different total spin sector: End of explanation """
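The dimension counts above can be double-checked by brute force: enumerating all $2^L$ spin configurations in the $z$ basis and keeping those with a fixed number of up spins reproduces the binomial coefficients. This sketch is plain Python and independent of dynamite:

```python
from itertools import product
from math import factorial

def sector_dimension(L, n_up):
    # count z-basis configurations with exactly n_up up spins
    return sum(1 for bits in product((0, 1), repeat=L) if sum(bits) == n_up)

def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

print(sector_dimension(8, 4), choose(8, 4))  # -> 70 70
print(sector_dimension(8, 3), choose(8, 3))  # -> 56 56
```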
kimmintae/MNIST
MNIST Competiton_9980/mnist_competition_9980_Final.ipynb
mit
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # test data test_images = mnist.test.images.reshape(10000, 28, 28, 1) test_labels = mnist.test.labels[:] augmentation_size = 440000 images = np.concatenate((mnist.train.images.reshape(55000, 28, 28, 1), mnist.validation.images.reshape(5000, 28, 28, 1)), axis=0) labels = np.concatenate((mnist.train.labels, mnist.validation.labels), axis=0) datagen_list = [ ImageDataGenerator(rotation_range=10), ImageDataGenerator(rotation_range=20), ImageDataGenerator(rotation_range=30), ImageDataGenerator(width_shift_range=0.1), ImageDataGenerator(width_shift_range=0.2), ImageDataGenerator(width_shift_range=0.3), ] for datagen in datagen_list: datagen.fit(images) for image, label in datagen.flow(images, labels, batch_size=augmentation_size, shuffle=True, seed=2017): images = np.concatenate((images, image), axis=0) labels = np.concatenate((labels, label), axis=0) break print('Train Data Set :', images.shape) print('Test Data Set :', test_images.shape) """ Explanation: 1. 
Data Augmentation image rotation(10, 20, 30) image width shift(0.1, 0.2, 0.3) End of explanation """ model1 = Sequential([Convolution2D(filters=64, kernel_size=(3, 3), padding='same', activation='elu', input_shape=(28, 28, 1)), Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'), Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(3, 3), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Flatten(), Dense(1024, activation='elu'), Dropout(0.5), Dense(1024, activation='elu'), Dropout(0.5), Dense(10, activation='softmax'), ]) model1.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) model1.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels)) model_json = model1.to_json() with open("model1.json", "w") as json_file: json_file.write(model_json) model1.save_weights("model1.h5") print("Saved model to disk") """ Explanation: 2.
Training Architecture Convolution * 2 + MaxPool + Dropout Convolution * 2 + MaxPool + Dropout Convolution + MaxPool + Dropout Dense + Dropout Dense + Dropout Output Model Point Using Small Vggnet Ensemble Three different Convolutional Layer filter sizes(Model1 = 3, Model2 = 5, Model3 = 7) Elu Activation Function Adam Optimizer(learning rate = 0.0001) Data Augmentation(image rotation, image width shift) Batch Data Shuffle in training Model 1 End of explanation """ model2 = Sequential([Convolution2D(filters=64, kernel_size=(5, 5), padding='same', activation='elu', input_shape=(28, 28, 1)), Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'), Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(5, 5), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Flatten(), Dense(1024, activation='elu'), Dropout(0.5), Dense(1024, activation='elu'), Dropout(0.5), Dense(10, activation='softmax'), ]) model2.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) model2.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels)) model_json = model2.to_json() with open("model2.json", "w") as json_file: json_file.write(model_json) model2.save_weights("model2.h5") print("Saved model to disk") """ Explanation: Model 2 End of explanation """ model3 = Sequential([Convolution2D(filters=64, kernel_size=(7, 7), padding='same', activation='elu', input_shape=(28, 28, 1)), Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'), Convolution2D(filters=128, 
kernel_size=(7, 7), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Convolution2D(filters=128, kernel_size=(7, 7), padding='same', activation='elu'), MaxPooling2D(pool_size=(2, 2)), Dropout(0.5), Flatten(), Dense(1024, activation='elu'), Dropout(0.5), Dense(1024, activation='elu'), Dropout(0.5), Dense(10, activation='softmax'), ]) model3.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy']) model3.fit(images, labels, batch_size=256, epochs=20, shuffle=True, verbose=1, validation_data=(test_images, test_labels)) model_json = model3.to_json() with open("model3.json", "w") as json_file: json_file.write(model_json) model3.save_weights("model3.h5") print("Saved model to disk") """ Explanation: Model 3 End of explanation """ from keras.models import model_from_json from tensorflow.examples.tutorials.mnist import input_data from keras.optimizers import Adam import numpy as np mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) # test data test_images = mnist.test.images.reshape(10000, 28, 28, 1) test_labels = mnist.test.labels[:] # load json and create model def model_open(name, test_images, test_labels): json_file = open(name + '.json', 'r') loaded_model_json = json_file.read() json_file.close() loaded_model = model_from_json(loaded_model_json) # load weights into new model loaded_model.load_weights(name + '.h5') print("Loaded model from disk") loaded_model.compile(optimizer=Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['acc']) prob = loaded_model.predict_proba(test_images) acc = np.mean(np.equal(np.argmax(prob, axis=1), np.argmax(test_labels, axis=1))) print('\nmodel : %s, test accuracy : %.4f\n' % (name, acc)) return prob, acc model_1_prob, model_1_acc = model_open('model1', test_images, test_labels) model_2_prob, model_2_acc = model_open('model2', test_images, test_labels) model_3_prob, model_3_acc = model_open('model3', test_images, test_labels) """ Explanation: 3. 
Evaluate End of explanation """ def model_ensemble(prob1, acc1, prob2, acc2, prob3, acc3): prob_list = [prob1, prob2, prob3] acc_list = [acc1, acc2, acc3] idx_acc_list = {idx: acc for idx, acc in enumerate(acc_list)} sorted_acc_list = [idx for idx, _ in sorted(idx_acc_list.items(), key=lambda value: (value[1], value[0]), reverse=True)] final_prob = 0 for i in sorted_acc_list: final_prob += prob_list[i] * (i+1) final_score = np.mean(np.equal(np.argmax(final_prob, axis=1), np.argmax(test_labels, axis=1))) # Test print('Final test accuracy : %.4f' % final_score) model_ensemble(model_1_prob, model_1_acc, model_2_prob, model_2_acc, model_3_prob, model_3_acc) """ Explanation: 4. Final Result (Ensemble) This ensemble method gives a higher weight to the model with higher performance. If the performances of two models were equal, the higher weight was given to the one with the bigger filter size. The best accuracy with a single model is $0.9972$; the final ensemble accuracy is $0.9980$. End of explanation """
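The rank-based weighting idea can be sketched with toy arrays. The probabilities and accuracies below are made up for illustration; they are not the actual model outputs:

```python
import numpy as np

# toy predicted probabilities from three models for two samples over two
# classes; the numbers (and the accuracies) are invented for this example
probs = [np.array([[0.9, 0.1], [0.3, 0.7]]),   # model 1, accuracy 0.90
         np.array([[0.2, 0.8], [0.4, 0.6]]),   # model 2, accuracy 0.95
         np.array([[0.6, 0.4], [0.1, 0.9]])]   # model 3, accuracy 0.92
accs = [0.90, 0.95, 0.92]

order = np.argsort(accs)                       # model indices, worst to best
weights = np.empty(len(accs))
weights[order] = np.arange(1, len(accs) + 1)   # worst gets weight 1, best gets 3

final_prob = sum(w * p for w, p in zip(weights, probs))
pred = np.argmax(final_prob, axis=1)
print(weights, pred)
```

This sketch weights strictly by accuracy rank; the model_ensemble function above multiplies by the model index i+1 instead, which coincides with rank weighting only when accuracy happens to increase with filter size.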
cdawei/flickr-photo
src/traj_Melb.ipynb
gpl-2.0
%matplotlib inline import os, sys, time import pandas as pd import numpy as np from datetime import datetime import matplotlib.pyplot as plt def print_progress(cnt, total): """Display a progress bar""" assert(cnt > 0 and total > 0 and cnt <= total) length = 80 ratio = cnt / total n = int(length * ratio) sys.stdout.write('\r[%-80s] %d%%' % ('-'*n, int(ratio*100))) sys.stdout.flush() data_dir = '../data' fvisits = os.path.join(data_dir, 'userVisits-Melb-1.csv') fpoi = os.path.join(data_dir, 'poi-Melb-all.csv') fpoi_new = os.path.join(data_dir, 'poi-Melb-1.csv') fphoto = os.path.join(data_dir, 'Melb_photos_bigbox.csv') ftraj_all = os.path.join(data_dir, 'traj-all-Melb-1.csv') ftraj_noshort = os.path.join(data_dir, 'traj-noshort-Melb.csv') ftraj_nofew = os.path.join(data_dir, 'traj-nofew-Melb.csv') fvisits2 = os.path.join(data_dir, 'userVisits-Melb-allPOI.csv') fpoi2 = os.path.join(data_dir, 'costProfCat-MelbPOI-all.csv') """ Explanation: Generate Trajectories from Flickr Photos Taken in Melbourne Table of Contents 1. Load POI Data 1. Load Photo Data 1. Map Photos to POIs & Build Trajectories 1. Approach I - Greedy 1. Approach II - Dynamic Programming 1. Save Trajectory Data 1. Compute Trajectory Statistics 1. Filtering out Short Trajectories 1. Filtering out Users with Few Trajectories End of explanation """ poi_df = pd.read_csv(fpoi) poi_df.head() poi_df.drop(['poiURL', 'poiName'], axis=1, inplace=True) poi_df.set_index('poiID', inplace=True) print('#POIs:', poi_df.shape[0]) len(poi_df['poiTheme'].unique()) """ Explanation: <a id='sec1'></a> 1. Load POI Data End of explanation """ photo_df = pd.read_csv(fphoto, skipinitialspace=True, parse_dates=[2]) photo_df.head() """ Explanation: <a id='sec2'></a> 2. Load Photo Data End of explanation """ print(photo_df['Accuracy'].unique()) photo_df = photo_df[photo_df['Accuracy'] == 16] print(photo_df['Accuracy'].unique()) """ Explanation: Remove photos with low accuracies (accuracy $< 16$). 
End of explanation """ photo_df.drop(['Accuracy', 'URL', 'Marker(photo=0 video=1)'], axis=1, inplace=True) """ Explanation: Remove columns that will not be used. End of explanation """ photo_df['dateTaken'] = photo_df['Timestamp'].apply(lambda x: x.timestamp()) photo_df.drop('Timestamp', axis=1, inplace=True) photo_df['dateTaken'] = photo_df['dateTaken'].astype(np.int) """ Explanation: Convert datatime to unix epoch. End of explanation """ photo_df.rename(columns={'Photo_ID':'photoID', 'User_ID':'userID', 'Longitude':'photoLon', 'Latitude':'photoLat'}, \ inplace=True) photo_df.head() photo_df.shape print('#Photos:', photo_df['photoID'].unique().shape[0]) print('#Users:', photo_df['userID'].unique().shape[0]) photo_df.set_index('photoID', inplace=True) photo_df['poiID'] = -1 photo_df['trajID'] = -1 photo_df.head() """ Explanation: Rename columns. End of explanation """ def calc_dist_vec(longitudes1, latitudes1, longitudes2, latitudes2): """Calculate the distance (unit: km) between two places on earth, vectorised""" # convert degrees to radians lng1 = np.radians(longitudes1) lat1 = np.radians(latitudes1) lng2 = np.radians(longitudes2) lat2 = np.radians(latitudes2) radius = 6371.0088 # mean earth radius, en.wikipedia.org/wiki/Earth_radius#Mean_radius # The haversine formula, en.wikipedia.org/wiki/Great-circle_distance dlng = np.fabs(lng1 - lng2) dlat = np.fabs(lat1 - lat2) dist = 2 * radius * np.arcsin( np.sqrt( (np.sin(0.5*dlat))**2 + np.cos(lat1) * np.cos(lat2) * (np.sin(0.5*dlng))**2 )) return dist """ Explanation: <a id='sec3'></a> 3. Map Photos to POIs & Build Trajectories Generate travel history for each user from the photos taken by him/her. End of explanation """ calc_dist_vec(poi_df.loc[0, 'poiLon'], poi_df.loc[0, 'poiLat'], poi_df.loc[0, 'poiLon'], poi_df.loc[0, 'poiLat']) """ Explanation: Sanity check. 
End of explanation """ SUPER_FAST = 150 / (60 * 60) # 150 km/h filter_tags = pd.Series(data=np.zeros(photo_df.shape[0], dtype=np.bool), index=photo_df.index) cnt = 0 total = photo_df['userID'].unique().shape[0] for user in sorted(photo_df['userID'].unique().tolist()): udf = photo_df[photo_df['userID'] == user].copy() udf.sort_values(by='dateTaken', ascending=True, inplace=True) udists = calc_dist_vec(udf['photoLon'][:-1].values, udf['photoLat'][:-1].values, \ udf['photoLon'][1: ].values, udf['photoLat'][1: ].values) assert(udists.shape[0] == udf.shape[0]-1) superfast = np.zeros(udf.shape[0]-1, dtype=np.bool) for i in range(udf.shape[0]-1): ix1 = udf.index[i] ix2 = udf.index[i+1] dtime = udf.loc[ix2, 'dateTaken'] - udf.loc[ix1, 'dateTaken'] assert(dtime >= 0) if dtime == 0: superfast[i] = True; continue speed = udists[i] / dtime if speed > SUPER_FAST: superfast[i] = True for j in range(superfast.shape[0]-1): if superfast[j] and superfast[j+1]: # jx0-->SUPER_FAST-->jx-->SUPER_FAST-->jx1: remove photo jx jx = udf.index[j+1] filter_tags.loc[jx] = True cnt += 1; print_progress(cnt, total) for jx in filter_tags.index: if filter_tags.loc[jx] == True: photo_df.drop(jx, axis=0, inplace=True) photo_df.shape """ Explanation: Filtering out photos that lead to super fast speeds. End of explanation """ poi_distmat = pd.DataFrame(data=np.zeros((poi_df.shape[0], poi_df.shape[0]), dtype=np.float), \ index=poi_df.index, columns=poi_df.index) for ix in poi_df.index: poi_distmat.loc[ix] = calc_dist_vec(poi_df.loc[ix, 'poiLon'], poi_df.loc[ix, 'poiLat'], \ poi_df['poiLon'], poi_df['poiLat']) """ Explanation: Distance between POIs.
End of explanation """ photo_poi_distmat = pd.DataFrame(data=np.zeros((photo_df.shape[0], poi_df.shape[0]), dtype=np.float), \ index=photo_df.index, columns=poi_df.index) for i in range(photo_df.shape[0]): ix = photo_df.index[i] photo_poi_distmat.loc[ix] = calc_dist_vec(photo_df.loc[ix, 'photoLon'], photo_df.loc[ix, 'photoLat'], \ poi_df['poiLon'], poi_df['poiLat']) print_progress(i+1, photo_df.shape[0]) """ Explanation: Distance between photos and POIs. End of explanation """ DIST_MAX = 0.2 # 0.2km """ Explanation: "Map a photo to a POI if their coordinates differ by $<200$m based on the Haversine formula" according to the IJCAI15 paper. End of explanation """ TIME_GAP = 8 * 60 * 60 # 8 hours users = sorted(photo_df['userID'].unique().tolist()) """ Explanation: Time gap is $8$ hours according to the IJCAI15 paper. End of explanation """ traj_greedy = photo_df.copy() cnt = 0 for ix in traj_greedy.index: min_ix = photo_poi_distmat.loc[ix].idxmin() if photo_poi_distmat.loc[ix, min_ix] > DIST_MAX: # photo is taken at position far from any POI, do NOT use it pass else: traj_greedy.loc[ix, 'poiID'] = poi_df.index[min_ix] # map photo to the closest POI # all POIs that are very close to a photo are an option to map #photo_df.loc[ix, 'poiID'] = str(poi_df.index[~(dists > dist_max)].tolist()) cnt += 1; print_progress(cnt, traj_greedy.shape[0]) """ Explanation: <a id='sec3.1'></a> 3.1 Map Photos to POIs: Approach I - Greedy Map photo to the closest POI. 
End of explanation """ traj_greedy = traj_greedy[traj_greedy['poiID'] != -1] tid = 0 cnt = 0 for user in users: udf = traj_greedy[traj_greedy['userID'] == user].copy() udf.sort_values(by='dateTaken', ascending=True, inplace=True) if udf.shape[0] == 0: cnt += 1; print_progress(cnt, len(users)) continue traj_greedy.loc[udf.index[0], 'trajID'] = tid for i in range(1, udf.shape[0]): ix1 = udf.index[i-1] ix2 = udf.index[i] if udf.loc[ix2, 'dateTaken'] - udf.loc[ix1, 'dateTaken'] > TIME_GAP: tid += 1 traj_greedy.loc[ix2, 'trajID'] = tid else: traj_greedy.loc[ix2, 'trajID'] = tid tid += 1 # for trajectories of the next user cnt += 1; print_progress(cnt, len(users)) """ Explanation: Build trajectories. End of explanation """ map_dp = False def decode_photo_seq(photo_seq, poi_distmat, photo_poi_distmat, DIST_MAX, ALPHA=1): """ Map a sequence of photos to a set of POI such that the total cost, i.e. cost = sum(distance(photo_i, POI_i)) + sum(distance(POI_i, POI_{i+1})) is minimised. Implemented using DP. """ assert(len(photo_seq) > 0) assert(DIST_MAX > 0) if len(photo_seq) == 1: # only one POI in this sequence ix = photo_seq[0] assert(ix in photo_poi_distmat.index) return [photo_poi_distmat.loc[ix].idxmin()] # set of POIs that are close to any photo in the input sequence of photos poi_t = [] for jx in photo_seq: poi_t = poi_t + poi_distmat.index[~(photo_poi_distmat.loc[jx] > DIST_MAX)].tolist() columns = sorted(set(poi_t)) # cost_df.iloc[i, j] stores the minimum cost of photo sequence [..., 'photo_i'] among all # possible POI sequences end with 'POI_j', 'photo_i' was mapped to 'POI_j' cost_df = pd.DataFrame(data=np.zeros((len(photo_seq), len(columns)), dtype=np.float), \ index=photo_seq, columns=columns) # trace_df.iloc[i, j] stores the (previous) 'POI_k' such that the cost of POI sequence # [... 
--> 'POI_k' (prev POI) --> 'POI_j' (current POI)] is cost_df.iloc[i, j] trace_df = pd.DataFrame(data=np.zeros((len(photo_seq), len(columns)), dtype=np.int), \ index=photo_seq, columns=columns) # NO predecessor for the start POI trace_df.iloc[0] = -1 # costs for the first row are just the distances (or np.inf) from the first photo to all POIs for kx in cost_df.columns: ix = photo_seq[0] dist = photo_poi_distmat.loc[ix, kx] cost_df.loc[ix, kx] = np.inf if dist > DIST_MAX else dist # compute minimum costs recursively for i in range(1, len(photo_seq)): ix = cost_df.index[i] prev = cost_df.index[i-1] for jx in cost_df.columns: # distance(photo_i, POI_j) + alpha * distance(POI_k, POI_j) + previous cost # if distance(photo_i, POI_j) <= DIST_MAX else np.inf costs = [np.inf if photo_poi_distmat.loc[ix, jx] > DIST_MAX else \ photo_poi_distmat.loc[ix, jx] + ALPHA * poi_distmat.loc[kx, jx] + cost_df.loc[prev, kx] \ for kx in cost_df.columns] min_idx = np.argmin(costs) cost_df.loc[ix, jx] = costs[min_idx] trace_df.loc[ix, jx] = cost_df.columns[min_idx] # trace back pN = cost_df.loc[cost_df.index[-1]].idxmin() # the end POI seq_reverse = [pN] # sequence of POI in reverse order row_idx = trace_df.shape[0] - 1 # trace back from the last row of trace_df while (row_idx > 0): # the first row are all -1 ix = trace_df.index[row_idx] jx = seq_reverse[-1] poi = trace_df.loc[ix, jx] seq_reverse.append(poi) row_idx -= 1 return seq_reverse[::-1] # reverse the sequence """ Explanation: <a id='sec3.2'></a> 3.2 Map Photos to POIs: Approach II - Dynamic Programming Given a sequence of photos and a set of POIs, map the sequence of photos to the set of POI such that the total cost is minimised, i.e., \begin{equation} \text{minimize} \sum_i \text{distance}(\text{photo}i, \text{POI}_i) + \alpha \sum_i \text{distance}(\text{POI}_i, \text{POI}{i+1}) \end{equation} where $\text{photo}_i$ is mapped to $\text{POI}_i$, $\alpha$ is a trade-off parameter. 
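As a sanity check on the recurrence, the same objective can be minimised by brute-force enumeration over all POI sequences on a tiny instance (hypothetical distances, $\alpha = 1$) — which is exactly what the dynamic program avoids doing at scale:

```python
import itertools
import numpy as np

photo_poi = np.array([[0.10, 0.15],    # photo 0 vs POIs A, B (km, made-up)
                      [0.12, 0.05]])   # photo 1 vs POIs A, B
poi_poi = np.array([[0.0, 1.0],        # POI-POI distances
                    [1.0, 0.0]])
best_cost, best_seq = min(
    (photo_poi[0, i] + photo_poi[1, j] + poi_poi[i, j], (i, j))
    for i, j in itertools.product(range(2), repeat=2))
# best_seq maps both photos to POI B: paying 0.05 km extra on photo 0
# beats the 1 km hop between the two POIs
```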
End of explanation """ if map_dp == True: traj_dp = photo_df.copy() ALPHA = 1 if map_dp == True: tid = 0 cnt = 0 for user in users: udf = traj_dp[traj_dp['userID'] == user].copy() udf.sort_values(by='dateTaken', ascending=True, inplace=True) # filtering out photos that are far from all POIs for ix in udf.index: if photo_poi_distmat.loc[ix].min() > DIST_MAX: udf.drop(ix, axis=0, inplace=True) if udf.shape[0] == 0: cnt += 1; print_progress(cnt, len(users)) continue photo_seq = [udf.index[0]] for i in range(1, udf.shape[0]): ix1 = photo_seq[-1] ix2 = udf.index[i] if udf.loc[ix2, 'dateTaken'] - udf.loc[ix1, 'dateTaken'] > TIME_GAP: assert(len(photo_seq) > 0) poi_seq = decode_photo_seq(photo_seq, poi_distmat, photo_poi_distmat, DIST_MAX, ALPHA) assert(len(poi_seq) == len(photo_seq)) for j in range(len(poi_seq)): jx = photo_seq[j] poi = poi_seq[j] traj_dp.loc[jx, 'poiID'] = poi traj_dp.loc[jx, 'trajID'] = tid tid += 1 photo_seq.clear() photo_seq.append(ix2) else: photo_seq.append(ix2) assert(len(photo_seq) > 0) poi_seq = decode_photo_seq(photo_seq, poi_distmat, photo_poi_distmat, DIST_MAX, ALPHA) assert(len(poi_seq) == len(photo_seq)) for j in range(len(poi_seq)): jx = photo_seq[j] poi = poi_seq[j] traj_dp.loc[jx, 'poiID'] = poi traj_dp.loc[jx, 'trajID'] = tid tid += 1 cnt += 1; print_progress(cnt, len(users)) """ Explanation: Build travel sequences and map photos to POIs. 
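Separately from the POI decoding, the sequence boundaries above come purely from the 8-hour rule: a new travel sequence starts whenever consecutive photos are more than `TIME_GAP` seconds apart. In isolation, on hypothetical Unix timestamps:

```python
TIME_GAP = 8 * 60 * 60  # 8 hours, in seconds

def split_by_gap(timestamps, gap=TIME_GAP):
    # Split an increasing list of timestamps into trajectories.
    trajs = [[timestamps[0]]]
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > gap:
            trajs.append([curr])     # gap too large: start a new trajectory
        else:
            trajs[-1].append(curr)
    return trajs
```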
End of explanation """ def calc_cost(traj_df, poi_distmat, photo_poi_distmat): cost = 0 traj_df = traj_df[traj_df['poiID'] != -1] for tid in sorted(traj_df['trajID'].unique().tolist()): tdf = traj_df[traj_df['trajID'] == tid].copy() tdf.sort_values(by='dateTaken', ascending=True, inplace=True) cost += np.trace(photo_poi_distmat.loc[tdf.index, tdf['poiID']]) cost += np.trace(poi_distmat.loc[tdf['poiID'][:-1].values, tdf['poiID'][1:].values]) return cost calc_cost(traj_greedy, poi_distmat, photo_poi_distmat) if map_dp == True: calc_cost(traj_dp, poi_distmat, photo_poi_distmat) traj_greedy['trajID'].max() if map_dp == True: traj_dp['trajID'].max() """ Explanation: Compute the total cost of trajectories. End of explanation """ compare_interactive = False if compare_interactive == True and map_dp == True: for tid in sorted(traj_dp['trajID'].unique().tolist()): if tid == -1: continue tdf1 = traj_dp[traj_dp['trajID'] == tid].copy() tdf1.sort_values(by='dateTaken', ascending=True, inplace=True) tdf2 = traj_greedy[traj_greedy['trajID'] == tid].copy() tdf2.sort_values(by='dateTaken', ascending=True, inplace=True) print(tdf1) print(tdf2) input('Press any key to continue...') """ Explanation: Compare trajectories interactively. End of explanation """ visits = traj_greedy[traj_greedy['poiID'] != -1] #visits = traj_dp[traj_dp['poiID'] != -1] visits.head() """ Explanation: <a id='sec4'></a> 4. Save Trajectory Data Save trajectories and related POIs to files. End of explanation """ uservisits = visits.copy() uservisits.rename(columns={'trajID':'seqID'}, inplace=True) uservisits.to_csv(fvisits, index=False) """ Explanation: Save visits data. End of explanation """ poiix = sorted(visits['poiID'].unique().tolist()) poi_df.loc[poiix].to_csv(fpoi_new, index=True) """ Explanation: Save POIs to CSV file. 
End of explanation """ poifreq = visits[['poiID', 'dateTaken']].copy().groupby('poiID').agg(np.size) poifreq.rename(columns={'dateTaken':'poiFreq'}, inplace=True) """ Explanation: Count the number of photos taken at each POI. End of explanation """ visits_df = visits.copy() visits_df.reset_index(inplace=True) visits_df.drop(['photoLon', 'photoLat'], axis=1, inplace=True) visits_df['dateTaken'] = visits_df['dateTaken'].astype(np.int) visits_df.rename(columns={'trajID':'seqID'}, inplace=True) visits_df['poiTheme'] = poi_df.loc[visits_df['poiID'], 'poiTheme'].tolist() visits_df['poiFreq'] = poifreq.loc[visits_df['poiID'], 'poiFreq'].astype(np.int).tolist() """ Explanation: Save data in file format like IJCAI datasets: user visits data, POI related data. End of explanation """ visits_df.sort_values(by='dateTaken', ascending=True, inplace=True) visits_df.head() """ Explanation: Sort photos by date taken. End of explanation """ cols = ['photoID', 'userID', 'dateTaken', 'poiID', 'poiTheme', 'poiFreq', 'seqID'] visits_df.to_csv(fvisits2, sep=';', quoting=2, columns=cols, index=False) fvisits2 """ Explanation: Save visits data. End of explanation """ costprofit_df = pd.DataFrame(columns=['from', 'to', 'cost', 'profit', 'category']) pois = sorted(visits_df['poiID'].unique()) for poi1 in pois: for poi2 in pois: if poi1 == poi2: continue ix = costprofit_df.shape[0] costprofit_df.loc[ix, 'from'] = poi1 costprofit_df.loc[ix, 'to'] = poi2 costprofit_df.loc[ix, 'cost'] = poi_distmat.loc[poi1, poi2] * 1000 # meters costprofit_df.loc[ix, 'profit'] = poifreq.loc[poi2, 'poiFreq'] costprofit_df.loc[ix, 'category'] = poi_df.loc[poi2, 'poiTheme'] costprofit_df.head() """ Explanation: POI related data: cost=POI-POI distance (meters), profit=frequency (#photos taken at the second POI). End of explanation """ cols = ['from', 'to', 'cost', 'profit', 'category'] costprofit_df.to_csv(fpoi2, sep=';', quoting=2, columns=cols, index=False) """ Explanation: Save POI related data. 
End of explanation """
ankoorb/scipy2015_tutorial
notebooks/3. Fitting Regression Models.ipynb
cc0-1.0
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import fmin
from io import StringIO data_string = """ Drugs Score 0 1.17 78.93 1 2.97 58.20 2 3.26 67.47 3 4.69 37.47 4 5.83 45.65 5 6.00 32.92 6 6.41 29.97 """ lsd_and_math = pd.read_table(StringIO(data_string), sep='\t', index_col=0) lsd_and_math """ Explanation: Regression modeling A general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example: how different medical interventions influence the incidence or duration of disease how baseball player's performance varies as a function of age. how test scores are correlated with tissue LSD concentration End of explanation """ lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8)) """ Explanation: Taking LSD was a profound experience, one of the most important things in my life --Steve Jobs End of explanation """ sum_of_squares = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2) sum_of_squares([0,1], lsd_and_math.Drugs, lsd_and_math.Score) x, y = lsd_and_math.T.values b0, b1 = fmin(sum_of_squares, [0,1], args=(x,y)) b0, b1 ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8)) ax.plot([0,10], [b0, b0+b1*10]) ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8), ylim=(20, 90)) ax.plot([0,10], [b0, b0+b1*10]) for xi, yi in zip(x,y): ax.plot([xi]*2, [yi, b0+b1*xi], 'k:') """ Explanation: We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$. 
$M(Y|X) = E(Y|X)$
$M(Y|X) = Pr(Y=1|X)$
In general,
$$M(Y|X) = f(X)$$
for linear regression
$$M(Y|X) = f(X\beta)$$
where $f$ is some function, for example a linear function:
<div style="font-size: 150%;">
$y_i = \beta_0 + \beta_1 x_{1i} + \ldots + \beta_k x_{ki} + \epsilon_i$
</div>
Regression is a weighted sum of independent predictors, and $\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\hat{y_i} = \beta_0 + \beta_1 x_i$. This is sometimes referred to as process uncertainty.
Interpretation: coefficients represent the change in Y for a unit increment of the predictor X.
Two important regression assumptions:
normal errors
homoscedasticity
Parameter estimation
We would like to select $\beta_0, \beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\hat{y}$ and $y$.
<div style="font-size: 120%;">
$$\text{RSS} = \sum_i (y_i - [\beta_0 + \beta_1 x_i])^2 = \sum_i \epsilon_i^2 $$
</div>
Squaring serves two purposes:
to prevent positive and negative values from cancelling each other out
to strongly penalize large deviations.
Whether or not the latter is desired depends on the goals of the analysis.
In other words, we will select the parameters that minimize the squared error of the model.
End of explanation
"""
sum_of_squares = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)
sum_of_squares([0,1], lsd_and_math.Drugs, lsd_and_math.Score)
x, y = lsd_and_math.T.values
b0, b1 = fmin(sum_of_squares, [0,1], args=(x,y))
b0, b1
ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))
ax.plot([0,10], [b0, b0+b1*10])
ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8), ylim=(20, 90))
ax.plot([0,10], [b0, b0+b1*10])
for xi, yi in zip(x,y):
    ax.plot([xi]*2, [yi, b0+b1*xi], 'k:')
"""
Explanation: Alternative loss functions
Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. 
For example, we can try to minimize the sum of absolute differences:
End of explanation
"""
sum_of_absval = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))

b0, b1 = fmin(sum_of_absval, [0,0], args=(x,y))
print('\nintercept: {0:.2}, slope: {1:.2}'.format(b0,b1))
ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))
ax.plot([0,10], [b0, b0+b1*10])
"""
Explanation: We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a quadratic model:
<div style="font-size: 150%;">
$y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \epsilon_i$
</div>
End of explanation
"""
sum_squares_quad = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)

b0, b1, b2 = fmin(sum_squares_quad, [1,1,-1], args=(x,y))
print('\nintercept: {0:.2}, x: {1:.2}, x2: {2:.2}'.format(b0,b1,b2))
ax = lsd_and_math.plot(x='Drugs', y='Score', style='ro', legend=False, xlim=(0,8))
xvals = np.linspace(0, 8, 100)
ax.plot(xvals, b0 + b1*xvals + b2*(xvals**2))
"""
Explanation: Although a polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters. For some data, it may be reasonable to consider polynomials of order > 2. For example, consider the relationship between the number of spawning salmon and the number of juveniles recruited into the population the following year; one would expect the relationship to be positive, but not necessarily linear. 
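As a cross-check of hand-rolled polynomial fits like the ones above, NumPy's `polyfit` minimises the same least-squares objective for any polynomial order — a quick sketch on synthetic data (not part of the tutorial's dataset):

```python
import numpy as np

rng = np.random.RandomState(42)
x = np.linspace(0, 10, 50)
y = 2.0 - 0.5 * x + 0.1 * x ** 2 + rng.normal(scale=0.01, size=x.size)
coeffs = np.polyfit(x, y, deg=2)  # highest-order coefficient first
```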
End of explanation
"""
sum_squares_cubic = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2)

salmon = pd.read_table("../data/salmon.dat", delim_whitespace=True, index_col=0)
plt.plot(salmon.spawners, salmon.recruits, 'r.')
b0, b1, b2, b3 = fmin(sum_squares_cubic, [0,1,-1,0], args=(salmon.spawners, salmon.recruits))
xvals = np.arange(500)
plt.plot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))
"""
Explanation: Linear Regression with scikit-learn
In practice, we need not fit least squares models by hand because they are implemented generally in packages such as scikit-learn and statsmodels. For example, the scikit-learn package implements least squares models in its LinearRegression class:
As a motivating example, consider the Olympic medals data that we compiled earlier in the tutorial. End of explanation """ medals.plot(x='population', y='medals', kind='scatter') """ Explanation: We expect a positive relationship between population and awarded medals, but the data in their raw form are clearly not amenable to linear regression. End of explanation """ medals.plot(x='log_population', y='medals', kind='scatter') """ Explanation: Part of the issue is the scale of the variables. For example, countries' populations span several orders of magnitude. We can correct this by using the logarithm of population, which we have already calculated. End of explanation """ linear_medals = linear_model.LinearRegression() X = medals.log_population[:, np.newaxis] linear_medals.fit(X, medals.medals) ax = medals.plot(x='log_population', y='medals', kind='scatter') ax.plot(medals.log_population, linear_medals.predict(X), color='red', linewidth=2) """ Explanation: This is an improvement, but the relationship is still not adequately modeled by least-squares regression. End of explanation """ # Poisson negative log-likelhood poisson_loglike = lambda beta, X, y: -(-np.exp(X.dot(beta)) + y*X.dot(beta)).sum() """ Explanation: This is due to the fact that the response data are counts. As a result, they tend to have characteristic properties. discrete positive variance grows with mean to account for this, we can do two things: (1) model the medal count on the log scale and (2) assume Poisson errors, rather than normal. Recall the Poisson distribution from the previous section: $$p(y)=\frac{e^{-\lambda}\lambda^y}{y!}$$ $Y={0,1,2,\ldots}$ $\lambda > 0$ $$E(Y) = \text{Var}(Y) = \lambda$$ So, we will model the logarithm of the expected value as a linear function of our predictors: $$\log(\lambda) = X\beta$$ In this context, the log function is called a link function. 
This transformation implies the mean of the Poisson is: $$\lambda = \exp(X\beta)$$ We can plug this into the Poisson likelihood and use maximum likelihood to estimate the regression covariates $\beta$. $$\log L = \sum_{i=1}^n -\exp(X_i\beta) + Y_i (X_i \beta)- \log(Y_i!)$$ As we have already done, we just need to code the kernel of this likelihood, and optimize! End of explanation """ poisson_loglike([0,1], medals[['log_population']].assign(intercept=1), medals.medals) """ Explanation: Let's use the assign method to add a column of ones to the design matrix. End of explanation """ b1, b0 = fmin(poisson_loglike, [0,1], args=(medals[['log_population']].assign(intercept=1).values, medals.medals.values)) b0, b1 """ Explanation: We will use Nelder-Mead to minimize the negtive log-likelhood. End of explanation """ ax = medals.plot(x='log_population', y='medals', kind='scatter') xvals = np.arange(12, 22) ax.plot(xvals, np.exp(b0 + b1*xvals), 'r--') """ Explanation: The resulting fit looks reasonable. End of explanation """ # Write your answer here """ Explanation: Exercise: Multivariate GLM Add the OECD indicator variable to the model, and estimate the model coefficients. End of explanation """ ax = medals[medals.oecd==1].plot(x='log_population', y='medals', kind='scatter', alpha=0.8) medals[medals.oecd==0].plot(x='log_population', y='medals', kind='scatter', color='red', alpha=0.5, ax=ax) """ Explanation: Interactions among variables Interactions imply that the effect of one covariate $X_1$ on $Y$ depends on the value of another covariate $X_2$. 
$$M(Y|X) = \beta_0 + \beta_1 X_1 + \beta_2 X_2 +\beta_3 X_1 X_2$$ the effect of a unit increase in $X_1$: $$M(Y|X_1+1, X_2) - M(Y|X_1, X_2)$$ $$\begin{align} &= \beta_0 + \beta_1 (X_1 + 1) + \beta_2 X_2 +\beta_3 (X_1 + 1) X_2 - [\beta_0 + \beta_1 X_1 + \beta_2 X_2 +\beta_3 X_1 X_2] \ &= \beta_1 + \beta_3 X_2 \end{align}$$ End of explanation """ y = medals.medals X = dmatrix('log_population * oecd', data=medals) X """ Explanation: Interaction can be interpreted as: $X_1$ interacts with $X_2$ $X_1$ modifies the effect of $X_2$ $X_2$ modifies the effect of $X_1$ $X_1$ and $X_2$ are non-additive or synergistic Let's construct a model that predicts medal count based on population size and OECD status, as well as the interaction. We can use patsy to set up the design matrix. End of explanation """ interaction_params = fmin(poisson_loglike, [0,1,1,0], args=(X, y)) interaction_params """ Explanation: Now, fit the model. End of explanation """ y = medals.medals X = dmatrix('center(log_population) * oecd', data=medals) X fmin(poisson_loglike, [0,1,1,0], args=(X, y)) """ Explanation: Notice anything odd about these estimates? The main effect of the OECD effect is negative, which seems counter-intuitive. This is because the variable is interpreted as the OECD effect when the log-population is zero. This is not particularly meaningful. We can improve the interpretability of this parameter by centering the log-population variable prior to entering it into the model. This will result in the OECD main effect being interpreted as the marginal effect of being an OECD country for an average-sized country. 
End of explanation """ def calc_poly(params, data): x = np.c_[[data**i for i in range(len(params))]] return np.dot(params, x) x, y = lsd_and_math.T.values sum_squares_poly = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2) betas = fmin(sum_squares_poly, np.zeros(7), args=(x,y), maxiter=1e6) plt.plot(x, y, 'ro') xvals = np.linspace(0, max(x), 100) plt.plot(xvals, calc_poly(betas, xvals)) """ Explanation: Model Selection How do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. For example, fitting a 6th order polynomial to the LSD example certainly results in an overfit. End of explanation """ n = len(x) aic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p RSS1 = sum_of_squares(fmin(sum_of_squares, [0,1], args=(x,y)), x, y) RSS2 = sum_squares_quad(fmin(sum_squares_quad, [1,1,-1], args=(x,y)), x, y) print('\nModel 1: {0}\nModel 2: {1}'.format(aic(RSS1, 2, n), aic(RSS2, 3, n))) """ Explanation: One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as: $$AIC = n \log(\hat{\sigma}^2) + 2p$$ where $p$ is the number of parameters in the model and $\hat{\sigma}^2 = RSS/(n-p-1)$. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases. AIC is a metric of information distance between a given model and a notional "true" model. Since we don't know the true model, the AIC value itself is not meaningful in an absolute sense, but is useful as a relative measure of model quality. To apply AIC to model selection, we choose the model that has the lowest AIC value. 
End of explanation """ # Write your answer here """ Explanation: Hence, on the basis of "information distance", we would select the 2-parameter (linear) model. Exercise: Olympic medals model selection Use AIC to select the best model from the following set of Olympic medal prediction models: population only population and OECD interaction model For these models, use the alternative form of AIC, which uses the log-likelhood rather than the residual sums-of-squares: $$AIC = -2 \log(L) + 2p$$ End of explanation """ titanic = pd.read_excel("../data/titanic.xls", "titanic") titanic.name jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) plt.yticks([0,1]) plt.ylabel("survived") plt.xlabel("log(fare)") """ Explanation: Logistic Regression Fitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous? male/female pass/fail died/survived Let's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, lets say that we want to predict survival as a function of the fare paid for the journey. End of explanation """ x = np.log(titanic.fare[titanic.fare>0]) y = titanic.survived[titanic.fare>0] betas_titanic = fmin(sum_of_squares, [1,1], args=(x,y)) jitter = np.random.normal(scale=0.02, size=len(titanic)) plt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3) plt.yticks([0,1]) plt.ylabel("survived") plt.xlabel("log(fare)") plt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.]) """ Explanation: I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale. Clearly, fitting a line through this data makes little sense, for several reasons. 
First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line. End of explanation """ logit = lambda p: np.log(p/(1.-p)) unit_interval = np.linspace(0,1) plt.plot(unit_interval/(1-unit_interval), unit_interval) plt.xlabel(r'$p/(1-p)$') plt.ylabel('p'); """ Explanation: If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the "survived" (y=1) side for larger values of fare than on the "died" (y=0) side. Stochastic model Rather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. We need to consider a different probability model for this exerciese however; let's consider the Bernoulli distribution as a generative model for our data: <div style="font-size: 120%;"> $$f(y|p) = p^{y} (1-p)^{1-y}$$ </div> where $y = {0,1}$ and $p \in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears. So, the model we want to fit should look something like this: <div style="font-size: 120%;"> $$p_i = \beta_0 + \beta_1 x_i + \epsilon_i$$ </div> However, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. As with the Poisson regression, we can modify this model slightly by using a link function to transform the probability to have an unbounded range on a new scale. 
Specifically, we can use a logit transformation as our link function: <div style="font-size: 120%;"> $$\text{logit}(p) = \log\left[\frac{p}{1-p}\right] = x$$ </div> Here's a plot of $p/(1-p)$ End of explanation """ plt.plot(logit(unit_interval), unit_interval) plt.xlabel('logit(p)') plt.ylabel('p'); """ Explanation: And here's the logit function: End of explanation """ invlogit = lambda x: 1. / (1 + np.exp(-x)) """ Explanation: The inverse of the logit transformation is: <div style="font-size: 150%;"> $$p = \frac{1}{1 + \exp(-x)}$$ </div> End of explanation """ def logistic_like(theta, x, y): p = invlogit(theta[0] + theta[1] * x) # Return negative of log-likelihood return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p)) """ Explanation: So, now our model is: <div style="font-size: 120%;"> $$\text{logit}(p_i) = \beta_0 + \beta_1 x_i + \epsilon_i$$ </div> We can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli model is: <div style="font-size: 120%;"> $$L(y|p) = \prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$ </div> which, on the log scale is: <div style="font-size: 120%;"> $$l(y|p) = \sum_{i=1}^n y_i \log(p_i) + (1-y_i)\log(1-p_i)$$ </div> We can easily implement this in Python, keeping in mind that fmin minimizes, rather than maximizes functions: End of explanation """ x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T """ Explanation: Remove null values from variables (a bad idea, which we will show later) ... End of explanation """ b0, b1 = fmin(logistic_like, [0.5,0], args=(x,y)) b0, b1 jitter = np.random.normal(scale=0.01, size=len(x)) plt.plot(x, y+jitter, 'r.', alpha=0.3) plt.yticks([0,.25,.5,.75,1]) xvals = np.linspace(0, 600) plt.plot(xvals, invlogit(b0+b1*xvals)) """ Explanation: ... and fit the model. 
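Before fitting, a quick numerical sanity check is reassuring: logit and invlogit should be inverses, and logistic_like should score a slope of the right sign better (lower negative log-likelihood) than a reversed one. Restating the helpers above so the check is self-contained:

```python
import numpy as np

invlogit = lambda x: 1. / (1 + np.exp(-x))
logit = lambda p: np.log(p / (1. - p))

def logistic_like(theta, x, y):
    p = invlogit(theta[0] + theta[1] * x)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

x_toy = np.array([-2., -1., 1., 2.])
y_toy = np.array([0., 0., 1., 1.])             # outcome rises with x
good = logistic_like([0., 2.], x_toy, y_toy)   # slope of the right sign
bad = logistic_like([0., -2.], x_toy, y_toy)   # slope reversed
```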
End of explanation """ logistic = linear_model.LogisticRegression() logistic.fit(x[:, np.newaxis], y) logistic.coef_ """ Explanation: As with our least squares model, we can easily fit logistic regression models in scikit-learn, in this case using the LogisticRegression. End of explanation """ # Write your answer here """ Explanation: Exercise: multivariate logistic regression Which other variables might be relevant for predicting the probability of surviving the Titanic? Generalize the model likelihood to include 2 or 3 other covariates from the dataset. End of explanation """