Compute loss. With the logits, we can compute the loss:
labels = np.random.randint(num_classes, size=(batch_size))
loss = modeling.losses.weighted_sparse_categorical_crossentropy_loss(
    labels=labels, predictions=tf.nn.log_softmax(logits, axis=-1))
print(loss)
official/colab/nlp/nlp_modeling_library_intro.ipynb
tombstone/models
apache-2.0
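The library call above hides the arithmetic. As a rough sketch of what a sparse categorical cross-entropy over log-softmax outputs computes, here is a plain-NumPy version (the function name, shapes, and sample values are illustrative, not the modeling library's API):

```python
import numpy as np

def sparse_categorical_crossentropy(labels, logits):
    """Mean negative log-likelihood of the true class, from raw logits."""
    # log-softmax, computed stably by subtracting the max logit per row
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # pick out the log-probability of each example's true class
    nll = -log_probs[np.arange(len(labels)), labels]
    return nll.mean()

logits = np.array([[2.0, 0.5, 0.1], [0.2, 3.0, 0.4]])
labels = np.array([0, 1])
loss = sparse_categorical_crossentropy(labels, logits)
```

With uniform logits the loss reduces to log(num_classes), a quick sanity check for any implementation.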
n=2
df = run(N[0])
df
2015_Fall/MATH-578B/Homework5/Homework5.ipynb
saketkc/hatex
mit
n=3
df = run(N[1])
df
n=4
count_dict = run(N[2])
count_dict
N=100
df = run(N[3], show=False)
$$\sum_{s \in S_a}\pi(s)c(s)=E[c(s)]$$ and similarly, $$\sum_{s \in S_a}\pi(s)c^2(s)=E[c^2(s)]=Var(c(s))+E^2[c(s)]$$
expectation = sum(df[r'Simulated $\pi(s)$'] * df['c(s)'])
expectation2 = sum(df[r'Simulated $\pi(s)$'] * df['c(s)'] * df['c(s)'])
t_expectation = np.mean(df['c(s)'])
t_expectation2 = np.var(df['c(s)']) + np.mean(df['c(s)'])**2
print('Simulated E[c(s)] = {}\t\t Theoretical (MoM) E[c(s)] = {}'.format(expectation, t_expectation))
p...
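The identities above can be checked numerically on a toy sample: the empirical $\pi(s)$ is just each state's frequency, so the weighted sums reduce to the sample mean and to the variance plus the squared mean (a small illustrative sketch, not the homework's data):

```python
import numpy as np

# Toy sample of cycle counts c(s); the empirical pi(s) is each state's frequency
samples = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 5])
states, counts = np.unique(samples, return_counts=True)
pi = counts / counts.sum()      # empirical pi(s)
c = states.astype(float)        # c(s) for each state

e_c = np.sum(pi * c)            # sum_s pi(s) c(s)   -> E[c(s)]
e_c2 = np.sum(pi * c**2)        # sum_s pi(s) c(s)^2 -> Var(c(s)) + E[c(s)]^2
```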
I use the method of moments (MoM) to calculate the theoretical values, and they agree with the simulated values. The agreement holds even though MoM is only a first approximation, because the sample size is large enough to capture the dynamics of the population distribution.
cycles = df['c(s)']
plt.hist(cycles, density=True)
Problem 3 Part (A)
def run():
    N = 1000
    N_iterations = 200
    chrom_length = 3 * (10**9)
    transposon_length = 3 * 1000
    mu = 0.05
    t_positions = []
    n_initial = np.random.random_integers(N - 1)
    x_initial = np.random.random_integers(chrom_length - 1)
    offspring_positions = []
    all_positions = [[] for t in range(N)]...
The above example shows one case in which "the transposon does not spread".
all_t_count = run()
nondie_out_transposons = all_t_count
fig, axs = plt.subplots(2, 2)
axs[0][0].plot([x[0] for x in nondie_out_transposons])
axs[0][0].set_title('No. of Transposons vs. Generations')
axs[0][0].set_xlabel('Generations')
axs[0][0].set_ylabel('No. of Transposons')
axs[0][1].plot([x[1] for x in die_out_tr...
The above example shows one case in which "the transposon does spread": the growth rate is exponential while the common transposons remain limited. Part (B) Treating the total number of transposons $N(t)$ at any time $t$ as a branching process, $N(t+1) = \sum_{i=1}^{N(t)} W_{t,i}$, where $W_{t,i}$ is the number o...
from math import log
N = 10**7
mu = 0.01
L = 3 * (10**9)
t = log(0.1 * N * L * L / 2) / log(1 + mu)
print(t)
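As an illustrative sketch of the branching-process picture (assuming, purely for illustration, Poisson offspring with mean $1+\mu$), a small simulation shows the mean count tracking $(1+\mu)^t$:

```python
import numpy as np

def simulate_branching(n0, mu, generations, trials, seed=0):
    """Branching process where each individual leaves Poisson(1 + mu) offspring."""
    rng = np.random.default_rng(seed)
    finals = np.empty(trials)
    for k in range(trials):
        n = n0
        for _ in range(generations):
            # each of the n current individuals draws its own offspring count
            n = rng.poisson(1.0 + mu, size=n).sum() if n > 0 else 0
        finals[k] = n
    return finals.mean()

mu, t = 0.05, 5
mean_n = simulate_branching(n0=1, mu=mu, generations=t, trials=4000)
expected = (1 + mu) ** t   # E[N(t)] = N(0) * (1 + mu)^t
```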
1. Introduction Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression, and outlier detection. The advantages of support vector machines are: Effective in high-dimensional spaces. Still effective in cases where the number of dimensions is greater than the number of samp...
X = [[0, 0], [1, 1]]
y = [0, 1]
clf = svm.SVC()
clf.fit(X, y)
C4.Classification_SVM/.ipynb_checkpoints/SupportVectorMachines-checkpoint.ipynb
ML4DS/ML4all
mit
After being fitted, the model can then be used to predict new values:
clf.predict([[2., 2.]])
An SVM's decision function depends on a subset of the training data, called the support vectors. Some properties of these support vectors can be found in the attributes support_vectors_, support_ and n_support_:
# get support vectors
print(clf.support_vectors_)
# get indices of support vectors
print(clf.support_)
# get number of support vectors for each class
print(clf.n_support_)
3.2.1 Example: Iris Dataset. As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of ...
# Adapted from a notebook by Jason Brownlee
def loadDataset(filename, split):
    xTrain = []
    cTrain = []
    xTest = []
    cTest = []
    with open(filename, 'r') as csvfile:
        lines = csv.reader(csvfile)
        dataset = list(lines)
        for i in range(len(dataset)-1):
            for y in range(4):
                ...
Now, we select two classes and two attributes.
# Select attributes
i = 0  # Try 0,1,2,3
j = 1  # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][i] for i in ind] for n in range(nTrain_all) if cTrain_all[n]==c0 or cT...
3.2.2. Data normalization Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show fewer instabilities and convergence problems when data are normalized. We will define a norm...
def normalize(X, mx=None, sx=None):
    # Compute means and standard deviations
    if mx is None:
        mx = np.mean(X, axis=0)
    if sx is None:
        sx = np.std(X, axis=0)
    # Normalize
    X0 = (X - mx) / sx
    return X0, mx, sx
Now, we can normalize training and test data. Observe in the code that the same transformation must be applied to training and test data; this is why the test data are normalized using the means and variances computed on the training set.
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)
The following code generates a plot of the normalized training data.
# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {...
In order to apply the gradient descent rule, we need to define two methods:
- A fit method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A predict method, that receives the model weights and a set of inputs, and returns the posteri...
def logregFit(Z_tr, Y_tr, rho, n_it):
    # Data dimension
    n_dim = Z_tr.shape[1]
    # Initialize variables
    nll_tr = np.zeros(n_it)
    pe_tr = np.zeros(n_it)
    w = np.random.randn(n_dim, 1)
    # Running the gradient descent algorithm
    for n in range(n_it):
        # Compute posterior probabili...
We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\intercal)^\intercal$.
# Parameters of the algorithms
rho = float(1)/50   # Learning step
n_it = 200          # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Run...
3.2.3. Free parameters Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e. as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors: Number of iter...
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy / 400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
                     np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy...
3.2.5. Polynomial Logistic Regression The error rates of the logistic regression model can potentially be reduced by using polynomial transformations. To compute the polynomial transformation up to a given degree, we can use the PolynomialFeatures method in sklearn.preprocessing.
# Parameters of the algorithms
rho = float(1)/50   # Learning step
n_it = 500          # Number of iterations
g = 5               # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:...
And now let's test whether the Basemap package is loaded and the graphics display correctly.
map = Basemap(projection='ortho', lat_0=50, lon_0=-100,
              resolution='l', area_thresh=1000.0)
map.drawcoastlines()
plt.show()
maps-ipython.ipynb
bobbyangelov/ipython-notebooks
mit
Now to the cool part!
import csv

# Open the cities population data file.
filename = 'city_longlat.csv'

# Create empty lists for the latitudes, longitudes and population.
lats, lons, population = [], [], []

# Read through the entire file, skip the first line,
# and pull out the data we need.
with open(filename) as f:
    # Create a csv r...
1) Document In the case of the text that we just imported, each entry in the list is a "document"--a single body of text, hopefully with some coherent meaning.
print("One document: \"{}\"".format(documents[0]))
language-processing-vocab/language_processing_vocab.ipynb
fionapigott/Data-Science-45min-Intros
unlicense
2) Tokenization We split each document into smaller pieces ("tokens") in a process called tokenization. Tokens can be counted, and most importantly, compared between documents. There are potentially many different ways to tokenize text--splitting on spaces, removing punctuation, dividing the document into n-character p...
from nltk.stem import porter
from nltk.tokenize import TweetTokenizer
# tokenize the documents
# find good information on tokenization:
# https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html
# find documentation on pre-made tokenizers and options here:
# http://www.nltk.org/api/nltk.tokenize.html
tknz...
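To make the choice concrete, here is a toy comparison of naive space-splitting against a small regex tokenizer (a hand-rolled sketch; the notebook itself uses NLTK's TweetTokenizer):

```python
import re

def simple_tokenize(doc):
    """Lowercase, strip punctuation, and split on word characters."""
    return re.findall(r"[a-z0-9']+", doc.lower())

doc = "I love NLP -- tokenization, stemming, & all that!"
naive = doc.split()            # splitting on spaces keeps punctuation attached
tokens = simple_tokenize(doc)  # punctuation removed, case folded
```

Note how the naive split leaves "tokenization," and "that!" as distinct tokens from their clean forms, which would fragment counts across documents.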
3) Text corpus The text corpus is a collection of all of the documents (Tweets) that we're interested in modeling. Topic modeling and/or clustering on a corpus tends to work best if that corpus has some similar themes--this will mean that some tokens overlap, and we can get signal out of when documents share (or do not...
# number of documents in the corpus
print("There are {} documents in the corpus.".format(len(documents)))
4) Stop words: Stop words are simply tokens that we've chosen to remove from the corpus, for any reason. In English, removing words like "and", "the", "a", "at", and "it" are common choices for stop words. Stop words can also be edited per project requirement, in case some words are too common in a particular dataset t...
from nltk.corpus import stopwords

stopset = set(stopwords.words('english'))
print("The English stop words list provided by NLTK: ")
print(stopset)

stopset.update(["twitter"])  # add token
stopset.remove("i")          # remove token
print("\nAdd or remove stop words from the set: ")
print(stopset)
5) Vectorize: Transform each document into a vector. There are several good choices that you can make about how to do this transformation, and I'll talk about each of them in a second. In order to vectorize documents in a corpus (without any dimensional reduction around the vocabulary), think of each document as a row ...
# we're going to use the vectorizer functions that scikit-learn provides
# define the tokenizer that we want to use
# must be a callable function that takes a document and returns a list of tokens
tknzr = TweetTokenizer(reduce_len=True)
stemmer = porter.PorterStemmer()
def myTokenizer(doc):
    return [stemmer.stem(...
Bag of words Taking all the words from a document, and sticking them in a bag. Order does not matter, which could cause a problem. "Alice loves cake" might have a different meaning than "Cake loves Alice." Frequency Counting the number of times a word appears in a document. Tf-Idf (term frequency inverse document frequ...
# documentation on this scikit-learn function here:
# http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html
tfidf_vectorizer = TfidfVectorizer(tokenizer=myTokenizer, stop_words=stopset)
tfidf_vectorized_documents = tfidf_vectorizer.fit_transform(documents)
tfidf_vect...
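A hand-rolled sketch of tf-idf on toy documents makes the weighting concrete (this uses the plain tf × log(N/df) formula; scikit-learn's TfidfVectorizer applies smoothed idf and normalization, so its numbers differ):

```python
import math
from collections import Counter

docs = [["cake", "loves", "alice"],
        ["alice", "loves", "cake", "cake"],
        ["bob", "hates", "cake"]]

def tf_idf(docs):
    """Raw term frequency times log(N / document frequency), one dict per document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

vectors = tf_idf(docs)
```

"cake" appears in every document, so its idf is log(1) = 0 and it carries no weight, while "bob" (appearing in one document) gets the full log(3).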
To begin exploring the data we took a sample of the SEER data, defined the features and dependent variable, printed the top few lines to ensure a successful data ingest, and ran descriptive statistics.
FEATURES = [
    "Birth Year", "Age at Diagnosis", "Race", "Origin", "laterality",
    "Radiation", "Histrec", "ER Status", "PR Status", "Behanal",
    "Stage", "Numprimes", "Survival Time", "Bucket"
]
LABEL_MAP = {
    0: "< 60 Months",
    1: "60 < months > 120",
    2: "...
CapstoneSEER/SEER Data Analysis Phase 3- Data Exploration.ipynb
georgetown-analytics/envirohealth
mit
Next we checked our data types and determined the frequency of each class.
print (df.groupby('Bucket')['Bucket'].count())
We used a histogram to see the distribution of survival time in months
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(df['Survival Time'], bins=10,
        range=(df['Survival Time'].min(), df['Survival Time'].max()))
plt.title('Survival Time Distribution')
plt.xlabel('Survival Time (months)')
plt.ylabel('Frequency')
plt.show()
Next we played around with a few visualizations to get a better understanding of the data.
scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()
plt.figure(figsize=(12, 12))
parallel_coordinates(df, 'Bucket')
plt.show()
The plot below shows a lot of overlap between the 3 classes, which suggests that classification models may not perform well. However, the plot also shows a clearer separation along the birth year and age at diagnosis features.
plt.figure(figsize=(12, 12))
radviz(df, 'Bucket')
plt.show()
Next we moved to creating survival charts using a larger sample size. We created a class with a "plot_survival" function. For the graph we picked variables that the scientific literature finds significant-- Stage, ER status, PR status, age, and radiation treatment. The second plot compares the frequency of survival for...
class ExploreSeer(MasterSeer):
    def __init__(self, path=r'./data/', testMode=False, verbose=True, sample_size=5000):
        # user supplied parameters
        self.testMode = testMode  # import one file, 500 records and return
        self.verbose = verbose    # prints status messages
        self.sam...
With 5 latent states:
with bayesianpy.data.DataSet(df, f, logger) as dataset:
    tpl = bayesianpy.template.MixtureNaiveBayes(logger, continuous=df, latent_states=5)
    network = tpl.create(bayesianpy.network.NetworkFactory(logger))
    model = bayesianpy.model.NetworkModel(network, logger)
    model.train(dataset.subset(train.index.toli...
examples/notebook/diabetes_non_linear_regression.ipynb
morganics/bayesianpy
apache-2.0
Finally, with 10 latent states:
with bayesianpy.data.DataSet(df, f, logger) as dataset:
    tpl = bayesianpy.template.MixtureNaiveBayes(logger, continuous=df, latent_states=10)
    network = tpl.create(bayesianpy.network.NetworkFactory(logger))
    model = bayesianpy.model.NetworkModel(network, logger)
    model.train(dataset.subset(train.index.tol...
<h2> Get the xcms feature table </h2>
### Subdivide the data into a feature table
local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/'
data_path = local_path + '/revo_healthcare/data/processed/MTBLS315/'\
    'uhplc_pos/xcms_result_4.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
# convert column names ...
notebooks/MTBLS315/exploratory/auc_vs_rt_window_plots.ipynb
irockafe/revo_healthcare
mit
<h2> Get mapping between samples names, class names, and file names </h2>
# Get mapping between sample name and assay names
path_sample_name_map = (local_path + 'revo_healthcare/data/raw/' +
                        'MTBLS315/metadata/a_UPLC_POS_nmfi_and_bsi_diagnosis.txt')
# Sample name to Assay name
# Index is the sample name
# value we want is the Assay name
sample_df = pd.read_csv(path_sample_na...
<h2> Preprocess feature table </h2> Remove systematic intensity biases between samples and fill NaN values.
# fill nan values with 1/2 the minimum from each sample
fill_val = X_df_raw.min(axis=1) / 2.0
# must transpose, b/c fillna only operates along columns
X_df_filled = X_df_raw.T.fillna(value=fill_val).T
X_pqn_df_raw = preproc.correct_dilution_factor(X_df_raw, plot=True)
X_pqn_df_filled = preproc.correct_dilution_facto...
<h2> Check out the intensity distributions and MW p-values </h2> (Warning: the MW p-values assume asymptotic normality, so they are probably a bit off. Don't rely on them for important conclusions, but as a spot check they should be okay.)
# Do mann-whitney on case vs control
def mw_pval_dist(case, control):
    '''
    case - dataframe containing case samples
    control - dataframe with control samples
    All should have same features (columns)
    '''
    # get parametric pvals
    mann_whitney_vals = pd.DataFrame(np.full([case.shape[1], 2], np.nan),
    ...
<h2> Use the raw values; they appear closer to the case values than the dilution-factor normalized values </h2> <h2> Run rt-window classifiers and capture the AUC distributions to plot </h2>
# Add back mz, rt, etc. columns to feature table and reshape it to be
# (feats x samples)
X_df_filled_mzrt = pd.concat([df[not_samples].T, X_pqn_df_filled], axis=0).T

# run a sliding window
# Make sliding window
min_val = 0
max_val = df['rt'].max()
width = max_val / 5
step = width / ...
Import Census population projections:
- Projection from 2014
- Historical estimates from 2010 to 2014
- Historical estimates from 2000 to 2010
# projection 2014+
pop_projection = df.from_csv("NP2014_D1.csv", index_col='year')
pop_projection = pop_projection[(pop_projection.sex == 0) & (pop_projection.race == 0) &
                                (pop_projection.origin == 0)]
pop_projection = pop_projection.drop(['sex', 'race', 'origin'], axis=1)
pop_projection = pop_projection.drop(pop_projec...
docs/notebooks/Stage I.ipynb
talumbau/taxdata
mit
Import CBO baseline
cbo_baseline = (df.from_csv("CBO_baseline.csv", index_col=0)).transpose()
cbo_baseline.index = index
Stage_I_factors['AGDPN'] = df(cbo_baseline.GDP/cbo_baseline.GDP[2008], index=index)
Stage_I_factors['ATXPY'] = df(cbo_baseline.TPY/cbo_baseline.TPY[2008], index=index)
Stage_I_factors['ASCHF'] = df(cbo_baseline.SCH...
Import IRS number of returns projection
irs_returns = (df.from_csv("IRS_return_projection.csv", index_col=0)).transpose()
return_growth_rate = irs_returns.pct_change() + 1
return_growth_rate.Returns['2023'] = return_growth_rate.Returns['2022']
return_growth_rate.Returns['2024'] = return_growth_rate.Returns['2022']
return_growth_rate.Returns.index = index
Import SOI estimates (2008 - 2012). Tax-Calculator uses the 2008 PUF.
soi_estimates = (df.from_csv("SOI_estimates.csv", index_col=0)).transpose()
historical_index = list(range(2008, 2013))
soi_estimates.index = historical_index
return_projection = soi_estimates
for i in range(2012, 2024):
    Single = return_projection.Single[i]*return_growth_rate.Returns[i+1]
    Joint = return_projectio...
Encoding the words The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network. Exercise: Now you're going to en...
# Create your dictionary that maps vocab words to integers here
vocab_to_int = {word: i+1 for i, word in enumerate(set(words))}

# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for each in reviews:
    reviews_ints.append([vocab_to_int[word] for word in each.split()])
sentiment-rnn/Sentiment_RNN.ipynb
adico-somoto/deep-learning
mit
Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively.
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words. Exercise: First, remove...
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = [labels[ii] for ii in non_zero_idx]
Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'],...
seq_len = 200
features = np.zeros([len(reviews_ints), seq_len], dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets. Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fractio...
split_frac = 0.8
data_len = len(features)
train_len = int(data_len * split_frac)
test_len = data_len - train_len
train_x, val_x = features[:train_len], features[train_len:]
train_y, val_y = labels[:train_len], labels[train_len:]
val_x_len = int(len(val_x) * 0.5)
val_x, test_x = val_x[:val_x_len], val_x[val_x_len:]...
For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. l...
n_words = len(vocab_to_int)

# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Embedding Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that...
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300

with graph.as_default():
    embedding = tf.Variable(tf.random_uniform([n_words, embed_size], -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        # Initialize we...
DLND Your first neural network.ipynb
brennick/project-1-first-neural-network
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
# import sys
### Set the hyperparameters here ###
iterations = 15000
learning_rate = 0.1
hidden_nodes = 6
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a rando...
generate new subhalo and central subhalo catalogs
sig = 0.0
smf = 'li-march'
nsnap0 = 20
subhist = Cat.SubhaloHistory(sigma_smhm=sig, smf_source=smf, nsnap_ancestor=nsnap0)
subhist.Build()
subhist._CheckHistory()
centralms/notebooks/notes_catalog.ipynb
changhoonhahn/centralMS
mit
generate central subhalo accretion history catalogs
censub = Cat.PureCentralHistory(sigma_smhm=sig, smf_source=smf, nsnap_ancestor=nsnap0)
censub.Build()
censub.Downsample()
<h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then swi...
# TODO: write a BigQuery query for the above fields
# Store it into a Pandas dataframe named "trips" that contains about 10,000 records.
courses/machine_learning/deepdive/02_generalization/labs/create_datasets.ipynb
turbomanage/training-data-analyst
apache-2.0
<h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
ax = sns.regplot(x = "trip_distance", y = "fare_amount", ci = None, truncate = True, data = trips)
Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer...
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2014-05-20 23:09:00']
Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to b...
trips.describe()
Hmm ... The min, max of longitude look strange. Finally, let's actually look at the start and end of a few of the trips.
def showrides(df, numlines):
    import matplotlib.pyplot as plt
    lats = []
    lons = []
    goodrows = df[df['pickup_longitude'] < -70]
    for iter, row in goodrows[:numlines].iterrows():
        lons.append(row['pickup_longitude'])
        lons.append(row['dropoff_longitude'])
        lons.append(None)
        lats.append(row['pickup_lat...
As you'd expect, rides that involve a toll are longer than the typical ride. <h3> Quality control and other preprocessing </h3> We need to do some clean-up of the data: <ol> <li>New York city longitudes are around -74 and latitudes are around 41.</li> <li>We shouldn't have zero passengers.</li> <li>Clean up the total_...
def sample_between(a, b):
    basequery = """
    SELECT
      (tolls_amount + fare_amount) AS fare_amount,
      pickup_longitude AS pickuplon,
      pickup_latitude AS pickuplat,
      dropoff_longitude AS dropofflon,
      dropoff_latitude AS dropofflat,
      passenger_count*1.0 AS passengers
    FROM `nyc-tlc.yellow.trips`
    WHERE trip_distanc...
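Those same QC rules can be sketched as a row predicate (the thresholds and field names here are illustrative, not the lab's exact query):

```python
def is_valid_trip(row):
    """QC rules sketched from the checklist above (thresholds are illustrative)."""
    return (-75 < row["pickup_longitude"] < -73     # NYC longitudes near -74
            and 40 < row["pickup_latitude"] < 42   # NYC latitudes near 41
            and row["passengers"] > 0              # no zero-passenger trips
            and row["fare_amount"] > 0)            # cleaned-up total must be positive

trips = [
    {"pickup_longitude": -74.0, "pickup_latitude": 40.7, "passengers": 2, "fare_amount": 12.5},
    {"pickup_longitude": 0.0, "pickup_latitude": 0.0, "passengers": 1, "fare_amount": 8.0},
    {"pickup_longitude": -73.9, "pickup_latitude": 40.8, "passengers": 0, "fare_amount": 5.0},
]
clean = [t for t in trips if is_valid_trip(t)]
```

In the actual pipeline these conditions belong in the BigQuery WHERE clause, so invalid rows never reach the sampled CSVs.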
We have 3 .csv files corresponding to train, valid, test. The ratio of file sizes corresponds to our split of the data.
!head taxi-train.csv
Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them. <h3> Benchmark </h3> Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark. My model is going to be to simply divide the mean fare_amount by...
from google.cloud import bigquery import pandas as pd import numpy as np import shutil def distance_between(lat1, lon1, lat2, lon2): # Haversine formula to compute distance "as the crow flies". Taxis can't fly of course. dist = np.degrees(np.arccos(np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.r...
courses/machine_learning/deepdive/02_generalization/labs/create_datasets.ipynb
turbomanage/training-data-analyst
apache-2.0
Coefficient of thermal expansion of three metals (units: / &#176;C) Sample | Aluminum | Copper | Steel ----- | ------------- | ------------- | ------------- 1 | 6.4e-5 | 4.5e-5 | 3.3e-5 2 | 3.01e-5 | 1.97e-5 | 1.21e-5 3 | 2.36e-5 | 1.6e-5 | 0.9e-5 4 | 3.0e-5 | 1.97e-5 | 1.2e-5 5 | 7.0e-5 | 4.0e-5 | 1....
# Enter in the raw data aluminum = np.array([6.4e-5 , 3.01e-5 , 2.36e-5, 3.0e-5, 7.0e-5, 4.5e-5, 3.8e-5, 4.2e-5, 2.62e-5, 3.6e-5]) copper = np.array([4.5e-5 , 1.97e-5 , 1.6e-5, 1.97e-5, 4.0e-5, 2.4e-5, 1.9e-5, 2.41e-5 , 1.85e-5, 3.3e-5 ]) steel = np.array([3.3e-5 , 1.2e-5 , 0.9e-5, 1.2e-5, 1.3e-5, 1.6e-5, 1.4e-5, 1.58e...
content/code/error_bars/bar_chart_with_matplotlib.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
Now it's time to build the plot. We are going to build a bar chart with three different bars, one bar for each material: Aluminum, Copper and Steel. First we will create a figure object called fig and an axis object in that figure called ax using matplotlib's plt.subplots() function. Everything in our plot will be add...
# Build the plot fig, ax = plt.subplots() ax.bar(x_pos, CTEs, align='center', alpha=0.5) ax.set_ylabel('Coefficient of Thermal Expansion ($\degree C^{-1}$)') ax.set_xticklabels(materials) ax.set_title('Coefficent of Thermal Expansion (CTE) of Three Metals') ax.yaxis.grid(True)
content/code/error_bars/bar_chart_with_matplotlib.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
Now that we can load and play wav files we actually need some wav files! I found the sounds from Starcraft to be a great source of wav files because they're short, interesting and remind me of my childhood. You can download Starcraft wav files here: http://wavs.unclebubby.com/computer/starcraft/ among other places on t...
# change to the shogun-data directory import os os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica')) %matplotlib inline import pylab as pl # load fs1,s1 = load_wav('tbawht02.wav') # Terran Battlecruiser - "Good day, commander." # plot pl.figure(figsize=(6.75,2)) pl.plot(s1) pl.title('Signal 1') pl.show() # player wavPlay...
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
Now let's load a second audio clip:
# load fs2,s2 = load_wav('TMaRdy00.wav') # Terran Marine - "You want a piece of me, boy?" # plot pl.figure(figsize=(6.75,2)) pl.plot(s2) pl.title('Signal 2') pl.show() # player wavPlayer(s2, fs2)
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
and a third audio clip:
# load fs3,s3 = load_wav('PZeRdy00.wav') # Protoss Zealot - "My life for Aiur!" # plot pl.figure(figsize=(6.75,2)) pl.plot(s3) pl.title('Signal 3') pl.show() # player wavPlayer(s3, fs3)
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
Now we've got our audio files loaded up into our example program. The next thing we need to do is mix them together! First another nuance - what if the audio clips aren't the same lenth? The solution I came up with for this was to simply resize them all to the length of the longest signal, the extra length will just be...
# Adjust for different clip lengths fs = fs1 length = max([len(s1), len(s2), len(s3)]) s1 = np.resize(s1, (length,1)) s2 = np.resize(s2, (length,1)) s3 = np.resize(s3, (length,1)) S = (np.c_[s1, s2, s3]).T # Mixing Matrix #A = np.random.uniform(size=(3,3)) #A = A / A.sum(axis=0) A = np.array([[1, 0.5, 0.5], ...
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
Now before we can work on separating these signals we need to get the data ready for Shogun, thankfully this is pretty easy!
from shogun import features # Convert to features for shogun mixed_signals = features((X).astype(np.float64))
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
Now lets unmix those signals! In this example I'm going to use an Independent Component Analysis (ICA) algorithm called JADE. JADE is one of the ICA algorithms available in Shogun and it works by performing Aproximate Joint Diagonalization (AJD) on a 4th order cumulant tensor. I'm not going to go into a lot of detail o...
from shogun import Jade # Separating with JADE jade = Jade() signals = jade.apply(mixed_signals) S_ = signals.get_real_matrix('feature_matrix') A_ = jade.get_real_matrix('mixing_matrix') A_ = A_ / A_.sum(axis=0) print('Estimated Mixing Matrix:') print(A_)
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
Thats all there is to it! Check out how nicely those signals have been separated and have a listen!
# Show separation results # Separated Signal i gain = 4000 for i in range(S_.shape[0]): pl.figure(figsize=(6.75,2)) pl.plot((gain*S_[i]).astype(np.int16)) pl.title('Separated Signal %d' % (i+1)) pl.show() wavPlayer((gain*S_[i]).astype(np.int16), fs)
doc/ipython-notebooks/ica/bss_audio.ipynb
sorig/shogun
bsd-3-clause
US EPA ChemView web services The documentation lists several ways of accessing data in ChemView.
URIBASE = 'http://java.epa.gov/chemview/'
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
Getting 'chemicals' data from ChemView As a start... this downloads data for all chemicals. Let's see what we get.
uri = URIBASE + 'chemicals' r = requests.get(uri, headers = {'Accept': 'application/json, */*'}) j = json.loads(r.text) print(len(j)) df = DataFrame(j) df.tail() # Save this dataset so that I don't have to re-request it again later. df.to_pickle('../data/chemicals.pickle') df = pd.read_pickle('../data/chemicals.pi...
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
Data wrangling
# want to interpret 'None' as NaN def scrub_None(x): s = str(x).strip() if s == 'None' or s == '': return np.nan else: return s for c in list(df.columns)[:-1]: df[c] = df[c].apply(scrub_None) df.tail()
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
How many unique CASRNs, PMN numbers?
# CASRNS len(df.casNo.value_counts()) # PMN numbers len(df.pmnNo.value_counts())
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
What's in 'synonyms'?
DataFrame(df.loc[4,'synonyms'])
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
How many 'synonyms' for each entry?
df.synonyms.apply(len).describe()
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
Do the data objects in synonyms all have the same attributes?
def getfields(x): k = set() for d in x: j = set(d.keys()) k = k | j return ','.join(sorted(k)) df.synonyms.apply(getfields).head() len(df.synonyms.apply(getfields).value_counts())
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
All of the synonyms fields contain a variable number of objects with a uniform set of fields. Tell me more about those items with PMN numbers...
pmns = df.loc[df.pmnNo.notnull()] pmns.head()
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
Are there any that have CASRN too? ... No.
len(pmns.casNo.dropna())
notebooks/chemview-test.ipynb
akokai/chemviewing
unlicense
expectations are assertions about data
train.expect_column_values_to_be_between("Age", 0,80) train.expect_column_values_to_be_in_set('Survived', [1, 0]) train.expect_column_mean_to_be_between("Age", 20,40) train.expect_column_values_to_match_regex('Name', '[A-Z][a-z]+(?: \([A-Z][a-z]+\))?, ', mostly=.95) train.expect_column_values_to_be_in_set("Sex", ["male...
Lectures/data_quality/Great Expectations.ipynb
eyaltrabelsi/my-notebooks
mit
Great Expectations Validation in Your pipeline
# ! great_expectations init context = ge.data_context.DataContext() context.list_expectation_suite_names() expectation_suite_name = " " batch_kwargs = {'path': "https://github.com/plotly/datasets/raw/master/titanic.csv", 'datasource': "titanic"} batch = context.get_batch(batch_kwargs, my_expectations)...
Lectures/data_quality/Great Expectations.ipynb
eyaltrabelsi/my-notebooks
mit
Test as data documentations your docs are your tests and your tests are your docs
context.build_data_docs() context.open_data_docs()
Lectures/data_quality/Great Expectations.ipynb
eyaltrabelsi/my-notebooks
mit
Arithmetic Operations Python implements seven basic binary arithmetic operators, two of which can double as unary operators. They are summarized in the following table: | Operator | Name | Description | |--------------|----------------|---------------------------...
# addition, subtraction, multiplication (4 + 8) * (6.5 - 3)
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Strings in Python 2 and 3 ```Python Python 2 print type("Hello World!") <type 'str'> this is a byte string print type(u"Hello World!") <type 'unicode'> this is a Unicode string ``` ```Python Python 3 print(type("Hello World!")) <class 'str'> this is a Unicode string print(type(b"Hello World!")) <class 'bytes'> this is...
x = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Definition It is defined with parentheses : [xx,xx,xx]. Get Element The elements are called using a square bracket with an index starting from zero : x[y], 0..N. Slice (sub-array) You can slice the array using colon, in this case a[start:end] means items start up to end-1.
print(x) print(x[0])
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
A single colon a[:] means a copy of the whole array. a[start:] return tuple of items start through the rest of the array. a[:end]return tuple of items from the beginning through end-1.
print(x[1:2]) print(x[:]) print(x[:2]) print(x[1:])
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
more interestingly, they have negative index a[-1] means last item in the array a[-2:] means last two items in the array a[:-2] means everything except the last two items
print(x[-1]) print(x[-2]) print(x[-2:]) print(x[:-2])
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
You may reversed a list with xxx[::-1].
print(x[::-1])
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Concatenate You may add up two list or we say concatenate, and multiply to duplicate the items.
print(x + [0,1]) print([0,1] + x) print([0,1] * 5)
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Sorting You may sort a list with sorted(x). Noted that it returns a new list.
print(x[::-1]) y = sorted(x[::-1]) print(y)
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Add element (append); Remove element (pop); Insert element (insert) These functions are modified in-place, i.e. the original list will be changed
print(x) x.append('A') print(x) print(x) x.insert(5,'B') # insert 'B' between x[4] and x[5], results in x[5] = 'B' print(x) print(x); x.pop(5); # Removed the x[5] item and return it print(x); x.pop(-1); # Removed the last item and return it print(x)
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Tuple A Python tuple is similar to a list. The elements are in order but fixed once they are created. In other words, they are immutable. The tuple can store differently type of elements. Definition It is defined with parentheses : (xx,xx,xx). Get Element The elements are called using a square bracket with an index s...
corr = (22.28552, 114.15769) print(corr) corr[0] = 10
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Dictionary Dictionary is more flexible than list and its index is a string, it is defined with curly bracket: data = {'k1' : y1 , 'k2' : y2 , 'k3' : y3 } k1, k2, k3 are called keys while y1,y2 and y3 are elements. Creating an empty dictionary It is defined with a pair of curly bracket or the dict() fuction: data = {} ...
# Creating an empty dictionary location = {} print(location) # Defined with a curly bracket location = { 'Berlin': (52.5170365, 13.3888599), 'London': (51.5073219, -0.1276474), 'Sydney': (-33.8548157, 151.2164539), 'Tokyo': (34.2255804, 139.294774527387), 'Pa...
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Extra reading:
### More on slicing in list and tuple start=2 end=5 step=2 print("Original:", x) print("items start through end-1 :", x[start:end]) # items start through end-1 print("items start through the rest of the array :", x[start:]) # items start through the rest of the array print("items from the beginning through end-1 :"...
notebooks/02-Python-Data-Structures.ipynb
ryan-leung/PHYS4650_Python_Tutorial
bsd-3-clause
Bulk Hamiltonian with wraparound In this first example, we compare the bulk Hamiltonian from TBmodels with that of the model in kwant, using wraparound.
model = tbmodels.Model.from_wannier_files(hr_file='data/wannier90_hr.dat')
examples/kwant_interface/kwant_interface_demo.ipynb
Z2PackDev/TBmodels
apache-2.0