The ones below should fail, because they fall outside the prior:
print(lpost([-0.1, 0.1, 0.1]))
print(lpost([0.1, -0.1, 0.1]))
print(lpost([0.1, 0.1, -0.1]))
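Since `lpost` itself is defined earlier in the notebook and not shown in this excerpt, here is a minimal sketch of a log-posterior with a flat positivity prior that reproduces the fail behaviour above; the quadratic log-likelihood is purely illustrative, not the notebook's model:

```python
import numpy as np

def lpost(pars):
    """Toy log-posterior (illustrative stand-in for the notebook's lpost):
    a flat prior that requires all parameters to be positive."""
    pars = np.asarray(pars, dtype=float)
    if np.any(pars <= 0):
        return -np.inf  # outside the prior: log-probability of zero
    return -0.5 * np.sum(pars ** 2)  # arbitrary placeholder log-likelihood

print(lpost([0.1, 0.1, 0.1]))   # finite: inside the prior
print(lpost([-0.1, 0.1, 0.1]))  # -inf: outside the prior
```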
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Okay, cool! This works. Now we can run MCMC!
import emcee

start_pars = np.array([0.313999, 1.14635, 0.0780871])
start_cov = np.diag(start_pars / 100.0)
nwalkers = 100
niter = 200
ndim = len(start_pars)
burnin = 50
p0 = np.array([np.random.multivariate_normal(start_pars, start_cov) for i in range(nwalkers)])

# initialize the sampler
sampler = emc...
notebooks/SherpaResponses.ipynb
eblur/clarsach
gpl-3.0
Configurations
vocab = (" $%'()+,-./0123456789:;=?ABCDEFGHIJKLMNOPQRSTUVWXYZ"
         "\\^_abcdefghijklmnopqrstuvwxyz{|}\n")
graph_path = r"./graphs"
test_text_path = os.path.normpath(r"../Dataset/arvix_abstracts.txt")
batch_size = 50
model_param_path = os.path.normpath(r"./model_checkpoints")
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Data encoding Basic Assumption A full string sequence consists of $START$ & $STOP$ signals with characters in the middle. Encoding policy A set $\mathcal{S}$ that consists of many characters is utilized to encode the characters. The $1^{st}$ entry of the vector corresponds to $UNKNOWN$ characters (i.e. characters that a...
class TextCodec:
    def __init__(self, vocab):
        self._vocab = vocab
        self._dim = len(vocab) + 2

    def encode(self, string, sess=None, start=True, stop=True):
        """
        Encode string. Each character is represented as a N-dimension one hot vector.
        N = len(self._vocab) + 2
        ...
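The class above is truncated, so here is a self-contained sketch of the same one-hot idea with a simplified index layout (slot 0 for UNKNOWN, no START/STOP slots; both simplifications are assumptions for illustration, not the notebook's exact scheme):

```python
import numpy as np

class MiniCodec:
    """Minimal one-hot text codec sketch: slot 0 is UNKNOWN,
    slots 1..len(vocab) are the vocabulary characters."""
    def __init__(self, vocab):
        self.vocab = vocab
        self.dim = len(vocab) + 1

    def encode(self, s):
        out = np.zeros((len(s), self.dim))
        for row, ch in enumerate(s):
            idx = self.vocab.find(ch) + 1  # find() returns -1 for unknowns -> slot 0
            out[row, idx] = 1.0
        return out

    def decode(self, arr):
        idxs = arr.argmax(axis=1)
        return "".join("?" if i == 0 else self.vocab[i - 1] for i in idxs)

codec = MiniCodec("abcdefghijklmnopqrstuvwxyz ")
enc = codec.encode("hello world")
print(codec.decode(enc))  # hello world
```

Round-tripping any string built from the vocabulary recovers it exactly; characters outside the vocabulary decode to a placeholder.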
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Test See how encoding and decoding work.
test_codec = TextCodec(vocab)
test_text_encoded = test_codec.encode("Hello world!")
print("Encoded text looks like:\n{}".format(test_text_encoded))
test_text_decoded = test_codec.decode(nparray=test_text_encoded, strip=False)
print("Decoded text looks like:\n{}".format(test_text_decoded))
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Load data set
with open(test_text_path, "r") as f:
    raw_text_list = "".join(f.readlines()).split("\n")
print("Loaded abstract from a total of {} theses.".format(len(raw_text_list)))

# See what we have loaded
sample_text_no = random.randint(0, len(raw_text_list) - 1)
sample_text_raw = raw_text_list[sample_text_no]
print("A sample te...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Define Batch Generator
def batch_generator(data, codec, batch_size, seq_length, reset_every):
    if type(data) == str:
        data = codec.encode(data, start=False, stop=False)
    head = 0
    reset_index = 0
    batch = []
    seq = []
    increment = seq_length * reset_every - 1
    extras = codec.encode("", start=True, stop=True)
    v_s...
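The generator above is truncated. A much-simplified, self-contained sketch of the same idea (no reset logic; all names are illustrative): slice the encoded text into batches of sequences, where the target `y` is the input `x` shifted one character ahead.

```python
import numpy as np

def simple_batches(encoded, batch_size, seq_length):
    """Yield (x, y) pairs where y is x shifted one step ahead
    (next-character prediction targets)."""
    step = batch_size * seq_length
    for start in range(0, len(encoded) - step - 1, step):
        chunk = encoded[start:start + step + 1]
        x = chunk[:-1].reshape(batch_size, seq_length, -1)
        y = chunk[1:].reshape(batch_size, seq_length, -1)
        yield x, y

# one-hot "text" over a 5-symbol vocabulary, for demonstration only
data = np.eye(5)[np.random.RandomState(0).randint(0, 5, size=1000)]
x, y = next(simple_batches(data, batch_size=2, seq_length=10))
print(x.shape, y.shape)  # (2, 10, 5) (2, 10, 5)
```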
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Check the generator
seq_length = 100
reset_every = 2
batch_size = 2
batches = batch_generator(data=encoded_data, codec=test_codec, batch_size=batch_size,
                          seq_length=seq_length, reset_every=reset_every)
for (x, y)...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Define model class
class DRNN(tf.nn.rnn_cell.RNNCell):
    def __init__(self, input_dim, hidden_dim, output_dim, num_hidden_layer, dtype=tf.float32):
        super(DRNN, self).__init__(dtype=dtype)
        assert type(input_dim) == int and input_dim > 0, "Invalid input dimension. "
        self._input_dim = input_dim
        ...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Make an instance of the model and define the rest of the graph Thoughts If GRU is used, then the outputs of GRU shall not be directly used as the desired output without further transforms. (e.g. A cell accepts 2 inputs, a state from the previous cell and the input of this cell (which is approximated by the state input), then...
tf.reset_default_graph()
input_dim = output_dim = test_codec.dim
hidden_dim = 700
num_hidden_layer = 3
rnn_cell = DRNN(input_dim=input_dim, output_dim=output_dim,
                num_hidden_layer=num_hidden_layer, hidden_dim=hidden_dim)
batch_size = 50
init_state = tuple(tf.placeholder_with_default(input=tensor, ...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Training
n_epoch = 50
learning_rate = 1e-3
train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss, global_step=global_step)
print_every = 50
save_every = 1000
partition_size = 100
logdir = os.path.normpath("./graphs")
seq_length = 100
reset_every = 100
visualize_every = 100
learning_rate_decay = 0.9
# batch_size...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Test online inference
def online_inference(cell, prime, sess, codec, input_tensor, init_state_tensor_tuple,
                     output_tensor, final_state_tensor_tuple, length):
    final_output = [prime]
    zero_states = sess.run(cell.zero_state(batc...
RNN101/Text Generator.ipynb
BorisPolonsky/LearningTensorFlow
mit
Let's evaluate how much the membrane potential depends on input resistance, membrane time constant, and sag ratio. We will create the following multivariate function: $f(k;x) = k_0 + k_1x_1 + k_2x_2 + k_3x_3$ where $k$ is a vector of parameters (constants) and $x$ is a vector of independent variables (i.e $x_1$ i...
x = df[['InputR', 'Sag', 'Tau_mb']]
y = df[['Vrest']]

# import standard regression models (sm)
import statsmodels.api as sm

K = sm.add_constant(x)  # k0, k1, k2 and k3...
est = sm.OLS(y, K).fit()  # ordinary least squares regression
est.summary()  # need more data for kurtosis :)
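The same multivariate linear fit can be sketched without statsmodels using NumPy's least squares; the coefficients and data below are made up for illustration, not the notebook's physiology data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 3)                      # stand-ins for x1, x2, x3
k_true = np.array([2.0, 0.5, -1.0, 3.0])   # k0, k1, k2, k3 (chosen arbitrarily)
y = k_true[0] + X @ k_true[1:] + 0.01 * rng.randn(100)  # small noise

K = np.column_stack([np.ones(len(X)), X])  # add the constant column, like sm.add_constant
k_hat, *_ = np.linalg.lstsq(K, y, rcond=None)
print(np.round(k_hat, 2))  # recovers approximately [2.0, 0.5, -1.0, 3.0]
```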
Optimization/Multivariate regression.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
Generate Mock Observed Luminosity
import numpy as np

# original values
# sigma_L = 1
# a1 = 12
# a2 = 1.4
# a3 = fov['mass_h'].min()
# a4 = 10
# sigma_obs = 2
S = 0.155
a1 = 10.709
a2 = 0.359
a3 = 2.35e14
a4 = 1.10
sigma_obs = 0.01
mean_L = a1 + a2 * np.log(fov['mass_h'] / a3) + a4 * np.log(1 + fov['z'])
fov['lum'] = np.random.lognormal(mean_L, S, le...
MassLuminosityProject/GenerateMockDataAndImportanceSample_2017_01_22.ipynb
davidthomas5412/PanglossNotebooks
mit
Generate Q
fov['q_lum'] = np.random.lognormal(np.log(fov['lum_obs']), sigma_obs, len(fov))
fov['q_mass_h_mean'] = a3 * (fov['q_lum'] / (np.exp(a1) * (1 + fov['z']) ** a4)) ** (1 / a2)
fov['q_mass_h'] = np.random.lognormal(np.log(fov['q_mass_h_mean']), 5 * S, len(fov))
plt.scatter(fov[:500]['mass_h'], fov[:500]['lum_obs'], alpha=0.4...
MassLuminosityProject/GenerateMockDataAndImportanceSample_2017_01_22.ipynb
davidthomas5412/PanglossNotebooks
mit
Multiple Conditions So now we can deal with a scenario where there are two possible decisions to be made. What about more than two decisions? Say hello to "elif"!
collection = [1,2,3,4,5]
if collection[0] == 0:
    print("Zero!")
elif collection[0] == 100:
    print("Hundred!")
else:
    print("Not Zero or Hundred")

x = ["George", "Barack", "Donald"]
test = "Richard"
if test in x:
    print(test, "has been found.")
else:
    print(test, "was not found. Let me add him to the...
07.Loop_it_up.ipynb
prasants/pyds
mit
Exercise Write some code to check if you are old enough to buy a bottle of wine. You need to be 18 or over, but if your State is Texas, you need to be 25 or over.
# Your code here
07.Loop_it_up.ipynb
prasants/pyds
mit
Adding your own input How about adding your own input and checking against that? This doesn't come in too handy in a data science environment since you typically have a well defined dataset already. Nevertheless, this is important to know.
age = int(input("Please enter your age:"))
if age < 18:
    print("You cannot vote or buy alcohol.")
elif age < 21:
    print("You can vote, but can't buy alcohol.")
else:
    print("You can vote to buy alcohol. ;) ")

mr_prez = ["Bill", "George", "Barack", "Donald"]
name = input("Enter your name:")
# Don't need to sp...
07.Loop_it_up.ipynb
prasants/pyds
mit
Loops Time to supercharge our Python usage. Loops are, in some ways, the basis for automation. Check if a condition is true, then execute a step, and keep executing it till the condition is no longer true.
numbers = [1,2,3,4,5,6,7,8,9,10]
for number in numbers:
    if number % 2 == 0:
        print("Divisible by 2.")
    else:
        print("Not divisible by 2.")

numbers = {1,2,3,4,5,6,7,8,9,10}
for num in numbers:
    if num % 3 == 0:
        print("Divisible by 3.")
    else:
        print("Not divisible by 3.")
07.Loop_it_up.ipynb
prasants/pyds
mit
When using dictionaries, you can iterate through keys, values or both.
groceries = {"Milk": 2.5, "Tea": 4, "Biscuits": 3.5, "Sugar": 1}
print(groceries.keys())
print(groceries.values())

# item here refers to the key in the dictionary named groceries
for a in groceries.keys():
    print(a)
for price in groceries.values():
    print(price)
for (key, val) in groceries.items():
    print(key, val)
g...
07.Loop_it_up.ipynb
prasants/pyds
mit
Exercise Print the names of the people in the dictionary 'data' Print the name of the people who have 'incubees' Print the name, and net worth of people with a net worth higher than 500,000 Print the names of people without a board seat Enter your responses in the fields below. This is solved for you if you scroll do...
data = {
    "Richard": {
        "Title": "CEO",
        "Employees": ["Dinesh", "Gilfoyle", "Jared"],
        "Awards": ["Techcrunch Disrupt"],
        "Previous Firm": "Hooli",
        "Board Seat": 1,
        "Net Worth": 100000
    },
    "Jared": {
        "Real_Name": "Do...
07.Loop_it_up.ipynb
prasants/pyds
mit
Range of Values We often need to define a range of values for our program to iterate over.
# Generate a list on the fly
nums = list(range(10))
print(nums)
07.Loop_it_up.ipynb
prasants/pyds
mit
In a defined range, the lower number is inclusive, and upper number is exclusive. So 0 to 10 would include 0 but exclude 10. So if we need a specific range, we can use this knowledge to our advantage.
nums = list(range(1,11))
print(nums)
07.Loop_it_up.ipynb
prasants/pyds
mit
We can also specify a range without explicitly defining a lower bound, in which case Python does its magic: the range will be 0 to one less than the number specified.
nums = list(range(10))
print(nums)
07.Loop_it_up.ipynb
prasants/pyds
mit
We can also use the range function to perform mathematical tricks.
for i in range(1,6):
    print("The square of", i, "is:", i**2)
07.Loop_it_up.ipynb
prasants/pyds
mit
Or to check for certain other conditions or properties, or to define how many times an activity will be performed.
for i in range(1,10):
    print("*" * i)
07.Loop_it_up.ipynb
prasants/pyds
mit
Exercise Print all numbers from 1 to 20
# Your Code Here
07.Loop_it_up.ipynb
prasants/pyds
mit
Exercise Print the square of the first 10 natural numbers.
# Your Code Here
07.Loop_it_up.ipynb
prasants/pyds
mit
Become a Control Freak And now, it's time to become a master of control! A data scientist needs absolute control over loops, stopping when defined conditions are met, or carrying on till a solution is found. <img src="images/break.jpg"> Break
for i in range(1,100):
    print("The square of", i, "is:", i**2)
    if i >= 5:
        break
print("Broken")
07.Loop_it_up.ipynb
prasants/pyds
mit
Continue Break's cousin is called Continue. If a certain condition is met, skip the rest of that iteration and carry on with the next one.
letters = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]
for letter in letters:
    print("Currently testing letter", letter)
    if letter == "e":
        print("I plead the 5th!")
        continue
    print(letter)
07.Loop_it_up.ipynb
prasants/pyds
mit
List Comprehension Remember lists? Now here's a way to power through a large list in one line! As a Data Scientist, you will need to write a lot of code very efficiently, especially in the data exploration stage. The more experiments you can run to understand your data, the better it is. This is also a very useful tool...
# Here is a standard for loop
numList = []
for num in range(1,11):
    numList.append(num**2)
print(numList)
07.Loop_it_up.ipynb
prasants/pyds
mit
So far, so good!
# Now for List Comprehension
sqList = [num**2 for num in range(1,11)]
print(sqList)
[num**2 for num in range(1,11)]
07.Loop_it_up.ipynb
prasants/pyds
mit
How's that for speed?! Here's the format for List Comprehensions, in English. ListName = [Expected_Result_or_Operation for Item in a given range]<br> print the ListName
cubeList = [num**3 for num in range(6)]
print(cubeList)
07.Loop_it_up.ipynb
prasants/pyds
mit
List comprehensions are very useful when dealing with an existing list. Let's see some examples.
nums = [1,2,3,4,5,6,7,8,9,10]

# For every n in the list named nums, I want an n
my_list1 = [n for n in nums]
print(my_list1)

# For every n in the list named nums, I want n to be squared
my_list2 = [n**2 for n in nums]
print(my_list2)

# For every n in the list named nums, I want n, only if it is even
my_list3 = [n fo...
07.Loop_it_up.ipynb
prasants/pyds
mit
How about calculating the areas of circles, given a list of radii? That too in just one line.
import math

radius = [1.0, 2.0, 3.0, 4.0, 5.0]

# Area of Circle = Pi * (radius**2)
area = [round((r**2) * math.pi, 2) for r in radius]
print(area)
07.Loop_it_up.ipynb
prasants/pyds
mit
Dictionary Comprehension Let's get back to our dictionary named Data. Dictionary Comprehension can be a very efficient way to extract information out of them. Especially when you have thousands or millions of records.
data = {
    "Richard": {
        "Title": "CEO",
        "Employees": ["Dinesh", "Gilfoyle", "Jared"],
        "Awards": ["Techcrunch Disrupt"],
        "Previous Firm": "Hooli",
        "Board Seat": 1,
        "Net Worth": 100000
    },
    "Jared": {
        "Real_Name": "Do...
07.Loop_it_up.ipynb
prasants/pyds
mit
We can also use dictionary comprehension to create new dictionaries
name = ['George HW', 'Bill', 'George', 'Barack', 'Donald', 'Bugs']
surname = ['Bush', 'Clinton', 'Bush Jr', 'Obama', 'Trump', 'Bunny']
full_names = {n: s for n, s in zip(name, surname)}
full_names

# What if we want to exclude certain values?
full_names = {n: s for n, s in zip(name, surname) if n != 'Bugs'}
print(full_names)
07.Loop_it_up.ipynb
prasants/pyds
mit
Problem Statement There are influenza viruses that are collected from the "environment", or have an "unknown" host. How do we infer which hosts they came from? Well, that sounds like a Classification problem.
# Load the sequences into memory
sequences = [s for s in SeqIO.parse('data/20160127_HA_prediction.fasta', 'fasta')
             if len(s.seq) == 566]  # we are cheating and not bothering with an alignment
len(sequences)

# Load the sequence IDs into memory
seqids = [s.id for s in SeqIO.parse('data/20160127_HA_prediction.fasta', 'f...
03 Classification.ipynb
ericmjl/scikit-learn-tutorial
mit
Train/Test Split We're almost ready for training a machine learning model to classify the unknown hosts based on their sequence. Here's the proper procedure. Split the labelled data into a training and testing set. (~70 train/30 test to 80 train/20 test) Train and evaluate a model on the training set. Make predictions...
# Split the data into a training and testing set.
X_cols = [i for i in range(0, 566)]
X = knowns[X_cols]
Y = lb.transform(knowns['Host Species'])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

# Train a Random Forest Classifier.
# Note: This cell takes a while; any questions?
# Initialize th...
03 Classification.ipynb
ericmjl/scikit-learn-tutorial
mit
How do we evaluate how well the classifier performed? For binary classification, the Receiver Operating Characteristic curve is a great way to evaluate a classification task. For multi-label classification, which is the case we have here, the accuracy score is a great starting place.
# Let's first take a look at the accuracy score: the fraction that were classified correctly.
accuracy_score(lb.inverse_transform(Y_test), lb.inverse_transform(preds))
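Accuracy is simply the fraction of labels predicted correctly. A tiny dependency-free sketch with made-up host labels (the real notebook uses sklearn's `accuracy_score` on the test split):

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted correctly (what accuracy_score computes)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# illustrative labels only
y_true = ["human", "swine", "human", "avian"]
y_pred = ["human", "human", "human", "avian"]
print(accuracy(y_true, y_pred))  # 0.75
```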
03 Classification.ipynb
ericmjl/scikit-learn-tutorial
mit
What about those sequences for which the hosts were unknown? We can run predict(unknown_Xs) to predict what their hosts were likely to be, given their sequence.
unknown_preds = clf.predict(unknowns[X_cols])  # make predictions; note: these are still dummy-encoded
unknown_preds = lb.inverse_transform(unknown_preds)  # convert dummy-encodings back to string labels
unknown_preds
03 Classification.ipynb
ericmjl/scikit-learn-tutorial
mit
What this gives us is the class label with the highest probability of being the correct one. While we will not do this here, at this point, it would be a good idea to double-check your work with a sanity check. Are the sequences that are predicted to be Human truly of a close sequence similarity to actual Human sequen...
plt.plot(clf.feature_importances_)
03 Classification.ipynb
ericmjl/scikit-learn-tutorial
mit
Sigmoid function The inverse of the logit function. $$ \phi(z) = \frac{1}{1+e^{-z}} $$ Unlike a step function it rises gradually, so the output can be read as a probability: for example, if the result for precipitation is 0.8, we can say there is an 80% chance of rain.
import matplotlib.pyplot as plt
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y axis ticks and gridline
plt.yticks([0.0, 0.5, 1.0])
ax ...
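A quick numeric check of the sigmoid's properties, separate from the plotting code: it crosses 0.5 exactly at z = 0 and saturates toward 0 and 1 at the extremes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5: the midpoint is at z = 0
print(sigmoid(10.0))   # close to 1 (saturation)
print(sigmoid(-10.0))  # close to 0 (saturation)
```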
python-machine-learning/ch03/logistic-regression.ipynb
hide-tono/python-training
apache-2.0
Learning the weights of logistic regression Likelihood L: the plausibility of the parameters given the observed outcomes. $$ L(w) = P(y|x;w) = \prod_{i=1}^nP(y^{(i)}|x^{(i)};w) = \prod_{i=1}^n(\phi(z^{(i)}))^{y^{(i)}}(1-\phi(z^{(i)}))^{1-y^{(i)}} $$ The ;w in \( P(y|x;w) \) means "parameterized by w". Log-likelihood l: * lowers the risk of underflow * turns the product into a sum, so it can be differentiated term by term using addition $$ l(w) = \log L(w) = \sum_{i=1}^n\bigl[(y^{(i)}\log(\phi(z^{...
from sklearn import datasets
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Perceptron
from sklearn.metrics import accuracy_score

# Load the Iris dataset
iris = datasets.load_iris()
# Extract the features in columns 3 and 4
X = iris.data[:, [2, ...
python-machine-learning/ch03/logistic-regression.ipynb
hide-tono/python-training
apache-2.0
Overfitting occurs: high variance. Underfitting: high bias. Collinearity: high correlation between features. Regularization: prevents overfitting (collinearity being one motivation) by penalizing extreme parameter weights. L2 regularization $$ \frac{\lambda}{2}||w||^2 = \frac{\lambda}{2}\sum_{j=1}^m w^2_j $$ \( \lambda \) is called the regularization parameter. Adding the penalty term to the logistic regression cost function: $$ J(w) = \sum_{i=1}^n\bigl[(-y^{(i)}\log(\phi(z^{(i)})))-(1-y^{(i)})\log(1-\phi(z^{(i)}))\bigr] + \frac{\l...
weights, params = [], []
# numpy.arange(-5, 5) doesn't work here; see https://github.com/numpy/numpy/issues/8917
for c in range(-5, 5):
    lr = LogisticRegression(C=10**c, random_state=0)
    lr.fit(X_train_std, y_train)
    weights.append(lr.coef_[1])
    params.append(10**c)
weights = np.array(weights)
plt.plot(params, weights[:, ...
python-machine-learning/ch03/logistic-regression.ipynb
hide-tono/python-training
apache-2.0
We use the sklearn.datasets.load_digits method to load the digits data (a small, MNIST-like dataset of handwritten digits).
from sklearn.datasets import load_digits
from IPython.display import display

digits_data = load_digits()
display(dir(digits_data))
display(digits_data.data.shape)
display(digits_data.target.shape)
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
This dataset contains data for 1797 images. Each image is an 8×8 matrix stored as a flat-packed array of 64 values. Next we combine the data and target into a single dataframe.
mnist_df = pd.DataFrame(index=digits_data.target, data=digits_data.data) mnist_df.head()
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
Next we find out how many images we have per label.
image_counts = mnist_df.groupby(mnist_df.index)[0].count() ax = image_counts.plot(kind='bar', title='Image count per label in data')
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
Next we scale mnist_df so that every feature has zero mean and unit variance. Pairwise Distances, P and $\sigma_i$s
from sklearn.preprocessing import scale mnist_df_scaled = pd.DataFrame(index=mnist_df.index, columns=mnist_df.columns, data=scale(mnist_df)) mnist_df_scaled.head()
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
From the scaled data, we must calculate the $P_{ij}$s. To do this we first calculate the pairwise distances between each pair of rows in the input data. For efficiency's sake, we use the sklearn.metrics.pairwise_distances library function. Next, we start with a given perplexity target and then calculate the individual ...
MACHINE_PRECISION = np.finfo(float).eps
from sklearn.metrics import pairwise_distances

def optimal_sigma(dist_i, i, target_entropy, n_iter=100, entropy_diff=1E-7):
    """
    For the pairwise distances between the i-th feature vector and every other feature vector
    in the original dataset, execute a binary search...
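Since `optimal_sigma` is truncated above, here is a self-contained sketch of the binary search it describes: given the distances from one point to all others, find the Gaussian bandwidth whose conditional distribution has the target entropy (log2 of the desired perplexity). The function and variable names here are illustrative, not the notebook's.

```python
import numpy as np

def entropy_for_sigma(dists, sigma):
    """Shannon entropy (bits) of the Gaussian-kernel distribution over dists."""
    p = np.exp(-dists ** 2 / (2 * sigma ** 2))
    p /= p.sum()
    p = np.maximum(p, 1e-12)  # guard the log
    return -np.sum(p * np.log2(p))

def binary_search_sigma(dists, perplexity, n_iter=100, tol=1e-7):
    """Bisect on sigma until the entropy matches log2(perplexity)."""
    target = np.log2(perplexity)
    lo, hi = 1e-10, 1e10
    sigma = 1.0
    for _ in range(n_iter):
        sigma = 0.5 * (lo + hi)
        h = entropy_for_sigma(dists, sigma)
        if abs(h - target) < tol:
            break
        if h > target:   # entropy too high -> narrow the Gaussian
            hi = sigma
        else:            # entropy too low -> widen it
            lo = sigma
    return sigma

dists = np.abs(np.random.RandomState(0).randn(50)) + 0.1
sigma = binary_search_sigma(dists, perplexity=10.0)
```

Bisection works because the entropy increases monotonically with sigma: a broader kernel spreads probability more uniformly.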
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
Now we're going to set up TensorFlow for the KLD minimization problem.
import tensorflow as tf
display(tf.__version__)

def pairwise_dist(tf_y):
    """Calculate pairwise distances between each pair of vectors in tf_y."""
    tf_norms = tf.square(tf.norm(tf_y, axis=1))
    tf_r1 = tf.expand_dims(tf_norms, axis=1)
    tf_r2 = tf.expand_dims(tf_norms, axis=0)
    tf_y_dot_yT = tf...
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
Let's compare that against what the sklearn implementation gives us:
from sklearn.manifold import TSNE

# Extract the embeddings and convert into a DataFrame
sk_embedded = TSNE(n_components=2).fit_transform(mnist_df_scaled.values)
sk_embedded = pd.DataFrame(index=mnist_df_scaled.index, data=sk_embedded)

# Display
sk_embedded = sk_embedded.reset_index().rename(columns={'index': 'label',...
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
Appendix: Vectorized Calculation of $Q_{ij}$ in TensorFlow
y = pd.DataFrame(index=range(3), columns=range(5), data=np.random.uniform(1, 5, size=[3, 5])) y
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
First we calculate Q using the direct iterative algorithm, which requires iterating over the rows and columns of y. This gives us a reference against which to test our vectorized implementation for correctness.
Q_simple = pd.DataFrame(index=y.index, columns=y.index, data=0.0)
for i in range(0, y.shape[0]):
    for j in range(0, i):
        assert i != j, (i, j)
        md = y.loc[i, :].sub(y.loc[j, :])
        d = 1 + np.linalg.norm(md)**2
        Q_simple.loc[i, j] = 1 / d
        Q_simple.loc[j, i] = 1 / d
...
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
To calculate Q in a vectorized way, we note that $D[i, j] = (y[i] - y[j]) (y[i] - y[j])^T = \|y[i]\|^2 + \|y[j]\|^2 - 2 \, y[i] \cdot y[j]$. For the entire 2D array y, we can generalize this to: D = np.atleast_2d(r) + np.atleast_2d(r).T - 2 * np.dot(y, y.T) where r is the vector of squared norms of each row of y.
norms = y.apply(np.linalg.norm, axis=1).values
r1 = np.atleast_2d(norms**2)
r2 = r1.T
d1 = r1 + r2
d2 = d1 - 2 * np.dot(y, y.T)
d2 += 1
d3 = 1 / d2
d3[np.diag_indices_from(d3)] = 0
Q_vectorized = pd.DataFrame(d3)
Q_vectorized

from pandas.util.testing import assert_frame_equal
assert_frame_equal(Q_simple, Q_vectori...
simple_implementations/t-sne.ipynb
dipanjank/ml
gpl-3.0
First we load the iris data from task 1 and split it into training and validation set.
import pandas

# load dataset from task 1
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)

# split out the dataset
array = dataset.values
X = array[:, 0:4]
y = array[:, 4]
notebooks/robin_ue1/03_Cross_validation_and_grid_search.ipynb
hhain/sdap17
mit
Next we run a performance test on GridSearchCV. Therefore we run the search multiple times to maximize the precision, and save the best time for later comparison. Each time we use a different number of jobs.
# parameters for the performance test
max_jobs = 8
best_in = 3

# performance test
measurements = []
i = 1
while i <= max_jobs:
    min_t = float("inf")
    for j in range(best_in):
        kneighbors = KNeighborsClassifier()
        grid_search = GridSearchCV(kneighbors, parameter_grid, cv=cross_val, scoring=scoring, n_j...
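The timing pattern above (best of several repeats per configuration) can be sketched in a self-contained way; `work` below is a cheap stand-in for fitting GridSearchCV with a given `n_jobs`, used here only so the sketch runs without sklearn:

```python
import time

def best_time(work, repeats=3):
    """Run `work` several times and keep the fastest wall-clock time,
    mirroring the notebook's best-of-`best_in` loop."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        work()
        best = min(best, time.perf_counter() - t0)
    return best

# one measurement per hypothetical job count
measurements = [best_time(lambda: sum(range(100000))) for _ in range(1, 4)]
print(len(measurements))  # 3
```

Keeping the minimum (rather than the mean) filters out one-off slowdowns from the OS scheduler, which is why benchmark loops usually report best-of-N.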
notebooks/robin_ue1/03_Cross_validation_and_grid_search.ipynb
hhain/sdap17
mit
Finally we plot our results:
fig = plt.figure()
fig.suptitle('Visualization of the runtime depending on the number of used jobs.')
plt.xticks(range(1, max_jobs + 1))
ax = fig.add_subplot(111)
ax.set_xlabel('used jobs')
ax.set_ylabel('runtime in seconds')
ax.plot(range(1, max_jobs + 1), measurements, 'ro')
plt.show()
neighbors = [s[0]["n_neighbors...
notebooks/robin_ue1/03_Cross_validation_and_grid_search.ipynb
hhain/sdap17
mit
Then we can display the final result:
t = algorithm.linspace(0, duration_secs, samples)
plt.plot(t, data)
plt.show()
notebooks/Oscillators.ipynb
mohabouje/eDSP
gpl-3.0
Sawtooth Signal A sawtooth waveform increases linearly from -1 to 1 in the $ \left[ 0, 2 \pi w \right] $ interval, and decreases linearly from 1 to -1 in the interval $ \left[ 2 \pi w, 2 \pi \right] $, where $ w $ is the width of the periodic signal. If $ w $ is 0.5, the function generates a standard triangular wave. The triang...
width = 0.7
sawtooth = oscillator.Sawtooth(amp=amplitude, sr=sample_rate, f=frequency, width=width)
data = sawtooth.generate(N=samples)
notebooks/Oscillators.ipynb
mohabouje/eDSP
gpl-3.0
Then, to display:
plt.plot(t, data)
plt.show()
notebooks/Oscillators.ipynb
mohabouje/eDSP
gpl-3.0
Interactive mode
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets

@interact(dtype=widgets.Dropdown(
    options=['square', 'sinusoidal', 'sawtooth'],
    value='square',
    description='Type:',
    di...
notebooks/Oscillators.ipynb
mohabouje/eDSP
gpl-3.0
Note that this works in the opposite direction too: let's say you want to find "rare" objects in 10 dimensions, where we'll define rare as <1% of the population. Then you'll need to accept objects from 63% of the distribution in all 10 dimensions! So are those really "rare" or are they just a particular 1% of the pop...
import numpy as np

p = 10**(np.log10(0.01) / 10.0)
print(p)
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
N.B. Dimensionality isn't just measuring $D$ parameters for $N$ objects. It could be a spectrum with $D$ values or an image with $D$ pixels, etc. In the book the examples used just happen to be spectra of galaxies from the SDSS project. But we can insert the data of our choice instead. For example: the SDSS compris...
# Execute this cell
# Ivezic, Figure 7.2
# Author: Jake VanderPlas
# License: BSD
# The figure produced by this code is published in the textbook
# "Statistics, Data Mining, and Machine Learning in Astronomy" (2013)
# For more information, see http://astroML.github.com
# To report a bug or issue, use the follow...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Note that the points are correlated along a particular direction which doesn't align with the initial choice of axes. So, we should rotate our axes to align with this correlation. We'll choose the rotation to maximize the ability to discriminate between the data points: * the first axis, or principal component, is ...
# Example call from 7.3.2
import numpy as np
from sklearn.decomposition import PCA

X = np.random.normal(size=(100, 3))  # 100 points in 3D
R = np.random.random((3, 10))         # projection matrix
X = np.dot(X, R)                      # X is now 10-dim, with 3 intrinsic dims
pca = PCA(n_components=4)             # n_components can be optionally set
pca.fit(X)
co...
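Under the hood, PCA amounts to an SVD of the centered data. A numpy-only sketch mirroring the construction in the cell above (center, decompose, read the variances off the singular values); this is illustrative, not sklearn's exact implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 3)) @ rng.random((3, 10))  # 10-dim data, 3 intrinsic dims
Xc = X - X.mean(axis=0)                              # PCA first centers the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)    # rows of Vt are the components
evals = s ** 2 / (len(X) - 1)                        # per-component variance
print(np.round(evals / evals.sum(), 3))              # only the first 3 ratios are non-zero
```

Because the data were built from 3 latent dimensions, the explained-variance ratios collapse to zero after the third component, which is exactly what PCA exploits for dimensionality reduction.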
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Scikit-Learn's decomposition module has a number of PCA type implementations. Let's work through an example using spectra of galaxies taken during the Sloan Digital Sky Survey. In this sample there are 4000 spectra with flux measurements in 1000 bins. 15 example spectra are shown below and our example will use half of...
%matplotlib inline
# Example from Andy Connolly
# See Ivezic, Figure 7.4
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from sklearn.decomposition import RandomizedPCA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results

#---...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Now let's plot the components. See also Ivezic, Figure 7.4. The left hand panels are just the first 5 spectra for comparison with the first 5 PCA components, which are shown on the right. They are ordered by the size of their eigenvalues.
# Make plots
fig = plt.figure(figsize=(10, 8))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05, bottom=0.1, top=0.95, hspace=0.05)
titles = 'PCA components'
for j in range(n_components):
    # plot the components
    ax = fig.add_subplot(n_components, 2, 2*j+2)
    ax.yaxis.set_major_fo...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Now let's make "scree" plots. These plots tell us how much of the variance is explained as a function of each eigenvector. Our plot won't look much like Ivezic, Figure 7.5, so I've shown it below to explain where "scree" comes from.
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt

#----------------------------------------------------------------------
# Plot the results
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(121)
ax.plot(np.arange(n_components-1), evals)
ax.set_xlabel("eigenvalue number")
ax.set_ylabel(...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
How much of the variance is explained by the first two components? How about all of the components?
print("The first component explains {:.3f} of the variance in the data.".format(# Complete print("The second component explains {:.3f} of the variance in the data.".format(# Complete print("All components explain {:.3f} of the variance in the data.".format(# Complete
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
This is why PCA enables dimensionality reduction. How many components would we need to explain 99.5% of the variance?
for num_feats in np.arange(1,20, dtype = int): # complete print("{:d} features are needed to explain 99.5% of the variance".format(# Complete
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Note that we would need 1000 components to encode all of the variance. Interpreting the PCA The output eigenvectors are ordered by their associated eigenvalues The eigenvalues reflect the variance within each eigenvector The sum of the eigenvalues is total variance of the system Projection of each spectrum onto the...
# Execute this cell
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import PCA
from astroML.datasets import sdss_corrected_spectra
from astroML.decorators import pickle_results

#------------------------------------------------------------
# Download data
data = sdss_corrected_spect...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Caveats I PCA is a linear process, whereas the variations in the data may not be. So it may not always be appropriate to use and/or may require a relatively large number of components to fully describe any non-linearity. Note also that PCA can be very impractical for large data sets which exceed the memory per core as...
# Execute this cell
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import ticker
from astroML.datasets import fetch_sdss_corrected_spectra
from astroML.datasets import sdss_corrected_spectra

#------------------------------------------------------------
# Get spectra and eig...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
The example that we have been using above is "spectral" PCA. Some examples from the literature include: - Francis et al. 1992 - Connolly et al. 1995 - Yip et al. 2004 One can also do PCA on features that aren't ordered (as they were for the spectra). E.g., if you have $D$ different parameters measured for your object...
# Execute this cell
import numpy as np
from sklearn.decomposition import NMF

X = np.random.random((100, 3))   # 100 points in 3D
nmf = NMF(n_components=3)
nmf.fit(X)
proj = nmf.transform(X)          # projection onto the 3 components
comp = nmf.components_          # 3x3 array of components
err = nmf.reconstruction_err_   # how well 3 components capt...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
An example (and comparison to PCA) is given below.
# Execute the next 2 cells
# Example from Figure 7.4
# Author: Jake VanderPlas
# License: BSD
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn.decomposition import NMF
from sklearn.decomposition import RandomizedPCA
from astroML.datasets import sdss_corrected_spectra
from astro...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Independent Component Analysis (ICA) For data where the components are statistically independent (or nearly so) Independent Component Analysis (ICA) has become a popular method for separating mixed components. The classical example is the so-called "cocktail party" problem. This is illustrated in the following figure...
# Execute this cell import numpy as np from sklearn.decomposition import FastICA X = np.random.normal(size=(100,2)) # 100 objects in 2D R = np.random.random((2,5)) # mixing matrix X = np.dot(X,R) # 2D data in 5D space ica = FastICA(2) # fit 2 components ica.fit(X) proj = ica.transform(X) # 100x2 projection of the data ...
(Note: with `FastICA(2)` the transform returns a 100x2 array of the estimated independent components, recovered only up to order, sign, and scale.)
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
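The cocktail-party setup can be reproduced in a few lines: generate two independent signals, mix them with a random matrix, and let FastICA recover them up to order, sign, and scale. A toy sketch (the signal shapes, mixing matrix, and correlation check are arbitrary choices of mine):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.RandomState(42)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # two independent sources

A = rng.random_sample((2, 2)) + 0.5  # the "room": a random mixing matrix
X = S @ A.T                          # two observed mixtures ("microphones")

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)         # unmixed sources (order/sign/scale arbitrary)

# Each estimated source should correlate strongly with one true source
corr = np.abs(np.corrcoef(S.T, S_est.T))[:2, 2:]
print(corr.max(axis=1))
```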
Execute the next 2 cells to produce a plot showing the ICA components.
%matplotlib inline #Example from Andy Connolly import numpy as np from matplotlib import pyplot as plt from sklearn.decomposition import FastICA from astroML.datasets import sdss_corrected_spectra from astroML.decorators import pickle_results #------------------------------------------------------------ # Download d...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
As with PCA and NMF, we can similarly do a reconstruction:
# Execute this cell #------------------------------------------------------------ # Find the coefficients of a particular spectrum spec = spectra[1] evecs = data['evecs'] coeff = np.dot(evecs, spec - spec_mean) #------------------------------------------------------------ # Plot the sequence of reconstructions fig = p...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Ivezic, Figure 7.4 compares the components found by the PCA, ICA, and NMF algorithms. Their differences and similarities are quite interesting. If you think that I was pulling your leg about the cocktail problem, try it yourself! Load the code instead of running it and see what effect changing some things has.
%run code/plot_ica_blind_source_separation.py
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Let's revisit the digits sample and see what PCA, NMF, and ICA do for it.
# Execute this cell to load the digits sample %matplotlib inline import numpy as np from sklearn.datasets import load_digits from matplotlib import pyplot as plt digits = load_digits() grid_data = np.reshape(digits.data[0], (8,8)) #reshape to 8x8 plt.imshow(grid_data, interpolation = "nearest", cmap = "bone_r") print g...
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Do the PCA transform, projecting to 2 dimensions and plot the results.
# PCA from sklearn.decomposition import PCA pca = PCA(n_components=2) pca.fit(digits.data) X_reduced = pca.transform(digits.data) plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=digits.target, cmap="nipy_spectral", edgecolor="None") plt.colorbar()
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Similarly for NMF and ICA
# NMF from sklearn.decomposition import NMF nmf = NMF(n_components=2) X_reduced = nmf.fit_transform(digits.data) plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=digits.target, cmap="nipy_spectral", edgecolor="None") plt.colorbar() # ICA from sklearn.decomposition import FastICA ica = FastICA(n_components=2) X_reduced = ica.fit_transform(digits.data) plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=digits.target, cmap="nipy_spectral", edgecolor="None") plt.colorbar()
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
Take a second to think about what ICA is doing. What if you had digits from digital clocks instead of handwritten? I wasn't going to introduce Neural Networks yet, but it is worth noting that Scikit-Learn's Bernoulli Restricted Boltzmann Machine (RBM) is discussed in the (unsupervised) neural network part of the User's...
from sklearn.ensemble import RandomForestRegressor RFreg = RandomForestRegressor() RFreg.fit(digits.data, digits.target) importances = RFreg.feature_importances_ np.argsort(importances)[::-1] # rank features from most to least important
DimensionReduction.ipynb
gtrichards/PHYS_T480
mit
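As a concrete illustration of importance ranking, here is a self-contained sketch on synthetic data where only the first feature drives the target, so it should come out on top (data shapes and noise level are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.random_sample((500, 5))
y = 10.0 * X[:, 0] + 0.1 * rng.randn(500)  # only feature 0 matters

RFreg = RandomForestRegressor(n_estimators=100, random_state=0)
RFreg.fit(X, y)

importances = RFreg.feature_importances_   # normalized to sum to 1
ranking = np.argsort(importances)[::-1]    # most important feature first
print(ranking)
```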
Data-MC comparison Table of contents Data preprocessing Weight simulation events to spectrum S125 verification $\log_{10}(\mathrm{dE/dX})$ verification
from __future__ import division, print_function import os import numpy as np import pandas as pd import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec from icecube.weighting.weighting import from_simprod from icecube import dataclasses import comptools as comp import comptools.analysis.plotting as pl...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
Data preprocessing [ back to top ] 1. Load simulation/data dataframe and apply specified quality cuts 2. Extract desired features from dataframe 3. Get separate testing and training datasets 4. Feature selection Load simulation, format feature and target matrices
config = 'IC86.2012' # comp_list = ['light', 'heavy'] comp_list = ['PPlus', 'Fe56Nucleus'] june_july_data_only = False sim_df = comp.load_dataframe(datatype='sim', config=config, split=False) data_df = comp.load_dataframe(datatype='data', config=config) data_df = data_df[np.isfinite(data_df['log_dEdX'])] if june_jul...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
Weight simulation events to spectrum [ back to top ] For more information, see the IT73-IC79 Data-MC comparison wiki page. First, we'll need to define a 'realistic' flux model
phi_0 = 3.5e-6 # phi_0 = 2.95e-6 gamma_1 = -2.7 gamma_2 = -3.1 eps = 100 def flux(E): E = np.array(E) * 1e-6 return (1e-6) * phi_0 * E**gamma_1 *(1+(E/3.)**eps)**((gamma_2-gamma_1)/eps) from icecube.weighting.weighting import PowerLaw pl_flux = PowerLaw(eslope=-2.7, emin=1e5, emax=3e6, nevents=1e6) + \ ...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
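The flux defined above is a smoothly broken power law: with this large eps it follows $E^{\gamma_1}$ well below the 3 PeV knee and $E^{\gamma_2}$ well above it. A small numpy sketch checking the asymptotic logarithmic slopes (the probe energies and step size are arbitrary choices):

```python
import numpy as np

phi_0, gamma_1, gamma_2, eps = 3.5e-6, -2.7, -3.1, 100

def flux(E):
    """Smoothly broken power law; E in GeV, knee at 3 PeV."""
    E = np.asarray(E, dtype=float) * 1e-6  # GeV -> PeV
    return 1e-6 * phi_0 * E**gamma_1 * (1 + (E / 3.)**eps)**((gamma_2 - gamma_1) / eps)

def local_slope(E):
    # d log(flux) / d log(E), estimated with a small logarithmic step
    return (np.log(flux(E * 1.01)) - np.log(flux(E))) / np.log(1.01)

print(local_slope(1e4))  # far below the knee -> ~gamma_1 = -2.7
print(local_slope(1e8))  # far above the knee -> ~gamma_2 = -3.1
```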
$\log_{10}(\mathrm{S_{125}})$ verification [ back to top ]
sim_df['log_s125'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5) plt.xlabel('$\log_{10}(\mathrm{S}_{125})$') plt.ylabel('Counts'); log_s125_bins = np.linspace(-0.5, 3.5, 75) gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0) ax1 = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1], sharex=ax1) for composition in ...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
$\log_{10}(\mathrm{dE/dX})$ verification
sim_df['log_dEdX'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5) plt.xlabel('$\log_{10}(\mathrm{dE/dX})$') plt.ylabel('Counts'); log_dEdX_bins = np.linspace(-2, 4, 75) gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0) ax1 = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1], sharex=ax1) for composition in comp_l...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
$\cos(\theta)$ verification
sim_df['lap_cos_zenith'].plot(kind='hist', bins=100, alpha=0.6, lw=1.5) plt.xlabel('$\cos(\\theta_{\mathrm{reco}})$') plt.ylabel('Counts'); cos_zenith_bins = np.linspace(0.8, 1.0, 75) gs = gridspec.GridSpec(2, 1, height_ratios=[2,1], hspace=0.0) ax1 = plt.subplot(gs[0]) ax2 = plt.subplot(gs[1], sharex=ax1) for compos...
notebooks/data-MC-comparison.ipynb
jrbourbeau/cr-composition
mit
Search for hierarchy identifiers: FeatureSet and PhenotypeAssociationSet The G2P dataset exists within the hierarchy of Ga4GH datasets and featuresets. This call returns phenotype association sets hosted by the API. Observe that we are querying all datasets hosted in the endpoint. The identifiers for the featureset a...
datasets = c.search_datasets() phenotype_association_set_id = None phenotype_association_set_name = None for dataset in datasets: phenotype_association_sets = c.search_phenotype_association_sets(dataset_id=dataset.id) for phenotype_association_set in phenotype_association_sets: phenotype_association_set_id = p...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Use case: Find evidence for a Genomic Feature Search for Features by location Using the feature set id returned above, the following request returns a list of features that exactly match a location
feature_generator = c.search_features(feature_set_id=feature_set_id, reference_name="chr7", start=55249005, end=55249006 ) features = list(feature_generator) assert len(features) == 1 print "Found {} features in G2P feature_set...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Search features by name Alternatively, if the location is not known, we can query using the name of the feature. Using the feature set id returned above, the following request returns a list of features that exactly match a given name - 'EGFR S768I missense mutation'.
feature_generator = c.search_features(feature_set_id=feature_set_id, name='EGFR S768I missense mutation') features = list(feature_generator) assert len(features) == 1 print "Found {} features in G2P feature_set {}".format(len(features),feature_set_id) feature = features[0] print [feature.name,feature.gene_symbol,featu...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Get evidence associated with that feature. Once we have looked up the feature, we can then search for all evidence associated with that feature.
feature_phenotype_associations = c.search_genotype_phenotype( phenotype_association_set_id=phenotype_association_set_id, feature_ids=[f.id for f in features]) associations = list(feature_phenotype_associations) assert len(associations) >= len(fea...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Display evidence Explore the evidence. For example, a publication.
from IPython.display import IFrame IFrame(associations[0].evidence[0].info['publications'][0], "100%",300)
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Use Case: Find evidence for a Phenotype Search a phenotype Alternatively, a researcher can query for a phenotype. In this case by the phenotype's description matching 'Adenosquamous carcinoma .*'
phenotypes_generator = c.search_phenotype( phenotype_association_set_id=phenotype_association_set_id, description="Adenosquamous carcinoma .*" ) phenotypes = list(phenotypes_generator) assert len(phenotypes) > 0 print "\n".join(set([p.description for p in phenotypes])) ...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Get evidence associated with those phenotypes. The researcher can use those phenotype identifiers to query for evidence associations.
feature_phenotype_associations = c.search_genotype_phenotype( phenotype_association_set_id=phenotype_association_set_id, phenotype_ids=[p.id for p in phenotypes]) associations = list(feature_phenotype_associations) assert len(associations) >= len(...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Further constrain associations with environment The researcher can limit the associations returned by introducing an environment constraint
import ga4gh_client.protocol as protocol evidence = protocol.EvidenceQuery() evidence.description = "MEK inhibitors" feature_phenotype_associations = c.search_genotype_phenotype( phenotype_association_set_id=phenotype_association_set_id, phen...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Use Case: Association Heatmap The bokeh package should be installed for graphing. Find features First, we collect a set of features.
feature_generator = c.search_features(feature_set_id=feature_set_id, name='.*KIT.*') features = list(feature_generator) assert len(features) > 0 print "Found {} features in feature set {}. First five...".format(len(features), feature_set_id) print "\n".join([f.name for f in features][:5])
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Get all associations Then we select all the associations for those features.
feature_phenotype_associations = c.search_genotype_phenotype( phenotype_association_set_id=phenotype_association_set_id, feature_ids=[f.id for f in features]) associations = list(feature_phenotype_associations) print "There are {} associations. ...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Association Heatmap Developers can use the G2P package to create researcher friendly applications. Here we take the results from the GA4GH queries and create a dataframe showing association counts.
from bokeh.charts import HeatMap, output_notebook, output_file, show from bokeh.layouts import column from bokeh.models import ColumnDataSource from bokeh.models.widgets import DataTable, TableColumn from bokeh.models import HoverTool feature_ids = {} for feature in features: feature_ids[feature.id]=feature.n...
python_notebooks/g2p-example-notebook.ipynb
david4096/bioapi-examples
apache-2.0
Now that we have our electrode positions in MRI coordinates, we can create our measurement info structure.
info = mne.create_info(ch_names, 1000., 'ecog').set_montage(montage)
0.20/_downloads/f760cc2f1a5d6c625b1e14a0b05176dd/plot_ecog.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause