Use `?` for quick help:

```python
np.linspace?
```
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
`??` shows the full source (Lab can scroll the output if you click):

```python
np.linspace??
```
Inspect everything
```python
def silly_absolute_value_function(xval):
    """Take a value and return its absolute value."""
    xval_sq = xval ** 2.0
    xval_abs = np.sqrt(xval_sq)
    return xval_abs

silly_absolute_value_function(2)

silly_absolute_value_function?
silly_absolute_value_function??
```
Keyboard shortcuts

For help: press ESC, then `h` (the `h` shortcut doesn't work in Lab). Press `l` (or Shift+L) to toggle line numbers.
```python
# in command mode, shift j/k selects multiple cells at once
# split a cell with ctrl shift -
first = 1
second = 2
third = 3

first = 1
second = 2
third = 3

# a: new cell above
# b: new cell below
```
Headings and LaTeX

With text and $\LaTeX$ support.

$$\begin{align}
B' &= -\nabla \times E, \\
E' &= \nabla \times B - 4\pi j
\end{align}$$
```latex
%%latex
If you want to get crazier...
\begin{equation}
\oint_S {E_n dA = \frac{1}{{\varepsilon _0 }}} Q_\textrm{inside}
\end{equation}
```
More markdown
```python
# Indent:  Cmd + ]
# Dedent:  Cmd + [
# Comment: Cmd + /
```
You can also get monospaced fonts by indenting 4 spaces:

    mkdir toc
    cd toc

Wrap with triple-backticks and a language name for syntax highlighting:

```bash
mkdir toc
cd toc
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
```

```sql
SELECT * FROM tablename
```
# note difference w/ lab
```sql
SELECT first_name, last_name, year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
```
```bash
%%bash
pwd
for i in *.ipynb
do
    echo ${i} | awk -F . '{print $1}'
done
echo
echo "break"
echo
for i in *.ipynb
do
    echo $i | awk -F - '{print $2}'
done
```
Other cell-magics
```python
%%writefile ../scripts/temp.py
from __future__ import absolute_import, division, print_function
```

I promise that I'm not cheating!

```python
!cat ../scripts/temp.py
```
Autoreload is cool -- don't have time to give it the attention that it deserves. https://gist.github.com/jbwhit/38c1035c48cdb1714fc8d47fa163bfae
```python
%load_ext autoreload
%autoreload 2

example_dict = {}

# Indent/dedent/comment
for _ in range(5):
    example_dict["one"] = 1
    example_dict["two"] = 2
    example_dict["three"] = 3
    example_dict["four"] = 4
```
Multicursor magic

Hold down Option, then click and drag.
```python
example_dict["one_better_name"] = 1
example_dict["two_better_name"] = 2
example_dict["three_better_name"] = 3
example_dict["four_better_name"] = 4
```
Find and replace -- regex, notebook- (or cell-) wide.

Calling R from Jupyter: options include pyRserve and rpy2.
```python
import numpy as np

!conda install -c r rpy2 -y
import rpy2
%load_ext rpy2.ipython

X = np.array([0, 1, 2, 3, 4])
Y = np.array([3, 5, 4, 6, 7])

%%R?
```

```python
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
```

```python
type(XYcoef)
XYcoef**2
thing()
```
It can be hard to guess which code will run faster just by looking at it, because the interactions between software and computers can be extremely complex. The best way to optimize code is to use profilers to identify the bottlenecks in your code, and then address those specific problems. Let's give it a whirl.

Problem 1) Using timeit

We will begin our experience with profilers by using the time and timeit commands. time can be run on any size of program, but it only returns coarse information about how long something took to run overall. There are a lot of small optimizations that can add up to a lot of time in real-world software, so let's look at a few of the non-obvious ones.

Problem 1a

What is the best way to join a bunch of strings into a larger string? There are several ways of doing this, but some are clearly superior to others. Let's use timeit to test things out. Below, in each of the cells after string_list is defined, put a new code snippet that builds the string using one of the following three methods:

- the builtin + operator, adding strings together iteratively
- the join method, as in "".join(list)
- iteratively adding the strings from the list together using "%s %s" string composition

Guess which method you think will be fastest, then test it out and see if you're right!
```python
string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ',
               'over ', 'the ', 'lazy ', 'dog']
```

```python
%%timeit
output = ""
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
output = ""
# complete
```
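One possible sketch of the three approaches using the timeit module (the function names here are our own; we use "%s%s" rather than "%s %s" so all three methods build the identical string, since the list items already end with spaces):

```python
import timeit

string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ',
               'over ', 'the ', 'lazy ', 'dog']

def concat_plus(strings):
    # iteratively add strings with the + operator
    output = ""
    for s in strings:
        output += s
    return output

def concat_join(strings):
    # a single join call on the whole list
    return "".join(strings)

def concat_format(strings):
    # iteratively compose with "%s%s" string formatting
    output = ""
    for s in strings:
        output = "%s%s" % (output, s)
    return output

for fn in (concat_plus, concat_join, concat_format):
    t = timeit.timeit(lambda: fn(string_list), number=100_000)
    print(f"{fn.__name__}: {t:.3f}s")
```

All three produce the same string; only the timing differs.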
Sessions/Session03/Day4/Profiling.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Interesting! It appears that the join method was the fastest, by a factor of four or so. Good to keep that in mind for future use of strings!

Problem 1b

What about building big lists or list-like structures (like numpy arrays)? We now know how to construct lists in a variety of ways, so let's see which is fastest. Make a list of ascending perfect squares (i.e. 1, 4, 9, ...) for the first 1 million integers. Use these methods:

- iteratively appending x**2 values onto an empty list
- a for loop with the built-in python range command
- a for loop with the numpy arange command
- the numpy arange command directly, then taking the square of it
- map with a lambda squaring function over a numpy array built with numpy arange

Guess which method you think will be fastest, then test it out and see if you're right!
```python
%%timeit
output = []
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
map(lambda x: # complete
```
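One way these cells might be filled in, sketched with the timeit module (function names are our own; the directly vectorized arange version typically wins by a wide margin):

```python
import timeit
import numpy as np

N = 1_000_000

def squares_append():
    # iteratively append x**2 onto an empty list
    out = []
    for x in range(1, N + 1):
        out.append(x ** 2)
    return out

def squares_range():
    # build the list in one pass over the built-in range
    return [x ** 2 for x in range(1, N + 1)]

def squares_arange_direct():
    # square a numpy arange directly (vectorized)
    return np.arange(1, N + 1) ** 2

def squares_map():
    # map a lambda squaring function over a numpy arange
    return list(map(lambda x: x ** 2, np.arange(1, N + 1)))

for fn in (squares_append, squares_range, squares_arange_direct, squares_map):
    t = timeit.timeit(fn, number=1)
    print(f"{fn.__name__}: {t:.3f}s")
```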
Maximum Inner Product

Matrix factorization is a potent technique for solving the collaborative filtering problem. It involves building the user-item interaction matrix, then decomposing it into a user latent factor (a.k.a. embedding) and an item latent factor, each with some user-specified dimension (a hyperparameter that we get to tweak).

<img src="img/matrix_factorization.png" width="60%" height="60%">

To generate the items recommended for each user, we perform a dot product between the two matrices and retrieve the top-k items that have the highest "scores". This process, however, can often become a large bottleneck for these types of algorithms when the number of users and items grows fairly large, as exhaustive computation of all the dot products is extremely expensive. This document's focus is to demonstrate an order-preserving transformation that converts the maximum inner product search into a nearest neighbor search, significantly speeding up the process of generating the top-k recommendations.

Order Preserving Transformations

We'll first describe the notation we'll be using. Lower case is for scalars, $x$, bold lower case for vectors, $\mathbf{x}$, and bold upper case for matrices, $\mathbf{X}$. Given a vector $\mathbf{x}$, the norm is denoted by $\Vert \mathbf{x} \Vert = \sqrt{\sum^d_{i=1} x_i^2}$. The inner product is represented as $\mathbf{x} \cdot \mathbf{y}$. Last but not least, $(a, \mathbf{x}^T)^T$ denotes the concatenation of a scalar $a$ with a vector $\mathbf{x}$.

On one hand, we have a matrix of $n$ vectors $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, ..., \mathbf{y}_n]$, such that $\mathbf{y}_i \in \mathbb{R}^d$, where $d$ is the number of dimensions we set for the latent factor. On the other hand, we have a query vector $\mathbf{x} \in \mathbb{R}^d$. Our objective is to retrieve the index achieving the maximum inner product.
$$
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmax}} \space \mathbf{x} \cdot \mathbf{y}_i
\end{align}
$$

The idea behind speeding up the maximum inner product operation is to transform the problem into a distance minimization problem, i.e. a nearest neighbor search.

$$
\begin{align}
f(\mathbf{Y}, \mathbf{x}) = \underset{i}{\text{argmin}} \space {\Vert \mathbf{x} - \mathbf{y}_i \Vert}^2
\end{align}
$$

Once we transform the problem into a euclidean distance problem, there is a plethora of algorithms/packages available for doing fast similarity search. To do so, we are going to apply a transformation function to our matrix, $\mathbf{Y}$, and our query vector, $\mathbf{x}$. Note that the idea here is only to perform a transformation on top of the existing $\mathbf{x}$ and $\mathbf{y}$, not to design a whole new algorithm that learns embeddings/latent factors directly via distance minimization, as that would prevent us from using existing matrix factorization algorithms.
The order-preserving transformation adds an additional dimension to each of the latent factors:

$$
\begin{align}
\mathbf{y}_i^* &= \big(\sqrt{\phi^2 - {\Vert \mathbf{y}_i \Vert}^2 }, \mathbf{y}_i^T\big)^T, \text{where } \phi = \underset{i}{\text{max}} \Vert \mathbf{y}_i \Vert \\
\mathbf{x}^* &= (0, \mathbf{x}^T)^T
\end{align}
$$

As

$$
\begin{align}
{\Vert \mathbf{x}^* \Vert}^2 &= {\Vert \mathbf{x} \Vert}^2 \\
{\Vert \mathbf{y}_i^* \Vert}^2 &= \phi^2 - {\Vert \mathbf{y}_i \Vert}^2 + {\Vert \mathbf{y}_i \Vert}^2 = \phi^2 \\
\mathbf{x}^* \cdot \mathbf{y}^*_i &= \sqrt{\phi^2 - {\Vert \mathbf{y}_i \Vert}^2 } \cdot 0 + \mathbf{x} \cdot \mathbf{y}_i = \mathbf{x} \cdot \mathbf{y}_i
\end{align}
$$

To link the maximum inner product to the distance minimization problem, we then have:

$$
\begin{align}
{\Vert \mathbf{x}^* - \mathbf{y}_i^* \Vert}^2 = {\Vert \mathbf{x}^* \Vert}^2 + {\Vert \mathbf{y}_i^* \Vert}^2 - 2 \cdot \mathbf{x}^* \cdot \mathbf{y}^*_i = {\Vert \mathbf{x} \Vert}^2 + \phi^2 - 2 \cdot \mathbf{x} \cdot \mathbf{y}_i
\end{align}
$$

Since both $\Vert \mathbf{x} \Vert$ and $\phi$ are independent of the index $i$, minimizing this distance is equivalent to maximizing the inner product, which concludes our order-preserving transformation. After the transformation, our original matrices have 1 extra dimension. The next step is to pick our favorite nearest neighbor algorithm and use it to generate the predictions. Popular options at the time of writing include faiss, nmslib, and hnswlib. The ann-benchmarks project also lists comparisons between different open-source nearest neighbor search algorithms/packages. Let's now take a look at these concepts in practice.

Matrix Factorization

We'll be using the movielens data to illustrate the concept.
```python
import os
import numpy as np
import pandas as pd
from subprocess import call

file_dir = 'ml-100k'
file_path = os.path.join(file_dir, 'u.data')
if not os.path.isdir(file_dir):
    call(['curl', '-O', 'http://files.grouplens.org/datasets/movielens/' + file_dir + '.zip'])
    call(['unzip', file_dir + '.zip'])

names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv(file_path, sep='\t', names=names)
print('data dimension: \n', df.shape)
df.head()

users_col = 'user_id'
items_col = 'item_id'
value_col = 'rating'
time_col = 'timestamp'
for col in (users_col, items_col):
    df[col] = df[col].astype('category')
```
recsys/max_inner_product/max_inner_product.ipynb
ethen8181/machine-learning
mit
For the train/test split, the process is to split each user's behavior chronologically. e.g. if a user interacted with 10 items and we specify a test size of 0.2, then the first 8 items the user interacted with fall in the training set, and the last 2 items belong to the test set.
```python
def train_test_user_time_split(df: pd.DataFrame, test_size: float=0.2):
    train_size = 1 - test_size
    df_train_user = []
    df_test_user = []
    df_grouped = df.sort_values(time_col).groupby(users_col)
    for name, df_group in df_grouped:
        n_train = int(df_group.shape[0] * train_size)
        df_group_train = df_group.iloc[:n_train]
        df_group_test = df_group.iloc[n_train:]
        df_train_user.append(df_group_train)
        df_test_user.append(df_group_test)

    df_train = pd.concat(df_train_user, ignore_index=True)
    df_test = pd.concat(df_test_user, ignore_index=True)
    return df_train, df_test

test_size = 0.2
df_train, df_test = train_test_user_time_split(df, test_size)
print('train size: ', df_train.shape[0])
print('test size: ', df_test.shape[0])
```
The model we'll be using is Bayesian Personalized Ranking from the implicit library.
```python
from scipy.sparse import csr_matrix
from implicit.bpr import BayesianPersonalizedRanking

n_users = df[users_col].cat.categories.shape[0]
n_items = df[items_col].cat.categories.shape[0]

# implicit library expects items to be rows
# and users to be columns of the sparse matrix
rows = df_train[items_col].cat.codes.values
cols = df_train[users_col].cat.codes.values
values = df_train[value_col].astype(np.float32)
item_user = csr_matrix((values, (rows, cols)), shape=(n_items, n_users))
item_user

# we won't be doing any hyperparameter tuning
# as training the "best" model is not the main purpose here
bpr = BayesianPersonalizedRanking()
bpr.fit(item_user)
```
The model object also provides a .recommend method that generates the recommendation for a user.
```python
user_id = 0
topn = 5

user_item = item_user.T.tocsr()
recommendations = bpr.recommend(user_id, user_item, topn, filter_already_liked_items=False)
recommendations
```
We can also generate the recommendations ourselves. We'll first confirm that the recommend function we've implemented matches the one provided by the library, then implement a recommend_all function that generates the recommendations for all users. The latter will be used later to compare against nearest neighbor search on the order-transformed matrix.
```python
def recommend(query_factors, index_factors, query_id, topn=5):
    output = query_factors[query_id].dot(index_factors.T)
    argpartition_indices = np.argpartition(output, -topn)[-topn:]
    sort_indices = np.argsort(output[argpartition_indices])[::-1]
    labels = argpartition_indices[sort_indices]
    distances = output[labels]
    return labels, distances
```
Different models/libraries have different ways of extracting the item and user factors/embeddings; we assign them to index_factors and query_factors to make all downstream code agnostic of each library's implementation.
```python
index_factors = bpr.item_factors
query_factors = bpr.user_factors

labels, distances = recommend(query_factors, index_factors, user_id, topn)
print(labels)
print(distances)

def recommend_all(query_factors, index_factors, topn=5):
    output = query_factors.dot(index_factors.T)
    argpartition_indices = np.argpartition(output, -topn)[:, -topn:]

    x_indices = np.repeat(np.arange(output.shape[0]), topn)
    y_indices = argpartition_indices.flatten()
    top_value = output[x_indices, y_indices].reshape(output.shape[0], topn)
    top_indices = np.argsort(top_value)[:, ::-1]

    y_indices = top_indices.flatten()
    top_indices = argpartition_indices[x_indices, y_indices]
    labels = top_indices.reshape(-1, topn)
    distances = output[x_indices, top_indices].reshape(-1, topn)
    return labels, distances

labels, distances = recommend_all(query_factors, index_factors)
print(labels)
print(distances)
```
Implementation

To implement our order-preserving transformation, we first apply the transformation to our index factors. Recall that the formula is: let $\phi = \underset{i}{\text{max}} \Vert \mathbf{y}_i \Vert$; then $\mathbf{y}_i^* = g(\mathbf{y}_i) = \big(\sqrt{\phi^2 - {\Vert \mathbf{y}_i \Vert}^2 }, \mathbf{y}_i^T\big)^T$.
```python
def augment_inner_product(factors):
    normed_factors = np.linalg.norm(factors, axis=1)
    max_norm = normed_factors.max()

    extra_dim = np.sqrt(max_norm ** 2 - normed_factors ** 2).reshape(-1, 1)
    augmented_factors = np.append(factors, extra_dim, axis=1)
    return max_norm, augmented_factors

print('pre shape: ', index_factors.shape)
max_norm, augmented_index_factors = augment_inner_product(index_factors)
augmented_index_factors.shape
```
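As a quick sanity check, the transformation really is order preserving: the item with the largest inner product against a query is exactly the nearest augmented item to the augmented query. A standalone sketch with random stand-in factors (not the model's actual factors):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 8
Y = rng.normal(size=(n, d))   # stand-in "index factors", one row per item
x = rng.normal(size=d)        # stand-in query vector

# augment: y_i* = (sqrt(phi^2 - ||y_i||^2), y_i),  x* = (0, x)
norms = np.linalg.norm(Y, axis=1)
phi = norms.max()
Y_aug = np.hstack([np.sqrt(phi ** 2 - norms ** 2)[:, None], Y])
x_aug = np.concatenate([[0.0], x])

# maximum inner product on the original vectors equals
# the nearest neighbor on the augmented vectors
best_ip = np.argmax(Y @ x)
best_nn = np.argmin(np.linalg.norm(Y_aug - x_aug, axis=1))
print(best_ip == best_nn)  # → True
```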
Our next step is to use our favorite nearest neighbor search algorithm/library to conduct the search. We'll be leveraging hnswlib in this example; the details behind this search algorithm are beyond the scope of this document.
```python
import time
import hnswlib

def build_hnsw(factors, space, ef_construction, M):
    # Declaring the index
    max_elements, dim = factors.shape
    # possible options for space are l2, cosine or ip
    hnsw = hnswlib.Index(space, dim)

    # Initializing the index - the maximum number of elements should be known beforehand
    hnsw.init_index(max_elements, M, ef_construction)

    # Element insertion (can be called several times)
    hnsw.add_items(factors)
    return hnsw

# the library directly supports inner product,
# this might not be the case for every nearest neighbor search library
space = 'ip'
ef_construction = 400
M = 24

start = time.time()
hnsw = build_hnsw(augmented_index_factors, space, ef_construction, M)
build_time = time.time() - start
build_time
```
To generate the prediction, we first transform the incoming "queries": $\mathbf{x}^* = h(\mathbf{x}) = (0, \mathbf{x}^T)^T$.
```python
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
augmented_query_factors.shape

k = 5

# Controlling the recall by setting ef, should always be > k
hnsw.set_ef(70)

# retrieve the top-k nearest neighbors
label, distance = hnsw.knn_query(augmented_query_factors, k=k)
print(label)

# the distance returned by hnsw is 1 - inner product, hence
# we convert it back to just inner product
print(1 - distance)
```
Benchmark

We can time the original recommend method using maximum inner product against the new method using the order-preserving transformed matrices with nearest neighbor search.
```python
%%timeit
recommend_all(query_factors, index_factors, topn=k)
```

```python
%%timeit
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
hnsw.knn_query(augmented_query_factors, k=k)
```
Note that the timing is highly dependent on the dataset. We'd observe a much larger speedup if the number of items/labels in the index factor were larger. In the movielens dataset, we only had to rank the top items for each user among 1.6K items; in a much larger dataset the number of items could easily reach 100K or even millions, and that's when we'll see the real potential of this method.

Another thing worth checking is the quality of the predictions from the new method. Since hnswlib is technically an approximate nearest neighbor algorithm, we can measure how much the approximate top recommendations overlap with the original top recommendations, to make sure we are using the right parameters for the search algorithm. Notation-wise:

\begin{align}
\text{overlap@k} = \frac{|L_{rec} \cap L_{opt}|}{k}
\end{align}

Where $L_{rec}$ and $L_{opt}$ are the lists of top k approximate recommendations and top k optimal/original recommendations respectively.
```python
labels, distances = recommend_all(query_factors, index_factors, topn=k)
hnsw_labels, hnsw_distances = hnsw.knn_query(augmented_query_factors, k=k)

def compute_label_precision(optimal_labels, reco_labels):
    n_labels = len(optimal_labels)
    label_precision = 0.0
    for optimal_label, reco_label in zip(optimal_labels, reco_labels):
        topn = len(reco_label)
        precision = len(set(optimal_label) & set(reco_label)) / topn
        label_precision += (precision / n_labels)

    return round(label_precision, 3)

# as expected, the precision of the labels against themselves should be 1
label_precision = compute_label_precision(labels, labels)
label_precision

# ensure the approximate nearest neighbor search is of good quality
label_precision = compute_label_precision(labels, hnsw_labels)
label_precision
```
Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions: How is this possible? Is this desirable? The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) each activation in the previous layer with probability p (generally 0.5). This only happens during training, not when calculating the accuracy on the validation set, which is why the validation set can show higher accuracy than the training set. The purpose of dropout is to avoid overfitting. By deleting parts of the neural network at random during training, it ensures that no one part of the network can overfit to one part of the training set. The creation of dropout was one of the key developments in deep learning, and has allowed us to create rich models without overfitting. However, it can also result in underfitting if overused, and this is something we should be careful of with our model. So the answer to (2) is: this is probably not desirable. It is likely that we can get better validation set results with less (or no) dropout, if we're seeing that validation accuracy is higher than training accuracy - a strong sign of underfitting. So let's try removing dropout entirely, and see what happens! (We had dropout in this model already because the VGG authors found it necessary for the imagenet competition. But that doesn't mean it's necessary for dogs v cats, so we will do our own analysis of regularization approaches from scratch.)

Removing dropout

Our high level approach here will be to start with our fine-tuned cats vs dogs model (with dropout), then fine-tune all the dense layers, after removing dropout from them. The steps we will take are:

- Re-create and load our modified VGG model with binary dependent (i.e. dogs v cats)
- Split the model between the convolutional (conv) layers and the dense layers
- Pre-calculate the output of the conv layers, so that we don't have to redundantly re-calculate them on every epoch
- Create a new model with just the dense layers, and dropout p set to zero
- Train this new model using the output of the conv layers as training data

As before we need to start with a working model, so let's bring in our working VGG 16 model and change it to predict our binary dependent...
??vgg_ft
deeplearning1/nbs/lesson3.ipynb
yingchi/fastai-notes
apache-2.0
```py
def vgg_ft(out_dim):
    vgg = Vgg16()
    vgg.ft(out_dim)
    model = vgg.model
    return model
```
??Vgg16.ft
```py
def ft(self, num):
    """
    Replace the last layer of the model with a Dense (fully connected)
    layer of num neurons. Will also lock the weights of all layers except
    the new layer so that we only learn weights for the last layer in
    subsequent training.

    Args:
        num (int): Number of neurons in the Dense layer
    Returns:
        None
    """
    model = self.model
    model.pop()
    for layer in model.layers:
        layer.trainable = False
    model.add(Dense(num, activation='softmax'))
    self.compile()
```
model = vgg_ft(2)
We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. Convolution layers take a lot of time to compute, but Dense layers do not. We'll start by finding this layer in our model, and creating a new model that contains just the layers up to and including this layer:
```python
layers = model.layers

# find the last convolution layer
last_conv_idx = [index for index, layer in enumerate(layers)
                 if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]

conv_layers = layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx + 1:]
```
For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
```python
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer):
    return [o / 2 for o in layer.get_weights()]

# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.000001, rho=0.7)

def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(4096, activation='relu'),
        Dropout(0.),
        Dense(2, activation='softmax')
        ])

    for l1, l2 in zip(model.layers, fc_layers):
        l1.set_weights(proc_wgts(l2))

    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

fc_model = get_fc_model()
```
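The halving can be motivated by a quick expectation argument (a standalone numpy sketch, not the Keras internals): with non-inverted dropout at p = 0.5, only half of the incoming activations survive on average during training, so the downstream weights were learned against inputs at roughly half scale; removing dropout without halving the weights would roughly double the pre-activation sums.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random(100_000)  # stand-in layer activations

# non-inverted dropout with p = 0.5: zero out half the units at train time
mask = rng.random(activations.shape) < 0.5
dropped_mean = (activations * mask).mean()

# the mean surviving signal is about half the full signal, so after
# removing dropout we halve the weights to keep pre-activations at scale
print(dropped_mean, activations.mean() / 2)  # roughly equal
```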
What we typically talk about are "$p$-values": the probability, if the null hypothesis were true, of observing data that disfavor the null hypothesis at least as strongly as the data we actually observed. To go from the $\chi^2$ distribution to the $p$-value we take the complement of the cumulative distribution (sometimes called the "survival function").
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.linspace(0., 25., 251)  # grid of test-statistic values

pvalue_1dof = 1. - stats.chi2.cdf(x, 1)

fig2, ax2 = plt.subplots(1, 1)
ax2.set_xlabel('Test Statistic')
ax2.set_ylabel('p-value')
ax2.set_xlim(0., 25.)
curve = ax2.semilogy(x, pvalue_1dof, 'r-', lw=1, label='p-value')
```
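Note that scipy also exposes the survival function directly as stats.chi2.sf, which equals 1 - cdf but is numerically better behaved in the far tail; a quick check that the two agree:

```python
import numpy as np
from scipy import stats

x = np.linspace(0.0, 25.0, 251)
p_manual = 1.0 - stats.chi2.cdf(x, 1)   # complement of the CDF
p_sf = stats.chi2.sf(x, 1)              # survival function, same quantity
print(np.allclose(p_manual, p_sf))      # → True
```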
examples/Fermi/FermiOverview.ipynb
enoordeh/StatisticalMethods
gpl-2.0
By way of comparison, here is what the $p$-value looks like for 1,2,3, and 4 degrees of freedom.
```python
pvalue_2dof = 1. - stats.chi2.cdf(x, 2)
pvalue_3dof = 1. - stats.chi2.cdf(x, 3)
pvalue_4dof = 1. - stats.chi2.cdf(x, 4)

fig3, ax3 = plt.subplots(1, 1)
ax3.set_xlabel('Test Statistic')
ax3.set_ylabel('p-value')
ax3.set_xlim(0., 25.)
ax3.semilogy(x, pvalue_1dof, 'r-', lw=1, label='1 DOF')
ax3.semilogy(x, pvalue_2dof, 'b-', lw=1, label='2 DOF')
ax3.semilogy(x, pvalue_3dof, 'g-', lw=1, label='3 DOF')
ax3.semilogy(x, pvalue_4dof, 'y-', lw=1, label='4 DOF')
leg = ax3.legend()
```
Converting p-values to standard deviations

We often choose to report signal significance in terms of the equivalent number of standard deviations ($\sigma$) away from the mean of a normal (Gaussian) distribution you would have to go to obtain a given $p$-value. Here are the $p$-values corresponding to 1, 2, 3, 4, and 5 sigma.
```python
sigma_p = []
for i in range(1, 6):
    print("%i sigma = %.2e p-value" % (i, 2 * stats.norm.sf(i)))
    sigma_p.append(2 * stats.norm.sf(i))
```
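The conversion can also be inverted with the inverse survival function, recovering the significance from a two-sided p-value; a small sketch:

```python
from scipy import stats

for i in range(1, 6):
    p = 2 * stats.norm.sf(i)        # two-sided p-value for i sigma
    sigma = stats.norm.isf(p / 2)   # invert back to the significance
    print("%i sigma -> p = %.2e -> %.2f sigma" % (i, p, sigma))
```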
Here is a plot showing how those p-values map onto values of the TS.
```python
fig6, ax6 = plt.subplots(1, 1)
ax6.set_xlabel('Test Statistic')
ax6.set_ylabel('p-value')
ax6.set_xlim(0., 25.)
ax6.semilogy(x, pvalue_1dof, 'r-', lw=1, label='1 DOF')
ax6.semilogy(x, pvalue_2dof, 'b-', lw=1, label='2 DOF')
ax6.semilogy(x, pvalue_3dof, 'g-', lw=1, label='3 DOF')
ax6.semilogy(x, pvalue_4dof, 'y-', lw=1, label='4 DOF')
ax6.hlines(sigma_p, 0, 25.0, linestyles=u'dotted')
leg = ax6.legend()
```
You will notice that for 1 DOF, the significance expressed in standard deviations is simply $\sigma = \sqrt{TS}$, i.e., the $\chi^2$ distribution with 1 DOF is simply the positive half of the normal distribution in $\sqrt{TS}$.
```python
for i in range(1, 6):
    print("%i sigma = %.2e == %.2e" % (i, 2 * stats.norm.sf(i), stats.chi2.sf(i * i, 1)))
```
Examples of the $\chi^2$ distribution and p-values for Chernoff's theorem

Chernoff's theorem applies when half of the trials are expected to give negative fluctuations where we expect the signal. Since we have bounded the parameter at zero, the likelihood in those trials is maximized at zero, which is the same as the null hypothesis, so those trials give $TS = 0$. Here is a comparison of the $p$-value for cases where Wilks' and Chernoff's theorems apply, for a single degree of freedom.
```python
pvalue_1dof_cher = 0.5 * (1. - stats.chi2.cdf(x, 1))

fig4, ax4 = plt.subplots(1, 1)
ax4.set_xlabel('Test Statistic')
ax4.set_ylabel('p-value')
ax4.set_xlim(0., 25.)
ax4.semilogy(x, pvalue_1dof, 'r-', lw=1, label='Unbounded')
ax4.semilogy(x, pvalue_1dof_cher, 'r--', lw=1, label='Bounded')
leg = ax4.legend()
```
Degrees of freedom vs. trials factor

A very common mistake that people make is to confuse degrees of freedom with trials factors. As a concrete example, consider a search for a new point source, where we allow the position of the source to vary in our fitting procedure. In that case we would typically have (at least) 3 additional degrees of freedom w.r.t. the null hypothesis (the magnitude of the source and 2 spatial coordinates). We can consider two limiting cases. If we only allow the position of the source to move a small amount compared to the image resolution, then we expect the distribution of $TS$ to be $\chi^2$-distributed with 3 DOF. If, on the other hand, we were to fit for a signal at three different locations separated by much more than the image resolution, then we would have three independent trials with a single degree of freedom each. In that case the smallest $p$-value would be: $p_{\rm glob} = 1 - (1-p)^3$. Here is a comparison of those two situations, for bounded parameters.
```python
pvalue_1dof_cher_3trial = 1. - (1 - 0.5 * (1. - stats.chi2.cdf(x, 1))) ** 3

fig5, ax5 = plt.subplots(1, 1)
ax5.set_xlabel('Test Statistic')
ax5.set_ylabel('p-value')
ax5.set_xlim(0., 25.)
ax5.semilogy(x, pvalue_1dof_cher, 'r--', lw=1, label='1Trial, 3DOF')
ax5.semilogy(x, pvalue_1dof_cher_3trial, 'r-.', lw=1, label='3Trials, 1DOF')
leg = ax5.legend()
```
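A quick numeric sketch of the trials factor formula above (the value TS = 9 is an arbitrary illustration): for small per-trial p-values, $1 - (1-p)^3$ is approximately 3 times the local p-value.

```python
from scipy import stats

# local p-value for one bounded (Chernoff) trial with TS = 9
ts = 9.0
p_local = 0.5 * stats.chi2.sf(ts, 1)

# global p-value after 3 independent trials
p_global = 1.0 - (1.0 - p_local) ** 3
print(p_local, p_global)  # for small p, p_global is roughly 3 * p_local
```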
Euler's method Euler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation $$ \frac{dy}{dx} = f(y(x), x) $$ with the initial condition: $$ y(x_0)=y_0 $$ Euler's method performs updates using the equations: $$ y_{n+1} = y_n + h f(y_n,x_n) $$ $$ h = x_{n+1} - x_n $$ Write a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:
```python
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = y0
    for n in range(0, len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

assert np.allclose(solve_euler(lambda y, x: 1, 0, [0, 1, 2]), [0, 1, 2])
```
assignments/assignment10/ODEsEx01.ipynb
sraejones/phys202-2015-work
mit
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation: $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$ Write a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:
```python
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y, x) where
        y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list, tuple
        The array of times at which to solve the diff-eq.

    Returns
    -------
    y : np.ndarray
        Array of solutions y[i] = y(x[i])
    """
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = y0
    for n in range(0, len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n] + h / 2 * derivs(y[n], x[n]), x[n] + h / 2)
    return y

assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0, 1, 2]), [0, 1, 2])
```
assignments/assignment10/ODEsEx01.ipynb
sraejones/phys202-2015-work
mit
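To see why the midpoint rule is worth its extra derivative evaluation, both update rules can be run on the same toy problem; the helper names below are illustrative, not from the assignment:

```python
import math

def step_euler(f, y, x, h):
    return y + h * f(y, x)

def step_midpoint(f, y, x, h):
    # evaluate the slope at the interval midpoint, as in the update equation above
    return y + h * f(y + h / 2 * f(y, x), x + h / 2)

def solve(step, f, y0, h, n):
    y, x = y0, 0.0
    for _ in range(n):
        y = step(f, y, x, h)
        x += h
    return y

f = lambda y, x: y                 # dy/dx = y, exact y(1) = e
e_err = abs(solve(step_euler, f, 1.0, 0.01, 100) - math.e)
m_err = abs(solve(step_midpoint, f, 1.0, 0.01, 100) - math.e)
print(e_err, m_err)                # the midpoint error is orders of magnitude smaller
```

The midpoint method is second order, so its error shrinks like $h^2$ rather than $h$.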
In the following cell you are going to solve the above ODE using four different algorithms: Euler's method Midpoint method odeint Exact Here are the details: Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$). Define the derivs function for the above differential equation. Use the solve_euler, solve_midpoint, odeint and solve_exact functions to compute the solutions with the 4 approaches. Visualize the solutions on a single figure with two subplots: Plot the $y(x)$ versus $x$ for each of the 4 approaches. Plot $\left|y(x)-y_{exact}(x)\right|$ versus $x$ for each of the 3 numerical approaches. Your visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness. While your final plot will use $N=11$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.
# YOUR CODE HERE
x = np.linspace(0, 1.0, 11)
y0 = 0.0  # set this to the problem's initial condition y(x[0])

def derivs(y, x):
    return x + 2*y

plt.plot(x, solve_euler(derivs, y0, x), label='euler')
plt.plot(x, solve_midpoint(derivs, y0, x), label='midpoint')
plt.plot(x, solve_exact(x), label='exact')
plt.plot(x, odeint(derivs, y0, x), label='odeint')
plt.xlabel('$x$')
plt.ylabel('$y(x)$')
plt.legend(loc='best')

assert True # leave this for grading the plots
assignments/assignment10/ODEsEx01.ipynb
sraejones/phys202-2015-work
mit
Expected output: test: Hello World <font color='blue'> What you need to remember: - Run your cells using SHIFT+ENTER (or "Run cell") - Write code in the designated areas using Python 3 only - Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp(). Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function. Reminder: $sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning. <img src="images/Sigmoid.png" style="width:500px;height:228px;"> To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
# GRADED FUNCTION: basic_sigmoid import math def basic_sigmoid(x): """ Compute sigmoid of x. Arguments: x -- A scalar Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1.0 + math.exp(-x)) ### END CODE HERE ### return s basic_sigmoid(3)
course-deeplearning.ai/course1-nn-and-deeplearning/Python+Basics+With+Numpy+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Any time you need more info on a numpy function, we encourage you to look at the official documentation. You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation. Exercise: Implement the sigmoid function using numpy. Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now. $$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\ \end{pmatrix}\tag{1} $$
# GRADED FUNCTION: sigmoid import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function() def sigmoid(x): """ Compute the sigmoid of x Arguments: x -- A scalar or numpy array of any size Return: s -- sigmoid(x) """ ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1.0 + np.exp(-x)) ### END CODE HERE ### return s x = np.array([1, 2, 3]) sigmoid(x)
course-deeplearning.ai/course1-nn-and-deeplearning/Python+Basics+With+Numpy+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **sigmoid([1,2,3])**</td> <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> </tr> </table> 1.2 - Sigmoid gradient As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function. Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$\mathrm{sigmoid\_derivative}(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$ You often code this function in two steps: 1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful. 2. Compute $\sigma'(x) = s(1-s)$
# GRADED FUNCTION: sigmoid_derivative def sigmoid_derivative(x): """ Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x. You can store the output of the sigmoid function into variables and then use it to calculate the gradient. Arguments: x -- A scalar or numpy array Return: ds -- Your computed gradient. """ ### START CODE HERE ### (≈ 2 lines of code) s = 1.0 / (1.0 + np.exp(-x)) ds = s * (1-s) ### END CODE HERE ### return ds x = np.array([1, 2, 3]) print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
course-deeplearning.ai/course1-nn-and-deeplearning/Python+Basics+With+Numpy+v3.ipynb
liufuyang/deep_learning_tutorial
mit
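A derivative formula like (2) is easy to verify numerically: a centered finite difference of the sigmoid should match $s(1-s)$ to high accuracy. A self-contained sketch using `math` rather than numpy:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1 - s)

# centered difference (f(x+h) - f(x-h)) / (2h) approximates f'(x) to O(h^2)
h = 1e-5
for x in (-2.0, 0.0, 1.0, 3.0):
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    print(x, sigmoid_derivative(x), numeric)
```

At $x=0$ the derivative is exactly $0.5 \cdot 0.5 = 0.25$, the sigmoid's steepest point.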
Basic vectorization Vectorizing text is a fundamental concept in applying both supervised and unsupervised learning to documents. Basically, you can think of it as turning the words in a given text document into features, represented by a matrix. Rather than explicitly defining our features, as we did for the donor classification problem, we can instead take advantage of tools, called vectorizers, that turn each word into a feature best described as "The number of times Word X appears in this document". Here's an example with one bill title:
from sklearn.feature_extraction.text import CountVectorizer

bill_titles = ['An act to amend Section 44277 of the Education Code, relating to teachers.']

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(bill_titles).toarray()

print(features)
print(vectorizer.get_feature_names())
class5_1/vectorization.ipynb
datapolitan/lede_algorithms
gpl-2.0
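The "count of word X in this document" idea can be reproduced without scikit-learn; the sketch below uses `collections.Counter` with a tokenizer that approximates CountVectorizer's default behavior (lowercasing, tokens of two or more word characters), though it skips the rest of sklearn's preprocessing:

```python
from collections import Counter
import re

docs = ['An act to amend Section 44277 of the Education Code, relating to teachers.',
        'An act relating to education finance.']

# tokenize roughly the way CountVectorizer does: lowercase, 2+ word characters
tokenize = lambda text: re.findall(r'\b\w\w+\b', text.lower())

vocab = sorted(set(w for d in docs for w in tokenize(d)))
matrix = [[Counter(tokenize(d))[w] for w in vocab] for d in docs]

print(vocab)
print(matrix)   # "to" appears twice in the first title
```

Each row of `matrix` is one document; each column is the count of one vocabulary word, exactly the "document-term matrix" the vectorizer builds.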
The following block of code illustrates how to evaluate a single sequence. Additionally we show how one can pass in the information using NumPy arrays.
# load dictionaries query_wl = [line.rstrip('\n') for line in open(data['query']['file'])] slots_wl = [line.rstrip('\n') for line in open(data['slots']['file'])] query_dict = {query_wl[i]:i for i in range(len(query_wl))} slots_dict = {slots_wl[i]:i for i in range(len(slots_wl))} # let's run a sequence through # seq = 'BOS flights from new york to seattle EOS' seq = 'BOS flights from new york to paris EOS' w = [query_dict[w] for w in seq.split()] # convert to word indices print(w) onehot = np.zeros([len(w),len(query_dict)], np.float32) for t in range(len(w)): onehot[t,w[t]] = 1 #x = C.sequence.input_variable(vocab_size) pred = z(x).eval({x:[onehot]})[0] print(pred.shape) best = np.argmax(pred,axis=1) print(best) list(zip(seq.split(),[slots_wl[s] for s in best]))
DAT236x Deep Learning Explained/Lab6_TextClassification_with_LSTM.ipynb
bourneli/deep-learning-notes
mit
Reformat data
from utils import preprocess_data raw_data = [] for ii in data_list: im = Image.open(ii) idat = np.array(im) > 100 idat = idat.flatten() raw_data.append(idat) np.random.seed(111) np.random.shuffle(raw_data) data = preprocess_data(raw_data) data
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Visualize the input data
# Examine one im = Image.open(data_list[0]) im data.shape %matplotlib inline from matplotlib.pyplot import imshow import matplotlib.pyplot as plt from utils import display_single_png, display_all_png display_single_png(idat) plt.show() display_all_png(data) plt.show()
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
DO
from ART1 import ART1
from collections import defaultdict

# create network
input_row_size = 100
max_categories = 8
rho = 0.4

network = ART1(n=input_row_size, m=max_categories, rho=rho)

# preprocess data
data_cleaned = preprocess_data(data)

# shuffle data?
np.random.seed(155)
np.random.shuffle(data_cleaned)

# multiple epochs?
network.compute(data_cleaned)

# # learn data array, row by row
# for row in data_cleaned:
#     network.learn(row)

print()
print("n rows of data: ", len(data_cleaned))
print("max categories allowed: ", max_categories)
print("rho: ", rho)
# print("n categories used: ", network.n_cats)

print(network.Y)
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Visualize cluster weights as an input pattern The cluster unit weights can be represented visually, representing the learned patterns for that unit.
# print learned clusters
for idx, cluster in enumerate(network.Bij.T):
    print("Cluster Unit #{}".format(idx))
    display_single_output(cluster)
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Sanity check: predict cluster centers What if we take one of these cluster "centers" and feed it back into the network for prediction?
# Cluster_index
clust_idx = 2

print("Target: ", clust_idx)
idata = network.Bij.T[clust_idx]
idata = idata.astype(bool).astype(int)
display_single_output(idata)

# Prediction
pred = network.predict(idata)
print("prediction (cluster index): ", pred)
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Examine the predictions visually
# output results, row by row
output_dict = defaultdict(list)

for row, row_cleaned in zip(data, data_cleaned):
    pred = network.predict(row_cleaned)
    output_dict[pred].append(row)

for k, v in output_dict.items():
    print("Cluster #{} ({} members)".format(k, len(v)))
    print('-' * 20)
    for row in v:
        display_single_output(row)
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Sanity check: Modify input pattern randomly By making random variations of the input pattern, we can judge the ability of the network to generalize input patterns not seen in the training data.
# of tests
ntests = 10

# number of bits in the pattern to modify
nchanges = 30

for test in range(ntests):
    # cluster_index
    clust_idx = np.random.randint(network.output_size)

    print("Target: ", clust_idx)
    idata = network.Bij.T[clust_idx]
    idata = idata.astype(bool).astype(int)

    # modify data
    for ii in range(nchanges):
        rand_element = np.random.randint(idata.shape[0])
        # flip this bit
        if idata[rand_element] == 0:
            idata[rand_element] = 1
        else:
            idata[rand_element] = 0

    display_single_output(idata)

    # prediction
    pred = network.predict(idata)
    print("prediction (cluster index): ", pred)
    display_single_output(network.Bij.T[pred])
    print("-" * 20)

plt.show()

# print training data
display_output(data)
plt.show()
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
The above is the main function of the Levenberg-Marquardt algorithm. The code may appear daunting at first, but all it does is implement the Levenberg-Marquardt update rule and some checks of convergence. We can now apply it to the problem with relative ease to obtain a numerical solution for our parameter vector.
solved_x = levenberg_marquardt(d, t, x, sinusoid_residual, sinusoid_jacobian)

print(solved_x)
2_Mathematical_Groundwork/2_11_least_squares.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
A final, important thing to note is that the Levenberg-Marquardt algorithm is already implemented in Python. It is used in scipy.optimise.leastsq. This is often useful for doing rapid numerical solution without the need for an analytic Jacobian. As a simple proof, we can call the built-in method to verify our results.
x = np.array([8., 43.5, 1.05])

leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))

print("scipy.optimize.leastsq: ", leastsq_x[0])
print("Our LM: ", solved_x)

plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
2_Mathematical_Groundwork/2_11_least_squares.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
In this case, the built-in method clearly fails. I have done this deliberately to illustrate a point - a given implementation of an algorithm might not be the best one for your application. In this case, the manner in which the tuning parameters are handled prevents the solution from converging correctly. This can be avoided by choosing a starting guess closer to the truth and once again highlights the importance of initial values in problems of this type.
x = np.array([8., 35., 1.05])

leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))

print("scipy.optimize.leastsq: ", leastsq_x[0])
print("Our LM: ", solved_x)

plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
2_Mathematical_Groundwork/2_11_least_squares.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
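The accept/reject damping at the heart of Levenberg-Marquardt is easiest to see with a single parameter, where the normal equations collapse to scalar arithmetic. The sketch below is illustrative only (it is not the `levenberg_marquardt` implementation used above) and fits a hypothetical one-parameter decay model to noiseless data:

```python
import math

def lm_fit_1param(model, dmodel, a0, ts, ds, iters=200):
    """Levenberg-Marquardt for one parameter: the normal equations
    (J^T J + lam) * delta = J^T r reduce to scalar arithmetic."""
    a, lam = a0, 1.0
    cost = sum((d - model(a, t)) ** 2 for t, d in zip(ts, ds))
    for _ in range(iters):
        JtJ = sum(dmodel(a, t) ** 2 for t in ts)
        Jtr = sum(dmodel(a, t) * (d - model(a, t)) for t, d in zip(ts, ds))
        delta = Jtr / (JtJ + lam)
        new_cost = sum((d - model(a + delta, t)) ** 2 for t, d in zip(ts, ds))
        if new_cost < cost:          # accept the step, trust the quadratic model more
            a, cost, lam = a + delta, new_cost, lam / 10
        else:                        # reject, lean back toward gradient descent
            lam *= 10
    return a

model  = lambda a, t: math.exp(-a * t)
dmodel = lambda a, t: -t * math.exp(-a * t)   # d(model)/da

ts = [i * 0.1 for i in range(21)]             # t in [0, 2]
ds = [model(1.5, t) for t in ts]              # noiseless data, true a = 1.5

a_hat = lm_fit_1param(model, dmodel, 5.0, ts, ds)
print(a_hat)   # converges to ~1.5 despite the poor starting guess of 5.0
```

Watching `lam` rise on rejected steps and fall on accepted ones is the whole trick: large `lam` means small, cautious gradient-descent-like steps; small `lam` recovers Gauss-Newton.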
Adding Datasets Next let's add a mesh dataset so that we can plot our Wilson-Devinney style meshes
b.add_dataset('mesh', times=np.linspace(0,10,6), dataset='mesh01', columns=['visibilities'])
2.1/examples/mesh_wd.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction.
afig, mplfig = b['secondary@mesh01@model'].plot(time=0.0, x='us', y='ws', ec='None', fc='visibilities', show=True)
2.1/examples/mesh_wd.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Setup
pudl_settings = pudl.workspace.setup.get_defaults() settings_file_name= 'etl_full.yml' etl_settings = EtlSettings.from_yaml( pathlib.Path(pudl_settings['settings_dir'], settings_file_name)) validated_etl_settings = etl_settings.datasets datasets = validated_etl_settings.get_datasets() eia_settings = datasets["eia"]
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
You can skip the settings step above and set these years/tables yourself here without using the settings files... just know they are not validated below, so they could be wrong and cause a failure some time into the run. It is HIGHLY RECOMMENDED that you use all of the years/tables.
eia860_tables = eia_settings.eia860.tables eia860_years = eia_settings.eia860.years eia860m = eia_settings.eia860.eia860m eia923_tables = eia_settings.eia923.tables eia923_years = eia_settings.eia923.years ds = Datastore()
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
Run extract step & phase 1 transform step this is pulled from pudl.etl._etl_eia()
# Extract EIA forms 923, 860 eia923_raw_dfs = pudl.extract.eia923.Extractor(ds).extract( settings=eia_settings.eia923 ) eia860_raw_dfs = pudl.extract.eia860.Extractor(ds).extract( settings=eia_settings.eia860 ) # if we are trying to add the EIA 860M YTD data, then extract it and append if eia860m: eia860m_raw_dfs = pudl.extract.eia860m.Extractor(ds).extract( settings=eia_settings.eia860 ) eia860_raw_dfs = pudl.extract.eia860m.append_eia860m( eia860_raw_dfs=eia860_raw_dfs, eia860m_raw_dfs=eia860m_raw_dfs ) # Transform EIA forms 923, 860 eia860_transformed_dfs = pudl.transform.eia860.transform( eia860_raw_dfs, eia860_settings=eia_settings.eia860 ) eia923_transformed_dfs = pudl.transform.eia923.transform( eia923_raw_dfs, eia923_settings=eia_settings.eia923 )
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
You have to re-run this cell every time you want to re-run the harvesting cell below (because pudl.transform.eia.harvesting removes columns from the dfs). This cell enables you to start with a fresh eia_transformed_dfs without needing to re-run the 860/923 transforms.
# create an eia transformed dfs dictionary eia_transformed_dfs = eia860_transformed_dfs.copy() eia_transformed_dfs.update(eia923_transformed_dfs.copy()) # Do some final cleanup and assign appropriate types: eia_transformed_dfs = { name: convert_cols_dtypes(df, data_source="eia") for name, df in eia_transformed_dfs.items() }
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
Run harvest w/ debug=True
# we want to investigate the harvesting of the plants in this case... entity = 'generators' # create the empty entities df to fill up entities_dfs = {} entities_dfs, eia_transformed_dfs, col_dfs = ( pudl.transform.eia.harvesting( entity, eia_transformed_dfs, entities_dfs, debug=True) )
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
Use col_dfs to explore harvested values
pmc = col_dfs['prime_mover_code'] pmc.prime_mover_code.unique()
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
Define a simple schema (the only info is location and point num)
schema = { 'geometry': 'Point','properties':{'num':'int' }} # # copy the projection from the tif file so we put the groundtrack # in the same coordinates driver='ESRI Shapefile' raster=rasterio.open(tiff_file,'r') crs = raster.crs.to_dict() proj = pyproj.Proj(crs) with fiona.open("ground_track", "w", driver=driver, schema=schema,crs=crs,encoding='utf-8') as output: for index,lon_lat in enumerate(zip(cloudsat_lons,cloudsat_lats)): lon,lat=lon_lat x,y=proj(lon,lat) #print(x,y,lon,lat) geometry={'coordinates': (x, y), 'type': 'Point'} out_dict=dict(geometry=geometry,properties={'num':index}) output.write(out_dict)
notebooks/shapefiles.ipynb
a301-teaching/a301_code
mit
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is known at a single interior point: $f(0,0)=1.0$. The function $f$ is not known at any other points. Create arrays x, y, f: x should be a 1d array of the x coordinates on the boundary and the 1 interior point. y should be a 1d array of the y coordinates on the boundary and the 1 interior point. f should be a 1d array of the values of f at the corresponding x and y coordinates. You might find that np.hstack is helpful.
# boundary points (integer-spaced) plus the single interior point (0, 0),
# built with np.hstack as suggested above
x = np.hstack([np.full(10, -5),       # left edge
               np.arange(-5, 6),      # top edge
               np.full(10, 5),        # right edge
               np.arange(4, -5, -1),  # bottom edge
               [0]])                  # interior point
y = np.hstack([np.arange(-5, 6),
               np.full(10, 5),
               np.arange(4, -6, -1),
               np.full(9, -5),
               [0]])
f = np.hstack([np.zeros(40), [1.0]])
assignments/assignment08/InterpolationEx02.ipynb
joshnsolomon/phys202-2015-work
mit
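A quick way to check the construction: the boundary of an 11×11 integer grid has 40 points, so with the single interior point every array should have length 41. A pure-Python sketch of the same point set:

```python
# integer-spaced points on the boundary of the square [-5, 5] x [-5, 5]
boundary = [(i, j) for i in range(-5, 6) for j in range(-5, 6)
            if abs(i) == 5 or abs(j) == 5]

points = boundary + [(0, 0)]            # add the one interior point
values = [0.0] * len(boundary) + [1.0]  # f = 0 on the boundary, f(0,0) = 1

print(len(boundary), len(points))       # 40 boundary points, 41 points total
```

The count works out because an 11×11 grid has 121 points and its 9×9 interior has 81, leaving 40 on the perimeter.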
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
plt.figure(figsize=(10,7))
plt.contourf(xnew,ynew,Fnew,cmap='gist_rainbow');
plt.colorbar();
plt.title('contour plot of scalar field f(x,y)');

assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
joshnsolomon/phys202-2015-work
mit
Problem: Implement the Forward Algorithm Now it's time to put it all together. We create a table to hold the results and build it up from front to back. Along with the results, we return the marginal probability that can be compared with the backward algorithm's below.
import numpy as np np.set_printoptions(suppress=True) def forward(params, observations): pi, A, B = params N = len(observations) S = pi.shape[0] alpha = np.zeros((N, S)) # base case # p(z1) * p(x1|z1) alpha[0, :] = pi * B[observations[0], :] # recursive case - YOUR CODE GOES HERE return (alpha, np.sum(alpha[N-1,:])) forward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
handsOn_lecture18_gmm-hmm/handsOn_lecture18_gmm-hmm.ipynb
eecs445-f16/umich-eecs445-f16
mit
Problem: Implement the Backward Algorithm If you implemented both correctly, the second return value (the marginals) from each method should match.
def backward(params, observations): pi, A, B = params N = len(observations) S = pi.shape[0] beta = np.zeros((N, S)) # base case beta[N-1, :] = 1 # recursive case -- YOUR CODE GOES HERE! return (beta, np.sum(pi * B[observations[0], :] * beta[0,:])) backward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])
handsOn_lecture18_gmm-hmm/handsOn_lecture18_gmm-hmm.ipynb
eecs445-f16/umich-eecs445-f16
mit
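A complete toy version of both recursions is sketched below on a hypothetical two-state HMM (note: `B` is indexed here as `B[state][symbol]`, unlike the `B[observation, state]` convention of the lecture's arrays). The property the exercise asks you to check falls out directly: the forward and backward passes give the same marginal likelihood:

```python
# two hidden states, two observation symbols; all numbers here are illustrative
pi = [0.6, 0.4]                      # p(z1)
A  = [[0.7, 0.3], [0.4, 0.6]]        # A[i][j] = p(z_{t+1}=j | z_t=i)
B  = [[0.9, 0.1], [0.2, 0.8]]        # B[i][k] = p(x_t=k | z_t=i)

def forward(obs):
    alpha = [[pi[s] * B[s][obs[0]] for s in range(2)]]   # base case
    for x in obs[1:]:
        prev = alpha[-1]
        alpha.append([B[s][x] * sum(prev[r] * A[r][s] for r in range(2))
                      for s in range(2)])
    return alpha, sum(alpha[-1])

def backward(obs):
    beta = [[1.0, 1.0]]                                  # base case at time T
    for x in reversed(obs[1:]):
        nxt = beta[0]
        beta.insert(0, [sum(A[s][r] * B[r][x] * nxt[r] for r in range(2))
                        for s in range(2)])
    return beta, sum(pi[s] * B[s][obs[0]] * beta[0][s] for s in range(2))

obs = [0, 1, 0, 0]
_, p_fwd = forward(obs)
_, p_bwd = backward(obs)
print(p_fwd, p_bwd)   # both equal p(x_{1:T}); they must match
```

If your two marginals disagree, the usual culprit is mixing up the direction of the transition matrix `A` in one of the recursions.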
Model Averaging <table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/addons/tutorials/average_optimizers_callback"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">Download notebook</a></td>
</table> Overview This notebook demonstrates how to use the moving average optimizer and the model average checkpoint from the TensorFlow Addons package. Moving Averaging The advantage of a moving average is that it is less prone to large loss swings or irregular data representation from the latest batch. Up to a given point in training, it gives a smoother, more general picture of the model. Stochastic Averaging Stochastic weight averaging converges to wider optima; in doing so it resembles a geometric ensemble. SWA is a simple way to improve model performance when used as a wrapper around another optimizer, averaging results from different points along the inner optimizer's trajectory. Model Average Checkpoint callbacks.ModelCheckpoint does not let you save moving-average weights in the middle of training, which is why the model averaging optimizers require a custom callback. With the update_weights parameter, ModelAverageCheckpoint lets you: Assign the moving-average weights to the model, then save them. Keep the old non-averaged weights, while the saved model uses the averaged weights. Setup
!pip install -U tensorflow-addons import tensorflow as tf import tensorflow_addons as tfa import numpy as np import os
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
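The "moving average of weights" being discussed is a per-parameter exponential moving average; a stripped-down sketch (plain floats, an arbitrary decay of 0.9) shows the bookkeeping that tfa.optimizers.MovingAverage handles for you:

```python
decay = 0.9          # how much of the old average to keep each step

def ema_update(avg, new, decay=decay):
    # shadow = decay * shadow + (1 - decay) * new_weight
    return decay * avg + (1 - decay) * new

# pretend one weight bounces around 1.0 during training
trajectory = [1.3, 0.6, 1.4, 0.7, 1.2, 0.8, 1.1, 0.9]
shadow = trajectory[0]
for w in trajectory[1:]:
    shadow = ema_update(shadow, w)

print(shadow)   # closer to 1.0 than the noisiest raw values
```

The "shadow" value deviates from 1.0 far less than individual raw weights do, which is exactly why evaluating with averaged weights is often more stable.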
Build the Model
def create_model(opt): model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare the Dataset
#Load Fashion MNIST dataset train, test = tf.keras.datasets.fashion_mnist.load_data() images, labels = train images = images/255.0 labels = labels.astype(np.int32) fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels)) fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32) test_images, test_labels = test
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Here we compare three optimizers: plain (unwrapped) SGD, SGD with a moving average, and SGD with stochastic weight averaging, and look at how they perform on the same model.
#Optimizers sgd = tf.keras.optimizers.SGD(0.01) moving_avg_sgd = tfa.optimizers.MovingAverage(sgd) stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Both the MovingAverage and StochasticAverage optimizers use ModelAverageCheckpoint.
#Callback checkpoint_path = "./training/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir, save_weights_only=True, verbose=1) avg_callback = tfa.callbacks.AverageModelCheckpoint(filepath=checkpoint_dir, update_weights=True)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Train the Model
Vanilla SGD Optimizer
#Build Model
model = create_model(sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Moving Average SGD
#Build Model
model = create_model(moving_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Stochastic Weight Average SGD
#Build Model
model = create_model(stocastic_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Make predictions from the new data In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook. The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.
# TODO 2a. Write a function to take most recent entry in `traffic_realtime` # table and add it to instance. def add_traffic_last_5min(instance): bq = bigquery.Client() query_string = """ SELECT * FROM `taxifare.traffic_realtime` ORDER BY time DESC LIMIT 1 """ trips = bq.query(query_string).to_dataframe()["trips_last_5min"][0] instance["traffic_last_5min"] = int(trips) return instance
notebooks/building_production_ml_systems/solutions/4b_streaming_data_inference_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. Copy the ENDPOINT_RESOURCENAME from the deployment in the previous lab to the beginning of the block below.
# TODO 2b. Write code to call prediction on instance using realtime traffic # info. Hint: Look at this sample # https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py # TODO: Copy the `ENDPOINT_RESOURCENAME` from the deployment in the previous # lab. ENDPOINT_RESOURCENAME = "" api_endpoint = f"{REGION}-aiplatform.googleapis.com" # The AI Platform services require regional API endpoints. client_options = {"api_endpoint": api_endpoint} # Initialize client that will be used to create and send requests. # This client only needs to be created once, and can be reused for multiple # requests. client = aiplatform.gapic.PredictionServiceClient(client_options=client_options) instance = { "dayofweek": 4, "hourofday": 13, "pickup_longitude": -73.99, "pickup_latitude": 40.758, "dropoff_latitude": 41.742, "dropoff_longitude": -73.07, } # The format of each instance should conform to the deployed model's # prediction input schema. instance_dict = add_traffic_last_5min(instance) instance = json_format.ParseDict(instance, Value()) instances = [instance] response = client.predict(endpoint=ENDPOINT_RESOURCENAME, instances=instances) # The predictions are a google.protobuf.Value representation of the model's # predictions. print(" prediction:", response.predictions[0][0])
notebooks/building_production_ml_systems/solutions/4b_streaming_data_inference_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Ad hoc Polymorphism and Object tables Ad hoc polymorphism is the notion that different functions are called to accomplish the same task for arguments of different types. This is what enables the Python data model with its dunder methods. If you call len(arg) or iter(arg), we delegate to arg's __len__ or __iter__ by looking them up in the table (class) corresponding to arg. The net effect is that you get different behaviors for different objects. You are not looking up a table for the operation but instead looking up a table for the object. You can think of this as single dispatch: the len is dispatched based on the type of the argument by looking up a table for the argument. Duck Typing We group together the notion that an object responds to such "messages" into a protocol. An example is the informal notion that something is a sequence. This is Duck Typing. Alex Martelli, the coiner of the phrase Duck Typing, says: In Python, this mostly boils down to avoiding the use of isinstance to check the object’s type (not to mention the even worse approach of checking, for example, whether type(foo) is bar—which is rightly anathema as it inhibits even the simplest forms of inheritance!). Tables for dispatching on functions You can also dispatch a function based on its argument, with no lookup in that argument's table, but rather in a table that is associated with the function. This is also single dispatch, but from a different table. The standard library provides functools.singledispatch (since Python 3.4) for this, and you can also write it on your own by associating a dictionary with multiple types. See Chapter 7 (Example 7-20 and Example 7-21) in Fluent Python. Parametric Polymorphism Write functions (or types) that are generic "over" other types. This means, for example, a stack that can take either an int or a float or an animal. Notice that this is generally true in a dynamic language such as Python, where objects are allocated on the heap and it's the references or labels or ids that are pushed onto the stack.
In C++ this can be done using templates at compile time to optimize the allocation of space. Subtype Polymorphism This refers to the polymorphism that we encounter in situations where our language provides subclassing. In a language such as C++, this refers to the notion that a dog and a cat can make sounds through an animal pointer. In Python one can use duck typing or inheritance. So subtype polymorphism is then just ad-hoc polymorphism plus an augmented lookup in the inheritance hierarchy. Object Tables Again What's this table we keep talking about? We hinted at it earlier when we did:
mydeck.__class__.__dict__
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
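The function-table style of single dispatch mentioned above is available in the standard library as functools.singledispatch (Python 3.4+); the dispatch table lives with the function rather than with the argument's class:

```python
from functools import singledispatch

@singledispatch
def describe(obj):
    # fallback for types with no registered implementation
    return "some object"

@describe.register(int)
def _(obj):
    return "an int: {}".format(obj)

@describe.register(list)
def _(obj):
    return "a list of length {}".format(len(obj))

print(describe(42))        # dispatch on type(42) -> the int handler
print(describe([1, 2]))    # -> the list handler
print(describe(3.14))      # no float handler -> the fallback
```

Contrast this with the dunder-method style: here the per-type behavior is registered on `describe` itself, not looked up on the argument's class.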
What if we don't find a method in the table? Either this is a runtime error, or we search in the "parent" classes of this class. We can see all such attributes by using dir:
dir(mydeck)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
This works because it gets sent up:
hash(mydeck)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
You can see what's upward of the French Deck by inspecting the Method Resolution Order using the mro method.
FrenchDeck.mro()
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Data Structures Computer programs don't only perform calculations; they also store and retrieve information Data structures and the algorithms that operate on them is at the core of computer science Data structures are quite general Any data representation and associated operations e.g. integers, floats, arrays, classes, ... Need to develop a "toolkit" of data structures and know when/how to use the right one for a given problem Changing a data structure in a slow program can work the same way an organ transplant does in a sick patient. Important classes of abstract data types such as containers, dictionaries, and priority queues, have many different but functionally equivalent data structures that implement them. Changing the data structure does not change the correctness of the program, since we presumably replace a correct implementation with a different correct implementation. However, the new implementation of the data type realizes different tradeoffs in the time to execute various operations, so the total performance can improve dramatically. Like a patient in need of a transplant, only one part might need to be replaced in order to fix the problem. -Steven S Skiena. The Algorithm Design Manual We'll tour some data structures in Python. First up: sequences. Common data structures Lists Stacks/queues Hashes Heaps Trees We'll focus on lists today. Sequences and their Abstractions What is a sequence? Consider the notion of Abstract Data Types. The idea there is that one data type might be implemented in terms of another, or some underlying code, not even in python. As long as the interface and contract presented to the user is solid, we can change the implementation below. The dunder methods in Python are used towards this purpose. In Python a sequence is something that follows the "sequence protocol". An example of this is a Python list. This entails defining the __len__ and __getitem__ methods, as we mentioned in previous lectures. Example
alist = [1, 2, 3, 4]

len(alist)  # calls alist.__len__()

alist[2]  # calls alist.__getitem__(2)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Lists also support slicing
alist[2:4]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
How does this work? We will create a dummy sequence, which does not create any storage. It just implements the protocol.
class DummySeq:
    def __len__(self):
        return 42

    def __getitem__(self, index):
        return index

d = DummySeq()
len(d)
d[5]
d[67:98]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
The "slice object"

Slicing creates a slice object for us of the form slice(start, stop, step), and then Python calls seq.__getitem__(slice(start, stop, step)). Two-dimensional slicing is also possible.
d[67:98:2, 1]

d[67:98:2, 1:10]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
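To make the protocol concrete, here is a short sketch (the values are my own illustrations, not from the lecture) of what Python actually passes to `__getitem__` for slice syntax:

```python
# Slice objects carry start, stop, and step attributes.
s = slice(2, 10, 3)
print(s.start, s.stop, s.step)  # 2 10 3

# slice.indices(length) clips the slice to a sequence of that length,
# returning concrete (start, stop, step) values.
print(s.indices(5))  # (2, 5, 3)

# Two-dimensional syntax like d[67:98:2, 1] passes a *tuple* containing
# a slice and an integer to __getitem__.
index = (slice(67, 98, 2), 1)
print(index)
```

This is why the DummySeq example above echoes back a slice object (or a tuple of them) unchanged: the object itself is the index argument.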
Example
# Adapted from Example 10-6 from Fluent Python
import numbers
import reprlib  # like repr, but with limits on sizes of returned strings


class NewSeq:
    def __init__(self, iterator):
        self._storage = list(iterator)

    def __repr__(self):
        components = reprlib.repr(self._storage)
        components = components[components.find('['):]
        return 'NewSeq({})'.format(components)

    def __len__(self):
        return len(self._storage)

    def __getitem__(self, index):
        cls = type(self)
        if isinstance(index, slice):
            return cls(self._storage[index])
        elif isinstance(index, numbers.Integral):
            return self._storage[index]
        else:
            msg = '{cls.__name__} indices must be integers'
            raise TypeError(msg.format(cls=cls))


d2 = NewSeq(range(10))
len(d2)
repr(d2)
d2
d2[4]
d2[2:4]
d2[1, 4]  # raises TypeError: NewSeq indices must be integers
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Linked Lists

Remember, a name in Python points to its value. We've seen lists whose last element is actually a pointer to another list. This leads to the idea of a linked list, which we'll use to illustrate sequences.

Nested Pairs

Stanford CS61A: Nested Pairs; this is the box-and-pointer notation. In Python:
pair = (1,2)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
This representation lacks a certain power. A few generalizations:

* pair = (1, (2, None))
* linked_list = (1, (2, (3, (4, None))))

The second example leads to something like: Recursive Lists. Here's what things look like in PythonTutor: PythonTutor Example.

Quick Linked List implementation
empty_ll = None

def make_ll(first, rest):
    # Make a linked list
    return (first, rest)

def first(ll):
    # Get the first entry of a linked list
    return ll[0]

def rest(ll):
    # Get the second entry of a linked list
    return ll[1]

ll_1 = make_ll(1, make_ll(2, make_ll(3, empty_ll)))  # Recursively generate a linked list
my_ll = make_ll(10, ll_1)  # Make another one

my_ll
print(first(my_ll), " ", rest(my_ll), " ", first(rest(my_ll)))
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
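To go one step further, here is a self-contained sketch (my own addition, not from the lecture) of recursive and iterative helpers over the same tuple-based representation:

```python
empty_ll = None

def make_ll(first_val, rest_ll):
    # Same constructor as above: a linked list is a (first, rest) pair
    return (first_val, rest_ll)

def ll_len(ll):
    # Length: 0 for the empty list, else 1 + length of the rest
    if ll is empty_ll:
        return 0
    return 1 + ll_len(ll[1])

def ll_to_list(ll):
    # Convert the linked list to a regular Python list
    out = []
    while ll is not empty_ll:
        head, ll = ll
        out.append(head)
    return out

my_ll = make_ll(10, make_ll(1, make_ll(2, make_ll(3, empty_ll))))
print(ll_len(my_ll))      # 4
print(ll_to_list(my_ll))  # [10, 1, 2, 3]
```

The recursive shape of `ll_len` mirrors the recursive shape of the data itself, which is the usual pattern for operations on linked lists.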
2. Subset and calculate

After we've extracted values from a list, we can use them to perform additional calculations. Concatenation of list elements can also be performed.
"""
Instructions:
+ Using a combination of list subsetting and variable assignment, create a
  new variable, eat_sleep_area, that contains the sum of the area of the
  kitchen and the area of the bedroom.
+ Print this new variable "eat_sleep_area".
"""

# Create the areas list
areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0,
         "bedroom", 10.75, "bathroom", 9.50]

# Sum of kitchen and bedroom area: eat_sleep_area
eat_sleep_area = areas[3] + areas[7]

# Print the variable eat_sleep_area
print(eat_sleep_area)
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
3. Slicing and dicing

Slicing means selecting multiple elements from a list. It's like splitting a list into sub-lists.
"""
Instructions:
+ Use slicing to create a list, "downstairs", that contains the first 6
  elements of "areas".
+ Do a similar thing to create a new variable, "upstairs", that contains
  the last 4 elements of areas.
+ Print both "downstairs" and "upstairs" using print().
"""

# Create the areas list
areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0,
         "bedroom", 10.75, "bathroom", 9.50]

# Use slicing to create downstairs (the first 6 elements)
downstairs = areas[0:6]

# Use slicing to create upstairs (the last 4 elements)
upstairs = areas[6:]

# Print out downstairs and upstairs
print(downstairs)
print(upstairs)
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
4. Slicing and dicing (2)

It is also possible to slice without explicitly defining the starting index.

Syntax: list1[ : <end> ]

Similarly, it's possible to slice without explicitly defining the ending index.

Syntax: list2[ <begin> : ]

It's also possible to select the entire list without defining either the "begin" or "end" index.

Syntax: list3[ : ]
"""
Instructions:
+ Use slicing to create the lists "downstairs" and "upstairs" again,
  without using explicit indexes unless necessary.
"""

# Create the areas list
areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0,
         "bedroom", 10.75, "bathroom", 9.50]

# Alternative slicing to create downstairs
downstairs = areas[:6]

# Alternative slicing to create upstairs
upstairs = areas[6:]
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
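The defaults can be seen with a short example (the values below are my own illustration, not part of the exercise):

```python
nums = [0, 1, 2, 3, 4]

print(nums[:3])  # [0, 1, 2] -- an omitted start defaults to 0
print(nums[2:])  # [2, 3, 4] -- an omitted end defaults to len(nums)
print(nums[:])   # [0, 1, 2, 3, 4] -- a shallow copy of the whole list

# The full slice makes a new list, so mutating the copy leaves the
# original unchanged.
copy = nums[:]
copy[0] = 99
print(nums[0])   # 0
```

The `nums[:]` idiom is a common way to copy a list before modifying it.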
5. Subsetting lists of lists

We can subset "lists of lists".

Syntax: list[ [sub-list-0], [sub-list-1], ..., [sub-list-(n-1)] ]

We can also perform both "indexing" and "slicing" on a list of lists.

Syntax: list[ <sub-list-index> ][ <begin> : <end> ]
"""
Problem definition:
What will house[-1][1] return?
"""

# Ans: a float, 9.5, the bathroom area.
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
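The question assumes a "house" list of lists. Here is a sketch of what that list presumably looks like, built from the same rooms and areas used in the earlier exercises:

```python
# Each sub-list pairs a room name with its area.
house = [["hallway", 11.25],
         ["kitchen", 18.0],
         ["living room", 20.0],
         ["bedroom", 10.75],
         ["bathroom", 9.50]]

print(house[-1][1])  # 9.5: index -1 selects the last sub-list
                     # (bathroom), index 1 selects its area
print(house[0][0])   # 'hallway'
print(house[1:3])    # the kitchen and living room sub-lists
```

The first bracket indexes into the outer list; the second bracket indexes into the selected sub-list.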
Computing source timecourses with an XFit-like multi-dipole model

MEGIN's XFit program offers a "guided ECD modeling" interface, where multiple dipoles can be fitted interactively. By manually selecting subsets of sensors and time ranges, dipoles can be fitted to specific signal components. Then, source timecourses can be computed using a multi-dipole model.

The advantage of using a multi-dipole model over fitting each dipole in isolation is that when multiple dipoles contribute to the same signal component, the model can make sure that activity assigned to one dipole is not also assigned to another. This example shows how to build a multi-dipole model for estimating source timecourses for evokeds or single epochs.

The XFit program is the recommended approach for guided ECD modeling, because it offers a convenient graphical user interface. Dipoles fitted in XFit can then be imported into MNE-Python by using the :func:`mne.read_dipole` function for building and applying the multi-dipole model. In addition, this example will also demonstrate how to perform guided ECD modeling using only MNE-Python functionality, which is less convenient than using XFit, but has the benefit of being reproducible.
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD-3-Clause
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Importing everything and setting up the data paths for the MNE-Sample dataset.
import mne
from mne.datasets import sample
from mne.channels import read_vectorview_selection
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
                              apply_inverse_epochs)
import matplotlib.pyplot as plt
import numpy as np

data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
cov_fname = meg_path / 'sample_audvis-shrunk-cov.fif'
bem_dir = data_path / 'subjects' / 'sample' / 'bem'
bem_fname = bem_dir / 'sample-5120-5120-5120-bem-sol.fif'
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read the MEG data from the audvis experiment. Make epochs and evokeds for the left and right auditory conditions.
raw = mne.io.read_raw_fif(raw_fname)
raw = raw.pick_types(meg=True, eog=True, stim=True)
info = raw.info

# Create epochs for auditory events
events = mne.find_events(raw)
event_id = dict(right=1, left=2)
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.1, tmax=0.3, baseline=(None, 0),
                    reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6))

# Create evokeds for left and right auditory stimulation
evoked_left = epochs['left'].average()
evoked_right = epochs['right'].average()
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Guided dipole modeling, meaning fitting dipoles to a manually selected subset of sensors at a manually chosen time, can now be performed in MEGIN's XFit on the evokeds we computed above. However, it is also possible to do it completely in MNE-Python.
# Setup conductor model
cov = mne.read_cov(cov_fname)
bem = mne.read_bem_solution(bem_fname)

# Fit two dipoles at t=80ms. The first dipole is fitted using only the
# sensors on the left side of the helmet. The second dipole is fitted
# using only the sensors on the right side of the helmet.
picks_left = read_vectorview_selection('Left', info=info)
evoked_fit_left = evoked_left.copy().crop(0.08, 0.08)
evoked_fit_left.pick_channels(picks_left)
cov_fit_left = cov.copy().pick_channels(picks_left)

picks_right = read_vectorview_selection('Right', info=info)
evoked_fit_right = evoked_right.copy().crop(0.08, 0.08)
evoked_fit_right.pick_channels(picks_right)
cov_fit_right = cov.copy().pick_channels(picks_right)

# Any SSS projections that are active on this data need to be
# re-normalized after picking channels.
evoked_fit_left.info.normalize_proj()
evoked_fit_right.info.normalize_proj()
cov_fit_left['projs'] = evoked_fit_left.info['projs']
cov_fit_right['projs'] = evoked_fit_right.info['projs']

# Fit the dipoles with the subset of sensors.
dip_left, _ = mne.fit_dipole(evoked_fit_left, cov_fit_left, bem)
dip_right, _ = mne.fit_dipole(evoked_fit_right, cov_fit_right, bem)
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now that we have the locations and orientations of the dipoles, compute the full timecourses using MNE, assigning activity to both dipoles at the same time while preventing leakage between the two. We use a very low lambda value to ensure both dipoles are fully used.
fwd, _ = mne.make_forward_dipole([dip_left, dip_right], bem, info)

# Apply MNE inverse
inv = make_inverse_operator(info, fwd, cov, fixed=True, depth=0)
stc_left = apply_inverse(evoked_left, inv, method='MNE', lambda2=1E-6)
stc_right = apply_inverse(evoked_right, inv, method='MNE', lambda2=1E-6)

# Plot the timecourses of the resulting source estimate
fig, axes = plt.subplots(nrows=2, sharex=True, sharey=True)
axes[0].plot(stc_left.times, stc_left.data.T)
axes[0].set_title('Left auditory stimulation')
axes[0].legend(['Dipole 1', 'Dipole 2'])
axes[1].plot(stc_right.times, stc_right.data.T)
axes[1].set_title('Right auditory stimulation')
axes[1].set_xlabel('Time (s)')
fig.supylabel('Dipole amplitude')
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
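The benefit of the joint fit can be seen in a toy numpy sketch (my own illustration, not MNE code): with a known gain matrix G mapping two dipole amplitudes to sensor readings, a lightly regularized least-squares fit recovers both amplitudes jointly, so signal shared between sensors is apportioned between the dipoles instead of being assigned twice.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((20, 2))        # 20 sensors, 2 dipoles (toy values)
true_amps = np.array([3.0, -1.5])       # assumed "dipole" amplitudes
data = G @ true_amps + 0.01 * rng.standard_normal(20)

# Regularized least squares: (G'G + lam*I) est = G' data.
# A very small lambda, analogous to the low lambda2 used above, leaves
# the solution essentially unregularized.
lam = 1e-6
est = np.linalg.solve(G.T @ G + lam * np.eye(2), G.T @ data)
print(est)  # close to [3.0, -1.5]
```

The actual MNE inverse additionally whitens with the noise covariance and handles dipole orientations, but the joint-estimation idea is the same.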
We can also fit the timecourses to single epochs. Here, we do it for each experimental condition separately.
stcs_left = apply_inverse_epochs(epochs['left'], inv, lambda2=1E-6,
                                 method='MNE')
stcs_right = apply_inverse_epochs(epochs['right'], inv, lambda2=1E-6,
                                  method='MNE')
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To summarize and visualize the single-epoch dipole amplitudes, we will create a detailed plot of the mean amplitude of the dipoles during different experimental conditions.
# Summarize the single epoch timecourses by computing the mean amplitude
# from 60-90ms.
amplitudes_left = []
amplitudes_right = []
for stc in stcs_left:
    amplitudes_left.append(stc.crop(0.06, 0.09).mean().data)
for stc in stcs_right:
    amplitudes_right.append(stc.crop(0.06, 0.09).mean().data)
amplitudes = np.vstack([amplitudes_left, amplitudes_right])

# Visualize the epoch-by-epoch dipole amplitudes in a detailed figure.
n = len(amplitudes)
n_left = len(amplitudes_left)

mean_left = np.mean(amplitudes_left, axis=0)
mean_right = np.mean(amplitudes_right, axis=0)

fig, ax = plt.subplots(figsize=(8, 4))
ax.scatter(np.arange(n), amplitudes[:, 0], label='Dipole 1')
ax.scatter(np.arange(n), amplitudes[:, 1], label='Dipole 2')

transition_point = n_left - 0.5
ax.plot([0, transition_point], [mean_left[0], mean_left[0]], color='C0')
ax.plot([0, transition_point], [mean_left[1], mean_left[1]], color='C1')
ax.plot([transition_point, n], [mean_right[0], mean_right[0]], color='C0')
ax.plot([transition_point, n], [mean_right[1], mean_right[1]], color='C1')
ax.axvline(transition_point, color='black')

ax.set_xlabel('Epochs')
ax.set_ylabel('Dipole amplitude')
ax.legend()

fig.suptitle('Single epoch dipole amplitudes')
fig.text(0.30, 0.9, 'Left auditory stimulation', ha='center')
fig.text(0.70, 0.9, 'Right auditory stimulation', ha='center')
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause