Use `?` for quick help:

```python
np.linspace?
```
notebooks/11-older-stuff.ipynb
jbwhit/jupyter-best-practices
mit
Use `??` to see the source as well (in Lab the output pane can scroll if you click it):

```python
np.linspace??
```
Inspect everything
```python
import numpy as np

def silly_absolute_value_function(xval):
    """Takes a value and returns its absolute value."""
    xval_sq = xval ** 2.0
    xval_abs = np.sqrt(xval_sq)
    return xval_abs

silly_absolute_value_function(2)
silly_absolute_value_function?
silly_absolute_value_function??
```
Keyboard shortcuts. For help, press ESC then h (the h shortcut doesn't work in Lab). Press l (or Shift-L) to toggle line numbers.
```python
# in select mode, shift-j / shift-k select multiple cells at once
# split a cell with ctrl-shift-minus
first = 1
second = 2
third = 3

first = 1
second = 2
third = 3
# a: new cell above
# b: new cell below
```
Headings and LaTeX. With text and $\LaTeX$ support.

$$\begin{align} B'&=-\nabla \times E,\\ E'&=\nabla \times B - 4\pi j \end{align}$$
```latex
%%latex
If you want to get crazier...
\begin{equation}
\oint_S {E_n dA = \frac{1}{{\varepsilon _0 }}} Q_\textrm{inside}
\end{equation}
```
More markdown
```python
# Indent:  Cmd + ]
# Dedent:  Cmd + [
# Comment: Cmd + /
```
You can also get monospaced fonts by indenting 4 spaces:

    mkdir toc
    cd toc

Wrap with triple-backticks and a language name for syntax highlighting:

```bash
mkdir toc
cd toc
wget https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
```

```sql
SELECT * FROM tablename
```
# note difference w/ lab
```sql
SELECT first_name, last_name, year_of_birth
FROM presidents
WHERE year_of_birth > 1800;
```
```bash
%%bash
pwd
for i in *.ipynb
do
    echo ${i} | awk -F . '{print $1}'
done
echo
echo "break"
echo
for i in *.ipynb
do
    echo $i | awk -F - '{print $2}'
done
```
Other cell-magics
```python
%%writefile ../scripts/temp.py
from __future__ import absolute_import, division, print_function
```

I promise that I'm not cheating!

```python
!cat ../scripts/temp.py
```
Autoreload is cool -- don't have time to give it the attention that it deserves. https://gist.github.com/jbwhit/38c1035c48cdb1714fc8d47fa163bfae
```python
%load_ext autoreload
%autoreload 2

example_dict = {}
# Indent/dedent/comment
for _ in range(5):
    example_dict["one"] = 1
    example_dict["two"] = 2
    example_dict["three"] = 3
    example_dict["four"] = 4
```
Multicursor magic: hold down Option, then click and drag.
```python
example_dict["one_better_name"] = 1
example_dict["two_better_name"] = 2
example_dict["three_better_name"] = 3
example_dict["four_better_name"] = 4
```
Find and replace -- regex, notebook-wide (or per cell). R integration options: pyRserve, rpy2.
```python
import numpy as np

!conda install -c r rpy2 -y
import rpy2
%load_ext rpy2.ipython

X = np.array([0, 1, 2, 3, 4])
Y = np.array([3, 5, 4, 6, 7])

%%R?
```

```r
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
```

```python
type(XYcoef)
XYcoef**2
thing()
```
It can be hard to guess which code will run faster just by looking at it, because the interactions between software and hardware can be extremely complex. The best way to optimize code is to use profilers to identify bottlenecks, and then attempt to address those bottlenecks through optimization.
```python
string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ', 'over ', 'the ', 'lazy ', 'dog']
```

```python
%%timeit
output = ""
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
output = ""
# complete
```
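The `%%timeit` cells above are left as an exercise; a minimal sketch of the three approaches presumably being compared (repeated `+=` concatenation, `str.join`, and accumulating in a list before joining) might look like:

```python
string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ', 'over ', 'the ', 'lazy ', 'dog']

# approach 1: repeated += concatenation (can be quadratic in total length)
output = ""
for word in string_list:
    output += word

# approach 2: str.join -- a single pass and a single allocation
joined = "".join(string_list)

# approach 3: accumulate in a list, then join once at the end
parts = []
for word in string_list:
    parts.append(word)
listed = "".join(parts)

assert output == joined == listed
```

In CPython, `str.join` avoids the repeated reallocation that `+=` can incur, which is consistent with the factor-of-four result reported below.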
Sessions/Session03/Day4/Profiling.ipynb
LSSTC-DSFP/LSSTC-DSFP-Sessions
mit
Interesting! So it appears that the join method was the fastest by a factor of four or so. Good to keep that in mind for future use of strings! Problem 1b What about building big lists or list-like structures (like numpy arrays)? We now know how to construct lists in a variety of ways, so let's see which is fastest....
```python
%%timeit
output = []
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
# complete
```

```python
%%timeit
map(lambda x: # complete
```
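A hedged sketch of the list-building variants the exercise asks you to time (the exact target expression is left open in the original, so an identity mapping is used here purely for illustration):

```python
import numpy as np

N = 10_000

# append in a loop
out_loop = []
for i in range(N):
    out_loop.append(i)

# list comprehension
out_comp = [i for i in range(N)]

# list(map(...)) with a lambda
out_map = list(map(lambda x: x, range(N)))

# numpy array construction
out_np = np.arange(N)

# all four produce the same sequence of values
assert out_loop == out_comp == out_map == list(out_np)
```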
Maximum Inner Product. Matrix factorization is a potent technique for solving the collaborative filtering problem. It mainly involves building up the user-item interaction matrix, then decomposing it into a user latent factor (a.k.a. embedding) and an item latent factor, each with some user-specified dimension (a hyperparameter)...
```python
file_dir = 'ml-100k'
file_path = os.path.join(file_dir, 'u.data')
if not os.path.isdir(file_dir):
    call(['curl', '-O', 'http://files.grouplens.org/datasets/movielens/' + file_dir + '.zip'])
    call(['unzip', file_dir + '.zip'])

names = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv(file_path, sep='...
```
recsys/max_inner_product/max_inner_product.ipynb
ethen8181/machine-learning
mit
For the train/test split, the process is to split each user's behavior in chronological order. E.g. if a user interacted with 10 items and we specify a test size of 0.2, then the first 8 items the user interacted with fall in the training set, and the last 2 items belong to the test se...
```python
def train_test_user_time_split(df: pd.DataFrame, test_size: float = 0.2):
    train_size = 1 - test_size
    df_train_user = []
    df_test_user = []
    df_grouped = df.sort_values(time_col).groupby(users_col)
    for name, df_group in df_grouped:
        n_train = int(df_group.shape[0] * train_size)
        df_group_t...
```
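The cell above is truncated; a self-contained sketch of the same per-user chronological split (the column names `user_id` and `timestamp` are assumed here for illustration, standing in for `users_col` and `time_col`) could be:

```python
import pandas as pd

def train_test_user_time_split_sketch(df, users_col='user_id',
                                       time_col='timestamp', test_size=0.2):
    """Chronological per-user split: each user's earliest interactions go to
    train, the most recent `test_size` fraction to test."""
    train_size = 1 - test_size
    train_parts, test_parts = [], []
    for _, df_group in df.sort_values(time_col).groupby(users_col):
        n_train = int(df_group.shape[0] * train_size)
        train_parts.append(df_group.iloc[:n_train])
        test_parts.append(df_group.iloc[n_train:])
    return pd.concat(train_parts), pd.concat(test_parts)
```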
The model we'll be using is Bayesian Personalized Ranking from the implicit library.
```python
n_users = df[users_col].cat.categories.shape[0]
n_items = df[items_col].cat.categories.shape[0]

# implicit library expects items to be rows
# and users to be columns of the sparse matrix
rows = df_train[items_col].cat.codes.values
cols = df_train[users_col].cat.codes.values
values = df_train[value_col].astype(np.float...
```
The model object also provides a .recommend method that generates the recommendation for a user.
```python
user_id = 0
topn = 5
user_item = item_user.T.tocsr()
recommendations = bpr.recommend(user_id, user_item, topn, filter_already_liked_items=False)
recommendations
```
We can also generate the recommendations ourselves. We'll first confirm that the recommend function we've implemented matches the one provided by the library, and also implement a recommend_all function that generates the recommendations for all users; this will be used to compare against the nearest neighbor se...
```python
def recommend(query_factors, index_factors, query_id, topn=5):
    output = query_factors[query_id].dot(index_factors.T)
    argpartition_indices = np.argpartition(output, -topn)[-topn:]
    sort_indices = np.argsort(output[argpartition_indices])[::-1]
    labels = argpartition_indices[sort_indices]
    distances = out...
```
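Since the cell above is cut off, here is a hedged, self-contained sketch of the same top-n retrieval via `np.argpartition`, which selects the n largest scores without fully sorting all of them:

```python
import numpy as np

def recommend_sketch(query_factors, index_factors, query_id, topn=5):
    """Top-n items by inner product, sorting only the topn candidates."""
    scores = query_factors[query_id].dot(index_factors.T)
    # indices of the topn largest scores, in arbitrary order
    part = np.argpartition(scores, -topn)[-topn:]
    # sort just those topn candidates by descending score
    order = np.argsort(scores[part])[::-1]
    labels = part[order]
    distances = scores[labels]
    return labels, distances
```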
Different models/libraries have different ways of extracting the item and user factors/embeddings; we assign them to index_factors and query_factors to make all downstream code agnostic of each library's implementation.
```python
index_factors = bpr.item_factors
query_factors = bpr.user_factors

labels, distances = recommend(query_factors, index_factors, user_id, topn)
print(labels)
print(distances)

def recommend_all(query_factors, index_factors, topn=5):
    output = query_factors.dot(index_factors.T)
    argpartition_indices = np.argpartitio...
```
Implementation To implement our order preserving transformation, we first apply the transformation on our index factors. Recall that the formula is: Let $\phi = \underset{i}{\text{max}} \Vert \mathbf{y}_i \Vert$. $\mathbf{y}_i^* = g(\mathbf{y}_i) = \big(\sqrt{\phi^2 - {\Vert \mathbf{y_i} \Vert}^2 }, \mathbf{y_i}^T\big)...
```python
def augment_inner_product(factors):
    normed_factors = np.linalg.norm(factors, axis=1)
    max_norm = normed_factors.max()
    extra_dim = np.sqrt(max_norm ** 2 - normed_factors ** 2).reshape(-1, 1)
    augmented_factors = np.append(factors, extra_dim, axis=1)
    return max_norm, augmented_factors

print('pre s...
```
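A quick sanity check of the order-preserving property, with randomly generated factors standing in for the model's embeddings: every augmented index vector ends up with the same norm $\phi$, and zero-padding the queries leaves all inner products unchanged, so maximum inner product search becomes equivalent to minimizing L2 distance over the augmented vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
factors = rng.normal(size=(50, 8))   # stand-in index factors
queries = rng.normal(size=(5, 8))    # stand-in query factors

# g(y): append sqrt(phi^2 - ||y||^2) as an extra dimension
norms = np.linalg.norm(factors, axis=1)
phi = norms.max()
extra = np.sqrt(phi ** 2 - norms ** 2).reshape(-1, 1)
aug_factors = np.append(factors, extra, axis=1)

# h(x): pad the queries with a zero in the same position
aug_queries = np.append(queries, np.zeros((queries.shape[0], 1)), axis=1)

# every augmented index vector now has norm phi ...
assert np.allclose(np.linalg.norm(aug_factors, axis=1), phi)
# ... and all inner products are unchanged by the transformation
assert np.allclose(aug_queries @ aug_factors.T, queries @ factors.T)
```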
Our next step is to use our favorite nearest neighbor search algorithm/library to conduct the search. We'll be leveraging hnswlib in this example; explaining the details behind this nearest neighbor search algorithm is beyond the scope of this document.
```python
def build_hnsw(factors, space, ef_construction, M):
    # Declaring index
    max_elements, dim = factors.shape
    # possible options for space are l2, cosine or ip
    hnsw = hnswlib.Index(space, dim)
    # Initing index - the maximum number of elements should be known beforehand
    hnsw.init_index(max_elements, M, ef_...
```
To generate the prediction, we first transform the incoming "queries": $\mathbf{x}^* = h(\mathbf{x}) = (0, \mathbf{x}^T)^T$.
```python
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
augmented_query_factors.shape

k = 5
# Controlling the recall by setting ef, which should always be > k
hnsw.set_ef(70)

# retrieve the top-n search neighbors
label, distance = hnsw.knn_query(augmented_q...
```
Benchmark. We can time the original recommend method using maximum inner product against the new method using the order-preserving transformed matrices with nearest neighbor search.
```python
%%timeit
recommend_all(query_factors, index_factors, topn=k)
```

```python
%%timeit
extra_zero = np.zeros((query_factors.shape[0], 1))
augmented_query_factors = np.append(query_factors, extra_zero, axis=1)
hnsw.knn_query(augmented_query_factors, k=k)
```
Note that the timing is highly dependent on the dataset. We'll observe a much larger speedup if the number of items/labels in the output/index factor is larger. In the movielens dataset, we only had to rank the top items for each user among 1.6K items, in a much larger dataset, the number of items could easily go up to...
```python
labels, distances = recommend_all(query_factors, index_factors, topn=k)
hnsw_labels, hnsw_distances = hnsw.knn_query(query_factors, k=k)

def compute_label_precision(optimal_labels, reco_labels):
    n_labels = len(optimal_labels)
    label_precision = 0.0
    for optimal_label, reco_label in zip(optimal_labels, reco_l...
```
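The `compute_label_precision` cell is truncated; a plausible self-contained sketch (measuring the average overlap between each row of exact top-n labels and the corresponding approximate row) is:

```python
def compute_label_precision_sketch(optimal_labels, reco_labels):
    """Average fraction of the exact top-n labels recovered by the
    approximate search, over all queries."""
    n_labels = len(optimal_labels)
    precision = 0.0
    for optimal, reco in zip(optimal_labels, reco_labels):
        overlap = len(set(optimal) & set(reco))
        precision += overlap / len(optimal)
    return precision / n_labels
```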
Are we underfitting? Our validation accuracy so far has generally been higher than our training accuracy. That leads to two obvious questions: How is this possible? Is this desirable? The answer to (1) is that this is happening because of dropout. Dropout refers to a layer that randomly deletes (i.e. sets to zero) ea...
??vgg_ft
deeplearning1/nbs/lesson3.ipynb
yingchi/fastai-notes
apache-2.0
```py
def vgg_ft(out_dim):
    vgg = Vgg16()
    vgg.ft(out_dim)
    model = vgg.model
    return model
```
??Vgg16.ft
```py
def ft(self, num):
    """
    Replace the last layer of the model with a Dense (fully connected) layer of num neurons.
    Will also lock the weights of all layers except the new layer so that we only learn
    weights for the last layer in subsequent training.

    Args:
        num (int): Number of neurons in the Dense lay...
```
model = vgg_ft(2)
We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer. Convolution layers take a lot of time to compute, but Dense layers do not. We'll start by finding this layer in our model, and creating a ne...
```python
layers = model.layers
# find the last convolution layer
last_conv_idx = [index for index, layer in enumerate(layers)
                 if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]

conv_layers = layers[:last_conv_idx + 1]
conv_model = Sequential(conv_layers)
# Dense layers - also known a...
```
For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
```python
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer):
    return [o / 2 for o in layer.get_weights()]

# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.000001, rho=0.7)

def get_fc_model():
    model = Sequential([
    ...
```
What we typically talk about are "$p$-values": the probability to observe data where the null-hypothesis is at least as disfavored as what we actually observed if the null hypothesis were true. To go from the $\chi^2$ distribution to the $p$-value we take the complement of the cumulative distribution (sometimes calle...
```python
pvalue_1dof = 1. - stats.chi2.cdf(x, 1)

fig2, ax2 = plt.subplots(1, 1)
ax2.set_xlabel('Test Statistic')
ax2.set_ylabel('p-value')
ax2.set_xlim(0., 25.)
curve = ax2.semilogy(x, pvalue_1dof, 'r-', lw=1, label='p-value')
```
examples/Fermi/FermiOverview.ipynb
enoordeh/StatisticalMethods
gpl-2.0
By way of comparison, here is what the $p$-value looks like for 1,2,3, and 4 degrees of freedom.
```python
pvalue_2dof = 1. - stats.chi2.cdf(x, 2)
pvalue_3dof = 1. - stats.chi2.cdf(x, 3)
pvalue_4dof = 1. - stats.chi2.cdf(x, 4)

fig3, ax3 = plt.subplots(1, 1)
ax3.set_xlabel('Test Statistic')
ax3.set_ylabel('p-value')
ax3.set_xlim(0., 25.)
ax3.semilogy(x, pvalue_1dof, 'r-', lw=1, label='1 DOF')
ax3.semilogy(x, pvalue_2dof, 'b-', lw...
```
Converting p-values to standard deviations. We often choose to report signal significance as the equivalent number of standard deviations ($\sigma$) away from the mean of a normal (Gaussian) distribution you would have to go to obtain a given $p$-value. Here are the $p$-values corresponding to 1, 2, 3, 4, 5 sigma:
```python
sigma_p = []
for i in range(1, 6):
    print("%i sigma = %.2e p-value" % (i, 2 * stats.norm.sf(i)))
    sigma_p.append(2 * stats.norm.sf(i))
```
Here is a plot showing how those p-values map onto values of the TS.
```python
fig6, ax6 = plt.subplots(1, 1)
ax6.set_xlabel('Test Statistic')
ax6.set_ylabel('p-value')
ax6.set_xlim(0., 25.)
ax6.semilogy(x, pvalue_1dof, 'r-', lw=1, label='1 DOF')
ax6.semilogy(x, pvalue_2dof, 'b-', lw=1, label='2 DOF')
ax6.semilogy(x, pvalue_3dof, 'g-', lw=1, label='3 DOF')
ax6.semilogy(x, pvalue_4dof, 'y-', lw=1, labe...
```
You will notice that for 1 DOF, the significance expressed in standard deviations is simply $\sigma = \sqrt{TS}$, i.e., the $\chi^2$ distribution is simply the positive half of the normal distribution for $\sqrt{TS}$.
```python
for i in range(1, 6):
    print("%i sigma = %.2e == %.2e" % (i, 2 * stats.norm.sf(i), stats.chi2.sf(i * i, 1)))
```
Examples of the $\chi^2$ distribution and p-values for Chernoff's theorem Chernoff's theorem applies when 1/2 of the trials are expected to give negative fluctuations where we expect the signal. Since we have bounded the parameter at zero, the likelihood will be maximized at zero, which is the same as the null-hypothe...
```python
pvalue_1dof_cher = 0.5 * (1. - stats.chi2.cdf(x, 1))

fig4, ax4 = plt.subplots(1, 1)
ax4.set_xlabel('Test Statistic')
ax4.set_ylabel('p-value')
ax4.set_xlim(0., 25.)
ax4.semilogy(x, pvalue_1dof, 'r-', lw=1, label='Unbounded')
ax4.semilogy(x, pvalue_1dof_cher, 'r--', lw=1, label='Bounded')
leg = ax4.legend()
```
Degrees of freedom vs. trials factor A very common mistake that people make is to confuse degrees of freedom with trials factors. As a concrete example, consider a search for a new point source, where we allow the position of the source to vary in our fitting procedure. In that case we would typically have (at least...
```python
pvalue_1dof_cher_3trial = 1. - (1 - 0.5 * (1. - stats.chi2.cdf(x, 1))) ** 3

fig5, ax5 = plt.subplots(1, 1)
ax5.set_xlabel('Test Statistic')
ax5.set_ylabel('p-value')
ax5.set_xlim(0., 25.)
ax5.semilogy(x, pvalue_1dof_cher, 'r--', lw=1, label='1Trial, 3DOF')
ax5.semilogy(x, pvalue_1dof_cher_3trial, 'r-.', lw=1, label='3Trials, 1...
```
Euler's method. Euler's method is the simplest numerical approach for solving a first-order ODE. Given the differential equation $$ \frac{dy}{dx} = f(y(x), x) $$ with the initial condition $$ y(x_0)=y_0, $$ Euler's method performs updates using the equations: $$ y_{n+1} = y_n + h f(y_n,x_n), \qquad h = x_{n+1} - x_n. $$
```python
def solve_euler(derivs, y0, x):
    """Solve a 1d ODE using Euler's method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y, x) where y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarray, list...
```
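For reference, a minimal self-contained sketch of the Euler update described above (not necessarily the assignment's exact solution):

```python
import numpy as np

def solve_euler_sketch(derivs, y0, x):
    """Euler's method: y[n+1] = y[n] + h * f(y[n], x[n])."""
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y
```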
assignments/assignment10/ODEsEx01.ipynb
sraejones/phys202-2015-work
mit
The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation: $$ y_{n+1} = y_n + h f\left(y_n+\frac{h}{2}f(y_n,x_n),x_n+\frac{h}{2}\right) $$ Write a function solve_midpoint that implements the midpoint met...
```python
def solve_midpoint(derivs, y0, x):
    """Solve a 1d ODE using the Midpoint method.

    Parameters
    ----------
    derivs : function
        The derivative of the diff-eq with the signature deriv(y, x) where y and x are floats.
    y0 : float
        The initial condition y[0] = y(x[0]).
    x : np.ndarr...
```
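Likewise, a minimal sketch of the midpoint update, which evaluates the slope at a half-step predicted point (again, not necessarily the assignment's exact solution):

```python
import numpy as np

def solve_midpoint_sketch(derivs, y0, x):
    """Midpoint method: slope evaluated at (y + h/2 * f, x + h/2)."""
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y_mid = y[n] + 0.5 * h * derivs(y[n], x[n])
        y[n + 1] = y[n] + h * derivs(y_mid, x[n] + 0.5 * h)
    return y
```

Being second-order, it should track the exact solution far more closely than Euler for the same step size.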
In the following cell you are going to solve the above ODE using four different algorithms:

- Euler's method
- Midpoint method
- odeint
- Exact

Here are the details: Generate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$). Define the derivs function for the above differential equation. Using the...
```python
# YOUR CODE HERE
x = np.linspace(0, 1.0, 11)
y = np.empty_like(x)
y0 = y[0]  # NB: this reads an uninitialized value; y0 should be set to the initial condition

def derivs(y, x):
    return x + 2 * y

plt.plot(solve_euler(derivs, y0, x), label='euler')
plt.plot(solve_midpoint(derivs, y0, x), label='midpoint')
plt.plot(solve_exact(x), label='exact')
plt.plot(odeint(derivs, y...
```
Expected output: test: Hello World

What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas

1 - Building basic functions with numpy

Numpy is the main package for sci...
```python
# GRADED FUNCTION: basic_sigmoid
import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1.0 / (1.0 + math.exp(-x))
    ### END CODE HERE ###
    return s

basic_sigm...
```
course-deeplearning.ai/course1-nn-and-deeplearning/Python+Basics+With+Numpy+v3.ipynb
liufuyang/deep_learning_tutorial
mit
Any time you need more info on a numpy function, we encourage you to look at the official documentation. You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation. Exercise: Implement the sigmoid function using numpy. Instructions: x could now be either a ...
```python
# GRADED FUNCTION: sigmoid
import numpy as np  # this means you can access numpy functions by writing np.function() instead of numpy.function()

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """
    ### START C...
```
Expected Output: sigmoid([1,2,3]) = array([0.73105858, 0.88079708, 0.95257413])

1.2 - Sigmoid gradient

As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first ...
```python
# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function
    with respect to its input x. You can store the output of the sigmoid function into
    variables and then use it to calculate the gradient.

    Arguments:...
```
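The graded cell is truncated; the identity it asks for is $s'(x) = s(x)\,(1 - s(x))$, which a minimal NumPy sketch (not the course's official solution) can implement as:

```python
import numpy as np

def sigmoid_derivative_sketch(x):
    """Gradient of the sigmoid: s'(x) = s(x) * (1 - s(x)).
    Works on scalars and numpy arrays alike."""
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1 - s)
```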
Basic vectorization Vectorizing text is a fundamental concept in applying both supervised and unsupervised learning to documents. Basically, you can think of it as turning the words in a given text document into features, represented by a matrix. Rather than explicitly defining our features, as we did for the donor cla...
```python
bill_titles = ['An act to amend Section 44277 of the Education Code, relating to teachers.']
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(bill_titles).toarray()
print(features)
print(vectorizer.get_feature_names())
```
class5_1/vectorization.ipynb
datapolitan/lede_algorithms
gpl-2.0
The following block of code illustrates how to evaluate a single sequence. Additionally we show how one can pass in the information using NumPy arrays.
```python
# load dictionaries
query_wl = [line.rstrip('\n') for line in open(data['query']['file'])]
slots_wl = [line.rstrip('\n') for line in open(data['slots']['file'])]
query_dict = {query_wl[i]: i for i in range(len(query_wl))}
slots_dict = {slots_wl[i]: i for i in range(len(slots_wl))}

# let's run a sequence through
# seq = ...
```
DAT236x Deep Learning Explained/Lab6_TextClassification_with_LSTM.ipynb
bourneli/deep-learning-notes
mit
Reformat data
```python
from utils import preprocess_data

raw_data = []
for ii in data_list:
    im = Image.open(ii)
    idat = np.array(im) > 100
    idat = idat.flatten()
    raw_data.append(idat)

np.random.seed(111)
np.random.shuffle(raw_data)
data = preprocess_data(raw_data)
data
```
ipynb/ART1_demo_png.ipynb
amanahuja/adaptive_resonance_networks
mit
Visualize the input data
```python
# Examine one
im = Image.open(data_list[0])
im
data.shape

%matplotlib inline
from matplotlib.pyplot import imshow
import matplotlib.pyplot as plt
from utils import display_single_png, display_all_png

display_single_png(idat)
plt.show()
display_all_png(data)
plt.show()
```
DO
```python
from ART1 import ART1
from collections import defaultdict

# create network
input_row_size = 100
max_categories = 8
rho = 0.4
network = ART1(n=input_row_size, m=max_categories, rho=rho)

# preprocess data
data_cleaned = preprocess_data(data)

# shuffle data?
np.random.seed(155)
np.random.shuffle(data_cleaned...
```
Visualize cluster weights as an input pattern The cluster unit weights can be represented visually, representing the learned patterns for that unit.
```python
# print learned clusters
for idx, cluster in enumerate(network.Bij.T):
    print("Cluster Unit #{}".format(idx))
    display_single_output(cluster)
```
Sanity check: predict cluster centers What if we take one of these cluster "centers" and feed it back into the network for prediction?
```python
# Cluster index
clust_idx = 2
print("Target: ", clust_idx)
idata = network.Bij.T[clust_idx]
idata = idata.astype(bool).astype(int)
display_single_output(idata)

# Prediction
pred = network.predict(idata)
print("prediction (cluster index): ", pred)
```
Examine the predictions visually
```python
# output results, row by row
output_dict = defaultdict(list)
for row, row_cleaned in zip(data, data_cleaned):
    pred = network.predict(row_cleaned)
    output_dict[pred].append(row)

for k, v in output_dict.items():
    print("Cluster #{} ({} members)".format(k, len(v)))
    print('-' * 20)
    for row in v:
    ...
```
Sanity check: Modify input pattern randomly By making random variations of the input pattern, we can judge the ability of the network to generalize input patterns not seen in the training data.
```python
# number of tests
ntests = 10
# number of bits in the pattern to modify
nchanges = 30

for test in range(ntests):
    # cluster index
    clust_idx = np.random.randint(network.output_size)
    print("Target: ", clust_idx)
    idata = network.Bij.T[clust_idx]
    idata = idata.astype(bool).astype(int)
    # modify data
    for...
```
The above is the main function of the Levenberg-Marquardt algorithm. The code may appear daunting at first, but all it does is implement the Levenberg-Marquardt update rule and some checks of convergence. We can now apply it to the problem with relative ease to obtain a numerical solution for our parameter vector.
```python
solved_x = levenberg_marquardt(d, t, x, sinusoid_residual, sinusoid_jacobian)
print(solved_x)
```
2_Mathematical_Groundwork/2_11_least_squares.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
A final, important thing to note is that the Levenberg-Marquardt algorithm is already implemented in Python: it is used in scipy.optimize.leastsq. This is often useful for rapid numerical solution without the need for an analytic Jacobian. As a simple proof, we can call the built-in method to verify our results.
```python
x = np.array([8., 43.5, 1.05])
leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))
print("scipy.optimize.leastsq: ", leastsq_x[0])
print("Our LM: ", solved_x)

plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
```
In this case, the built-in method clearly fails. I have done this deliberately to illustrate a point - a given implementation of an algorithm might not be the best one for your application. In this case, the manner in which the tuning parameters are handled prevents the solution from converging correctly. This can be a...
```python
x = np.array([8., 35., 1.05])
leastsq_x = leastsq(sinusoid_residual, x, args=(t, d))
print("scipy.optimize.leastsq: ", leastsq_x[0])
print("Our LM: ", solved_x)

plt.plot(t, d, label="Data")
plt.plot(t, sinusoid(leastsq_x[0], t), label="optimize.leastsq")
plt.xlabel("t")
plt.legend(loc='upper right')
plt.show()
```
Adding Datasets Next let's add a mesh dataset so that we can plot our Wilson-Devinney style meshes
b.add_dataset('mesh', times=np.linspace(0,10,6), dataset='mesh01', columns=['visibilities'])
2.1/examples/mesh_wd.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction.
afig, mplfig = b['secondary@mesh01@model'].plot(time=0.0, x='us', y='ws', ec='None', fc='visibilities', show=True)
Setup
```python
pudl_settings = pudl.workspace.setup.get_defaults()
settings_file_name = 'etl_full.yml'
etl_settings = EtlSettings.from_yaml(
    pathlib.Path(pudl_settings['settings_dir'], settings_file_name))
validated_etl_settings = etl_settings.datasets
datasets = validated_etl_settings.get_datasets()
eia_settings ...
```
devtools/harvesting_debug.ipynb
catalyst-cooperative/pudl
mit
You can skip the settings step above and set these years/tables yourself here without using the settings files; just know that they are not validated below, so they could be wrong and fail after some time. It is HIGHLY RECOMMENDED that you use all of the years/tables.
```python
eia860_tables = eia_settings.eia860.tables
eia860_years = eia_settings.eia860.years
eia860m = eia_settings.eia860.eia860m
eia923_tables = eia_settings.eia923.tables
eia923_years = eia_settings.eia923.years

ds = Datastore()
```
Run the extract step & phase 1 transform step. This is pulled from pudl.etl._etl_eia().
```python
# Extract EIA forms 923, 860
eia923_raw_dfs = pudl.extract.eia923.Extractor(ds).extract(
    settings=eia_settings.eia923
)
eia860_raw_dfs = pudl.extract.eia860.Extractor(ds).extract(
    settings=eia_settings.eia860
)
# if we are trying to add the EIA 860M YTD data, then extract it and append
if eia860m:
    eia860m_...
```
You have to re-run this cell every time you want to re-run the harvesting cell below (because pudl.transform.eia.harvesting removes columns from the dfs). This cell lets you start with a fresh eia_transformed_dfs without needing to re-run the 860/923 transforms.
```python
# create an eia transformed dfs dictionary
eia_transformed_dfs = eia860_transformed_dfs.copy()
eia_transformed_dfs.update(eia923_transformed_dfs.copy())

# Do some final cleanup and assign appropriate types:
eia_transformed_dfs = {
    name: convert_cols_dtypes(df, data_source="eia")
    for name, df in eia_transformed...
```
Run harvest w/ debug=True
```python
# we want to investigate the harvesting of the plants in this case...
entity = 'generators'
# create the empty entities df to fill up
entities_dfs = {}
entities_dfs, eia_transformed_dfs, col_dfs = (
    pudl.transform.eia.harvesting(
        entity, eia_transformed_dfs, entities_dfs, debug=True)
)
```
Use col_dfs to explore harvested values
```python
pmc = col_dfs['prime_mover_code']
pmc.prime_mover_code.unique()
```
Define a simple schema (the only info is location and point num)
```python
schema = {'geometry': 'Point', 'properties': {'num': 'int'}}

# copy the projection from the tif file so we put the groundtrack
# in the same coordinates
driver = 'ESRI Shapefile'
raster = rasterio.open(tiff_file, 'r')
crs = raster.crs.to_dict()
proj = pyproj.Proj(crs)
with fiona.open("ground_track", "w", driver=driver, ...
```
notebooks/shapefiles.ipynb
a301-teaching/a301_code
mit
Sparse 2d interpolation. In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: the square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$; the values of $f(x,y)$ are zero on the boundary of the square at integer-spaced points; the value of $f$ is known...
```python
x = np.array([-5, -5, -5, -5, -5, -5, -5, -5, -5, -5])
x = np.append(x, range(-5, 6))
x = np.append(x, [5, 5, 5, 5, 5, 5, 5, 5, 5, 5])
x = np.append(x, range(4, -5, -1))
x = np.append(x, 0)

y = np.array(range(-5, 6))
y = np.append(y, [5, 5, 5, 5, 5, 5, 5, 5, 5, 5])
y = np.append(y, range(4, -6, -1))
y = np.append(y, [-5, -5, -5, -5, -5, -5, -5, -5, -5])
y = np.appen...
```
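The construction above is cut off; a self-contained sketch of the whole setup using `scipy.interpolate.griddata` (the center value of 1.0 is purely illustrative, since the original truncates before stating it) could be:

```python
import numpy as np
from scipy.interpolate import griddata

# unique integer-spaced boundary points of the [-5, 5] square, where f = 0
boundary = [(i, j) for i in range(-5, 6) for j in range(-5, 6)
            if abs(i) == 5 or abs(j) == 5]
points = np.array(boundary + [(0, 0)], dtype=float)
values = np.zeros(len(points))
values[-1] = 1.0  # hypothetical center value, for illustration only

# interpolate onto a fine grid with a smooth (Clough-Tocher) interpolant
xnew, ynew = np.meshgrid(np.linspace(-5, 5, 101), np.linspace(-5, 5, 101))
Fnew = griddata(points, values, (xnew, ynew), method='cubic')
```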
assignments/assignment08/InterpolationEx02.ipynb
joshnsolomon/phys202-2015-work
mit
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
plt.figure(figsize=(10,7))
plt.contourf(xnew,ynew,Fnew,cmap='gist_rainbow');
plt.colorbar();
plt.title('contour plot of scalar field f(x,y)');

assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
joshnsolomon/phys202-2015-work
mit
Problem: Implement the Forward Algorithm Now it's time to put it all together. We create a table to hold the results and build it up from front to back. Along with the results, we return the marginal probability, which can be compared with the backward algorithm's below.
import numpy as np
np.set_printoptions(suppress=True)

def forward(params, observations):
    pi, A, B = params
    N = len(observations)
    S = pi.shape[0]
    alpha = np.zeros((N, S))

    # base case
    # p(z1) * p(x1|z1)
    alpha[0, :] = pi * B[observations[0], :]

    # recursive case - YOUR CODE ...
handsOn_lecture18_gmm-hmm/handsOn_lecture18_gmm-hmm.ipynb
eecs445-f16/umich-eecs445-f16
mit
Problem: Implement the Backward Algorithm If you implemented both correctly, the second return value (the marginals) from each method should match.
def backward(params, observations):
    pi, A, B = params
    N = len(observations)
    S = pi.shape[0]
    beta = np.zeros((N, S))

    # base case
    beta[N-1, :] = 1

    # recursive case -- YOUR CODE GOES HERE!

    return (beta, np.sum(pi * B[observations[0], :] * beta[0,:]))

backward((pi, A...
handsOn_lecture18_gmm-hmm/handsOn_lecture18_gmm-hmm.ipynb
eecs445-f16/umich-eecs445-f16
mit
Model Averaging <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/addons/tutorials/average_optimizers_callback"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/g...
!pip install -U tensorflow-addons

import tensorflow as tf
import tensorflow_addons as tfa

import numpy as np
import os
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Build the model
def create_model(opt):
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(optimi...
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Prepare the dataset
#Load Fashion MNIST dataset
train, test = tf.keras.datasets.fashion_mnist.load_data()
images, labels = train
images = images/255.0
labels = labels.astype(np.int32)
fmnist_train_ds = tf.data.Dataset.from_tensor_slices((images, labels))
fmnist_train_ds = fmnist_train_ds.shuffle(5000).batch(32)
test_images, test_labels...
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Here we compare three optimizers: plain SGD, SGD with a moving average, and SGD with stochastic weight averaging, and see how they perform on the same model.
#Optimizers
sgd = tf.keras.optimizers.SGD(0.01)
moving_avg_sgd = tfa.optimizers.MovingAverage(sgd)
stocastic_avg_sgd = tfa.optimizers.SWA(sgd)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
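Conceptually, both wrappers maintain a second copy of the weights: MovingAverage keeps an exponential moving average, while SWA keeps a running equal-weight mean. A framework-free sketch of the two update rules (the function names and the decay value are mine, not TensorFlow Addons API):

```python
import numpy as np

def ema_update(avg_w, new_w, decay=0.99):
    # exponential moving average of the weights, MovingAverage-style
    return decay * avg_w + (1.0 - decay) * new_w

def swa_update(avg_w, new_w, n_averaged):
    # running equal-weight mean over snapshots, SWA-style
    return (avg_w * n_averaged + new_w) / (n_averaged + 1)
```

In training, these updates would be applied to a shadow copy of the weights after each step (EMA) or after each averaging period (SWA), which is the role the `tfa` wrappers play around the inner SGD optimizer.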
Both the MovingAverage and StochasticAverage optimizers use ModelAverageCheckpoint.
#Callback
checkpoint_path = "./training/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_dir,
                                                 save_weights_only=True,
                                                 verbose=1)
...
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Train the model Vanilla SGD optimizer
#Build Model
model = create_model(sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[cp_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy)
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Moving average SGD
#Build Model
model = create_model(moving_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accuracy...
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Stochastic weight averaging SGD
#Build Model
model = create_model(stocastic_avg_sgd)

#Train the network
model.fit(fmnist_train_ds, epochs=5, callbacks=[avg_callback])

#Evaluate results
model.load_weights(checkpoint_dir)
loss, accuracy = model.evaluate(test_images, test_labels, batch_size=32, verbose=2)
print("Loss :", loss)
print("Accuracy :", accur...
site/zh-cn/addons/tutorials/average_optimizers_callback.ipynb
tensorflow/docs-l10n
apache-2.0
Make predictions from the new data In the rest of the lab, we'll reference the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook. The add_traffic_last_5min function below will query the traffic_realtime table to find the most recent...
# TODO 2a. Write a function to take most recent entry in `traffic_realtime`
# table and add it to instance.
def add_traffic_last_5min(instance):
    bq = bigquery.Client()
    query_string = """
    SELECT * FROM `taxifare.traffic_realtime`
    ORDER BY time DESC
    LIMIT 1
    """
    trips = bq...
notebooks/building_production_ml_systems/solutions/4b_streaming_data_inference_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Finally, we'll use the Python API to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predictions change with time as our realtime traffic information changes as well. Copy the ENDPOINT_RESOURCENAME from the deployment in the ...
# TODO 2b. Write code to call prediction on instance using realtime traffic
# info. Hint: Look at this sample
# https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py

# TODO: Copy the `ENDPOINT_RESOURCENAME` from the deployment in the previous
# lab.
ENDPOIN...
notebooks/building_production_ml_systems/solutions/4b_streaming_data_inference_vertex.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Ad hoc Polymorphism and Object tables Ad hoc polymorphism is the notion that different functions are called to accomplish the same task for arguments of different types. This is what powers the Python data model with its dunder methods. If you call len(arg) or iter(arg), Python delegates to arg's __len__ or __iter__ by looking t...
mydeck.__class__.__dict__
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
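The lookup-in-the-class-table behavior is easy to see with a minimal class of our own (a hypothetical example, not from the lecture's FrenchDeck):

```python
class Greeter:
    def __len__(self):
        return 3

g = Greeter()
# len() finds __len__ in the class's attribute table (Greeter.__dict__),
# not in the instance's table (g.__dict__)
print('__len__' in Greeter.__dict__)  # True
print('__len__' in g.__dict__)        # False
print(len(g))                         # 3
```

This is why `mydeck.__class__.__dict__` above is the place to look for the dunder methods that drive `len(mydeck)` and friends.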
What if we don't find a method in the table? Either this is a runtime error, or we search in the "parent" classes of this class. We can see all such attributes by using dir:
dir(mydeck)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
This works because the lookup gets sent up the class hierarchy:
hash(mydeck)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
You can see what's upward of the French Deck by inspecting the Method Resolution Order using the mro method.
FrenchDeck.mro()
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
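For a class hierarchy of our own, mro() returns the C3 linearization, which is exactly the order attribute lookup walks. A toy diamond hierarchy (hypothetical class names) makes the order visible:

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# lookup order: D first, then its bases left to right, then the shared base
print([cls.__name__ for cls in D.mro()])  # ['D', 'B', 'C', 'A', 'object']
```

Note that the shared base A appears only once, after both B and C, and object is always last.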
Data Structures Computer programs don't only perform calculations; they also store and retrieve information Data structures and the algorithms that operate on them are at the core of computer science Data structures are quite general Any data representation and associated operations e.g. integers, floats, arrays, cla...
alist = [1,2,3,4]
len(alist)  # calls alist.__len__
alist[2]    # calls alist.__getitem__(2)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Lists also support slicing
alist[2:4]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
How does this work? We will create a dummy sequence, which does not create any storage. It just implements the protocol.
class DummySeq:
    def __len__(self):
        return 42

    def __getitem__(self, index):
        return index

d = DummySeq()
len(d)
d[5]
d[67:98]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
The "slice object" Slicing creates a slice object for us of the form slice(start, stop, step) and then Python calls seq.__getitem__(slice(start, stop, step)). Two-dimensional slicing is also possible.
d[67:98:2,1]
d[67:98:2,1:10]
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
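A tiny probe class (hypothetical name; it simply echoes back whatever index Python passes to __getitem__) makes the slice object visible:

```python
class Probe:
    def __getitem__(self, index):
        return index  # echo whatever Python passed in

p = Probe()
print(p[67:98:2])     # slice(67, 98, 2)
print(p[67:98:2, 1])  # a tuple: (slice(67, 98, 2), 1)
# a slice can clamp itself against a given sequence length:
print(slice(67, 98, 2).indices(70))  # (67, 70, 2)
```

The two-dimensional form shows why `d[67:98:2, 1]` works above: Python bundles the comma-separated indices into a tuple before calling `__getitem__`.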
Example
# Adapted from Example 10-6 from Fluent Python
import numbers
import reprlib  # like repr but w/ limits on sizes of returned strings

class NewSeq:
    def __init__(self, iterator):
        self._storage = list(iterator)

    def __repr__(self):
        components = reprlib.repr(self._storage)
        components = ...
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
Linked Lists Remember, a name in Python points to its value. We've seen lists whose last element is actually a pointer to another list. This leads to the idea of a linked list, which we'll use to illustrate sequences. Nested Pairs Stanford CS61a: Nested Pairs, this is the box and pointer notation. In Python:
pair = (1,2)
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
This representation lacks a certain power. A few generalizations: * pair = (1, (2, None)) * linked_list = (1, (2, (3, (4, None)))) The second example leads to something like: Recursive Lists. Here's what things look like in PythonTutor: PythonTutor Example. Quick Linked List implementation
empty_ll = None

def make_ll(first, rest): # Make a linked list
    return (first, rest)

def first(ll): # Get the first entry of a linked list
    return ll[0]

def rest(ll): # Get the second entry of a linked list
    return ll[1]

ll_1 = make_ll(1, make_ll(2, make_ll(3, empty_ll))) # Recursively generate a linked li...
lectures/L12/L12.ipynb
crystalzhaizhai/cs207_yi_zhai
mit
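With those constructors in hand, traversal operations fall out naturally; here is a sketch (ll_len and ll_to_list are my additions, not part of the lecture code):

```python
empty_ll = None

def make_ll(first, rest):
    # a linked list node is just a (first, rest) pair
    return (first, rest)

def ll_len(ll):
    # an empty list has length 0; otherwise 1 plus the length of the rest
    return 0 if ll is None else 1 + ll_len(ll[1])

def ll_to_list(ll):
    # iterative walk collecting the first entries into a Python list
    out = []
    while ll is not None:
        out.append(ll[0])
        ll = ll[1]
    return out

ll_1 = make_ll(1, make_ll(2, make_ll(3, empty_ll)))
```

The recursive `ll_len` mirrors the recursive structure of the pairs themselves, which is the point of the nested-pair representation.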
2. Subset and calculate After we've extracted values from a list, we can use them to perform additional calculations. We can also concatenate list elements.
""" Instructions: + Using a combination of list subsetting and variable assignment, create a new variable, eat_sleep_area, that contains the sum of the area of the kitchen and the area of the bedroom. + Print this new variable "eat_sleep_area". """ # Create the areas list areas = ["ha...
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
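One way to complete the truncated exercise above, assuming the standard areas list used throughout this course (the index positions 3 and 7 are the assumption to verify against your own list):

```python
# Create the areas list (name/value pairs alternate)
areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0,
         "bedroom", 10.75, "bathroom", 9.50]

# kitchen area sits at index 3, bedroom area at index 7
eat_sleep_area = areas[3] + areas[7]
print(eat_sleep_area)  # 28.75
```

Because names and values alternate, each area value sits at an odd index, one past its room name.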
3. Slicing and dicing Slicing means selecting multiple elements from our list. It's like splitting a list into sub-lists.
""" Instructions: + Use slicing to create a list, "downstairs", that contains the first 6 elements of "areas". + Do a similar thing to create a new variable, "upstairs", that contains the last 4 elements of areas. + Print both "downstairs" and "upstairs" using print(). """ #...
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
4. Slicing and dicing (2) It is also possible to slice without explicitly defining the starting index. Syntax: list1[ &lt; undefined &gt; : &lt; end &gt; ] Similarly, it's also possible to slice without explicitly defining the ending index. Syntax: list2[ begin : &lt; undefined &gt; ] It's possible to print the en...
""" Instructions: + Use slicing to create the lists, "downstairs" and "upstairs" again. - Without using any indexes, unless nessecery. """ # Create the areas list areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0, "bedroom", 10.75, "bathroom", 9.50] # Alternative slicing to create...
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
5. Subsetting lists of lists We can subset "lists of lists". Syntax: list[ [ sub-list-0 ], [ sub-list-1 ], ... , [ sub-list-(n-1) ] ], where the sub-lists sit at indices 0, 1, ..., n-1. We can also perform both "indexing" and "slicing" on a list of lists. Syntax: list[ < sub-list-index > ] [ < begin > : ...
""" Problem definition: What will house[-1][1] return? """ # Ans : float, 9.5 as the bathroom area.
Courses/DAT-208x/DAT208X - Week 2 - Section 2 - Subsetting Lists.ipynb
dataDogma/Computer-Science
gpl-3.0
Computing source timecourses with an XFit-like multi-dipole model MEGIN's XFit program offers a "guided ECD modeling" interface, where multiple dipoles can be fitted interactively. By manually selecting subsets of sensors and time ranges, dipoles can be fitted to specific signal components. Then, source timecourses can...
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD-3-Clause
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Importing everything and setting up the data paths for the MNE-Sample dataset.
import mne
from mne.datasets import sample
from mne.channels import read_vectorview_selection
from mne.minimum_norm import (make_inverse_operator, apply_inverse,
                              apply_inverse_epochs)
import matplotlib.pyplot as plt
import numpy as np

data_path = sample.data_path()
meg_path = data_path / ...
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Read the MEG data from the audvis experiment. Make epochs and evokeds for the left and right auditory conditions.
raw = mne.io.read_raw_fif(raw_fname)
raw = raw.pick_types(meg=True, eog=True, stim=True)
info = raw.info

# Create epochs for auditory events
events = mne.find_events(raw)
event_id = dict(right=1, left=2)
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.3,
                    baseline=(None, 0), ...
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Guided dipole modeling, meaning fitting dipoles to a manually selected subset of sensors at a manually chosen time, can now be performed in MEGIN's XFit on the evokeds we computed above. However, it is also possible to do it completely in MNE-Python.
# Setup conductor model
cov = mne.read_cov(cov_fname)
bem = mne.read_bem_solution(bem_fname)

# Fit two dipoles at t=80ms. The first dipole is fitted using only the sensors
# on the left side of the helmet. The second dipole is fitted using only the
# sensors on the right side of the helmet.
picks_left = read_vectorvie...
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now that we have the location and orientations of the dipoles, compute the full timecourses using MNE, assigning activity to both dipoles at the same time while preventing leakage between the two. We use a very low lambda value to ensure both dipoles are fully used.
fwd, _ = mne.make_forward_dipole([dip_left, dip_right], bem, info)

# Apply MNE inverse
inv = make_inverse_operator(info, fwd, cov, fixed=True, depth=0)
stc_left = apply_inverse(evoked_left, inv, method='MNE', lambda2=1E-6)
stc_right = apply_inverse(evoked_right, inv, method='MNE', lambda2=1E-6)

# Plot the timecourses...
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can also fit the timecourses to single epochs. Here, we do it for each experimental condition separately.
stcs_left = apply_inverse_epochs(epochs['left'], inv, lambda2=1E-6,
                                 method='MNE')
stcs_right = apply_inverse_epochs(epochs['right'], inv, lambda2=1E-6,
                                 method='MNE')
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
To summarize and visualize the single-epoch dipole amplitudes, we will create a detailed plot of the mean amplitude of the dipoles during different experimental conditions.
# Summarize the single epoch timecourses by computing the mean amplitude from
# 60-90ms.
amplitudes_left = []
amplitudes_right = []
for stc in stcs_left:
    amplitudes_left.append(stc.crop(0.06, 0.09).mean().data)
for stc in stcs_right:
    amplitudes_right.append(stc.crop(0.06, 0.09).mean().data)
amplitudes = np.vsta...
dev/_downloads/7a15f28878eb067b0af68c33433f47f6/multi_dipole_model.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause