Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage cost...
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:... | notebooks/community/sdk/sdk_automl_image_classification_batch.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bu...
REGION = "us-central1"  # @param {type: "string"}
Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between resources created by different users, create a timestamp for each session and append it to the names of the resources you create in this tutorial.
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the Cloud Console, go ...
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
i...
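The truncated cell above gates the Colab-only authentication on the runtime environment. One way to sketch that check (the helper name `running_in_colab` is my own, not part of the notebook):

```python
import sys

def running_in_colab():
    # Colab injects the google.colab package into every runtime,
    # so its presence in sys.modules is a reasonable proxy.
    return "google.colab" in sys.modules

if running_in_colab():
    # These two lines are the standard Colab authentication flow.
    from google.colab import auth
    auth.authenticate_user()
```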
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set ...
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
Only if your bucket doesn't already exist: run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
import google.cloud.aiplatform as aip
Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Tutorial
Now you are ready to start creating your own AutoML image classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
IMPORT_FILE = (
    "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
Quick peek at your data
This tutorial uses a version of the Flowers dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows...
if "IMPORT_FILES" in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Create the Dataset
Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import...
dataset = aip.ImageDataset.create(
    display_name="Flowers" + "_" + TIMESTAMP,
    gcs_source=[IMPORT_FILE],
    import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification,
)
print(dataset.resource_name)
Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters:
display_name: The human readable name for the T...
dag = aip.AutoMLImageTrainingJob(
    display_name="flowers_" + TIMESTAMP,
    prediction_type="classification",
    multi_label=False,
    model_type="CLOUD",
    base_model=None,
)
print(dag)
Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for tra...
model = dag.run(
    dataset=dataset,
    model_display_name="flowers_" + TIMESTAMP,
    training_fraction_split=0.8,
    validation_fraction_split=0.1,
    test_fraction_split=0.1,
    budget_milli_node_hours=8000,
    disable_early_stopping=False,
)
Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in y...
# Get model resource ID
models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = mode...
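The truncated cell lists the model's evaluations through the GAPIC client. As a hedged illustration of what you might do with the response, here is a tiny helper that pulls the numeric metrics out of an evaluation-shaped mapping — the structure below is a simplified stand-in for illustration, not the exact Vertex AI response type:

```python
def numeric_metrics(evaluation):
    # Keep only the numeric entries of the metrics mapping.
    metrics = evaluation.get("metrics", {})
    return {k: v for k, v in metrics.items() if isinstance(v, (int, float))}

# Fabricated example loosely shaped like an AutoML classification evaluation:
fake_evaluation = {
    "name": "projects/.../models/.../evaluations/123",
    "metrics": {"auPrc": 0.98, "logLoss": 0.12, "confidenceMetrics": []},
}
print(numeric_metrics(fake_evaluation))  # → {'auPrc': 0.98, 'logLoss': 0.12}
```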
Send a batch prediction request
Send a batch prediction request to your trained model.
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate ...
test_items = !gsutil cat $IMPORT_FILE | head -n2
if len(str(test_items[0]).split(",")) == 3:
    _, test_item_1, test_label_1 = str(test_items[0]).split(",")
    _, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
    test_item_1, test_label_1 = str(test_items[0]).split(",")
    test_item_2, test_label_2...
Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/valu...
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    data = {"content": test_item_1, "mime_type": "image/jpeg"}
    f.write(json.dumps(data) + "\n")
    data = {"content": test_item_2, "mime_type": "image/jpeg"}
    f.write(json.dumps(dat...
Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_dest...
batch_predict_job = model.batch_predict(
    job_display_name="flowers_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix=BUCKET_NAME,
    sync=False,
)
print(batch_predict_job)
Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, you can set the parameter sync to True in the batch_predict() call to block until the batch prediction job completes.
batch_predict_job.wait()
Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or m...
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
    if blob.name.split("/")[-1].startswith("prediction"):
        prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
    gfile_...
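Each prediction file is JSONL, one JSON object per line pairing an instance with its prediction. A minimal sketch of extracting the top label from one such line — the sample line below is fabricated for illustration, though the field names follow the image classification batch output format:

```python
import json

def top_label(line):
    # Parse one JSONL result line and return the highest-confidence class.
    result = json.loads(line)
    prediction = result["prediction"]
    pairs = zip(prediction["displayNames"], prediction["confidences"])
    return max(pairs, key=lambda pair: pair[1])

sample = ('{"instance": {"content": "gs://bucket/img.jpg", "mime_type": "image/jpeg"},'
          ' "prediction": {"displayNames": ["daisy", "rose"], "confidences": [0.9, 0.1]}}')
print(top_label(sample))  # → ('daisy', 0.9)
```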
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job...
delete_all = True
if delete_all:
    # Delete the dataset using the Vertex dataset object
    try:
        if "dataset" in globals():
            dataset.delete()
    except Exception as e:
        print(e)
    # Delete the model using the Vertex model object
    try:
        if "model" in globals():
            mode...
Runtime Analysis
Using "finding the nth Fibonacci number" as a computational object to think with.
%pylab inline
# Import libraries
from __future__ import absolute_import, division, print_function
import math
from time import time
import matplotlib.pyplot as pyplt | CS/Part_1_Complexity_RunTimeAnalysis.ipynb | omoju/Fundamentals | gpl-3.0 |
Fibonacci
Excerpt from Algorithms by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani
Fibonacci is most widely known for his famous sequence of numbers
$0,1,1,2,3,5,8,13,21,34,...,$
each the sum of its two immediate predecessors. More formally, the Fibonacci numbers $F_n$ are generated by the simple rule
$F_n =...
from IPython.display import YouTubeVideo
YouTubeVideo('ls0GsJyLVLw')
def fib(n):
    if n == 0 or n == 1:
        return n
    else:
        return fib(n-2) + fib(n-1)
fib(5)
Whenever we have an algorithm, there are three questions we always ask about it:
Is it correct?
How much time does it take, as a function of n?
And can we do better?
1. Correctness
For this question, the answer is yes because it is almost a line by line implementation of the definition of the Fibonacci sequence.
2. ...
# This function provides a way to track function calls
def count(f):
    def counted(n):
        counted.call_count += 1
        return f(n)
    counted.call_count = 0
    return counted
fib = count(fib)
t0 = time()
n = 5
fib(n)
print ('This recursive implementation of fib(', n, ') took', round(time() - t0, 4), '...
3. Can we do better?
A polynomial algorithm for $fib$
Let’s try to understand why $fib$ is so slow. fib.call_count shows the count of recursive invocations triggered by a single call to $fib(5)$, which is 15. If you sketched it out, you will notice that many computations are repeated!
A more sensible scheme would store...
def memo(f):
    cache = {}
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)  # Make a mapping between the key "n" and the return value of f(n)
        return cache[n]
    return memoized
fib = memo(fib)
t0 = time()
n = 400
fib(n)
print ('This memoized implementation of fib(', n, ') took',...
How long does $fib2$ take?
- The inner loop consists of a single computer step and is executed $n − 1$ times.
- Therefore the number of computer steps used by $fib2$ is linear in $n$.
From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute $F_{200}$ ...
fib2(200)
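The cell above calls fib2, whose definition was lost in this excerpt. A sketch of the iterative, linear-time version the text describes (the name fib2 follows the excerpt's convention; treat this as a reconstruction, not the original cell):

```python
def fib2(n):
    # Iterative Fibonacci: the single inner loop runs n - 1 times,
    # so the number of additions is linear in n.
    if n == 0:
        return 0
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr

print(fib2(10))  # → 55
```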
Instead of reporting that an algorithm takes, say, $5n^3 + 4n + 3$ steps on an input of size $n$, it is much simpler to leave out lower-order terms such as $4n$ and $3$ (which become insignificant as $n$ grows), and even the detail of the coefficient $5$ in the leading term (computers will be five times faster in a fe...
t = arange(0, 15, 1)
f1 = t * t
f2 = 2*t + 20
pyplt.title('Quadratic time vs. linear time')
plot(t, f1, t, f2)
pyplt.annotate('$n^2$', xy=(8, 1), xytext=(10, 108))
pyplt.annotate('$2n + 20$', xy=(5, 1), xytext=(10, 45))
pyplt.xlabel('n')
pyplt.ylabel('Run time')
pyplt.grid(True)
Config
Automatically discover the paths to various data folders and compose the project structure. | project = kg.Project.discover() | notebooks/unused/feature-oofp-nn-lstm-with-activations.ipynb | YuriyGuts/kaggle-quora-question-pairs | mit |
Identifier for storing these features on disk and referring to them later.
feature_list_id = 'oofp_nn_lstm_with_activations'
Make subsequent NN runs reproducible.
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
Read data
Word embedding lookup matrix.
embedding_matrix = kg.io.load(project.aux_dir + 'fasttext_vocab_embedding_matrix.pickle')
Padded sequences of word indices for every question.
X_train_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_train.pickle')
X_train_q2 = kg.io.load(project.preprocessed_data_dir + 'sequences_q2_fasttext_train.pickle')
X_test_q1 = kg.io.load(project.preprocessed_data_dir + 'sequences_q1_fasttext_test.pickle')
X_test_q2 = kg.io.load(project.preproce...
Word embedding properties.
EMBEDDING_DIM = embedding_matrix.shape[-1]
VOCAB_LENGTH = embedding_matrix.shape[0]
MAX_SEQUENCE_LENGTH = X_train_q1.shape[-1]
print(EMBEDDING_DIM, VOCAB_LENGTH, MAX_SEQUENCE_LENGTH)
Define models
def zero_loss(y_true, y_pred):
    return K.zeros((1,))
def create_model_question_branch():
    input_q = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype='int32')
    embedding_q = Embedding(
        VOCAB_LENGTH,
        EMBEDDING_DIM,
        weights=[embedding_matrix],
        input_length=MAX_SEQUENCE_LENGTH,
        ...
Partition the data
NUM_FOLDS = 5
kfold = StratifiedKFold(
    n_splits=NUM_FOLDS,
    shuffle=True,
    random_state=RANDOM_SEED
)
Define hyperparameters
BATCH_SIZE = 2048
MAX_EPOCHS = 200
Best values picked by Bayesian optimization.
model_params = {
    'dense_dropout_rate': 0.075,
    'lstm_dropout_rate': 0.332,
    'num_dense': 130,
    'num_lstm': 300,
}
feature_output_size = model_params['num_lstm'] * 2
Create placeholders for out-of-fold predictions.
y_train_oofp = np.zeros_like(y_train, dtype='float32')
y_train_oofp_features = np.zeros((len(y_train), feature_output_size), dtype='float32')
y_test_oofp = np.zeros((len(X_test_q1), NUM_FOLDS), dtype='float32')
y_test_oofp_features = np.zeros((len(X_test_q1), feature_output_size), dtype='float32')
The path where the best weights of the current model will be saved.
model_checkpoint_path = project.temp_dir + 'fold-checkpoint-' + feature_list_id + '.h5'
Fit the folds and compute out-of-fold predictions
%%time
# Iterate through folds.
for fold_num, (ix_train, ix_val) in enumerate(kfold.split(X_train_q1, y_train)):
    # Augment the training set by mirroring the pairs.
    X_fold_train_q1 = np.vstack([X_train_q1[ix_train], X_train_q2[ix_train]])
    X_fold_train_q2 = np.vstack([X_train_q2[ix_train], X_train_q1[ix...
Save features
feature_names = [feature_list_id]
features_train = y_train_oofp.reshape((-1, 1))
features_test = np.mean(y_test_oofp, axis=1).reshape((-1, 1))
project.save_features(features_train, features_test, feature_names, feature_list_id)
Querying SQL (advanced)
NOTE: THIS DOC IS CURRENTLY IN OUTLINE FORM
In this tutorial, we'll use a dataset of television ratings.
copying data in, and getting a table from SQL
filtering out rows, and aggregating data
looking at shifts in ratings between seasons
checking for abnormalities in the data
Setting up
import pandas as pd
from siuba.tests.helpers import copy_to_sql
from siuba import *
from siuba.dply.vector import lag, desc, row_number
from siuba.dply.string import str_c
from siuba.sql import LazyTbl
data_url = "https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-01-08/IMDb_Economist_... | docs/draft-old-pages/intro_sql_interm.ipynb | machow/siuba | mit |
Inspecting a single show
buffy = (tbl_ratings
    >> filter(_.title == "Buffy the Vampire Slayer")
    >> collect()
)
buffy
buffy >> summarize(avg_rating = _.av_rating.mean())
Average rating per show, along with dates
avg_ratings = (tbl_ratings
    >> group_by(_.title)
    >> summarize(
        avg_rating = _.av_rating.mean(),
        date_range = str_c(_.date.dt.year.max(), " - ", _.date.dt.year.min())
    )
)
avg_ratings
Biggest changes in ratings between two seasons
top_4_shifts = (tbl_ratings
    >> group_by(_.title)
    >> arrange(_.seasonNumber)
    >> mutate(rating_shift = _.av_rating - lag(_.av_rating))
    >> summarize(
        max_shift = _.rating_shift.max()
    )
    >> arrange(-_.max_shift)
    >> head(4)
)
top_4_shifts
big_shift_series = (top_4_shifts
    >> select(_.title)
    >> ...
Do we have full data for each season?
mismatches = (tbl_ratings
    >> arrange(_.title, _.seasonNumber)
    >> group_by(_.title)
    >> mutate(
        row = row_number(_),
        mismatch = _.row != _.seasonNumber
    )
    >> filter(_.mismatch.any())
    >> ungroup()
)
mismatches
mismatches >> distinct(_.title) >> count() >> collect()
Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added, the resulting system has much richer and more interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a\frac{d\theta}{dt} + b\sin(\omega_0 t)
$$
g = 9.81  # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax)) | assignments/assignment10/ODEsEx03.ipynb | jpilgram/phys202-2015-work | mit |
Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven pendulum. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
# I worked with James A and Hunter T.
def derivs(y, t, a, b, omega0):
    """Compute the derivatives of the damped, driven pendulum.
    Parameters
    ----------
    y : ndarray
        The solution vector at the current time t[i]: [theta[i],omega[i]].
    t : float
        The current time t[i].
    a, b, omega0:...
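The derivs cell is cut off above. A complete sketch, under the assumption (consistent with the call sites below) that a is the damping coefficient, b the driving amplitude, and omega0 the driving frequency:

```python
import numpy as np

g = 9.81  # m/s^2
l = 0.5   # length of pendulum, in meters

def derivs(y, t, a, b, omega0):
    # y = [theta, omega]; returns [dtheta/dt, domega/dt].
    theta, omega = y
    dtheta = omega
    domega = -g / l * np.sin(theta) - a * omega + b * np.sin(omega0 * t)
    return [dtheta, domega]

# With no damping or driving, a pendulum balanced at theta = pi stays
# (numerically) at rest:
print(derivs([np.pi, 0.0], 0.0, 0.0, 0.0, 0.0))
```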
Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol ...
# YOUR CODE HERE
#raise NotImplementedError()
y0 = [np.pi, 0]
solution = odeint(derivs, y0, t, args=(0, 0, 0), atol=1e-5, rtol=1e-4)
# YOUR CODE HERE
#raise NotImplementedError()
plt.plot(t, energy(solution), label="$Energy/mass$")
plt.title('Simple Pendulum Energy')
plt.xlabel('time')
plt.ylabel('$Energy/Mass$')
p...
Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Ma...
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
    """Integrate the damped, driven pendulum and make a phase plot of the solution."""
    # YOUR CODE HERE
    #raise NotImplementedError()
    y0 = [-np.pi + 0.1, 0]
    solution = odeint(derivs, y0, t, args=(a, b, omega0), atol=1e-5, rtol=1e-4)
    theta = solution[:, 0]
    ...
Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
plot_pendulum(0.5, 0.0, 0.0)
Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
# YOUR CODE HERE
#raise NotImplementedError()
interact(plot_pendulum, a=(0.0, 1.0, 0.1), b=(0.0, 10.0, 0.1), omega0=(0.0, 10.0, 0.1));
Head model and forward computation
The aim of this tutorial is to serve as a getting-started guide for forward
computation.
For more extensive details and a presentation of the general
concepts of forward modeling, see ch_forward.
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
# the raw file containing the channel location + types
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
# The paths to Freesurfer reconstructions
subjects_dir = data_path + '/subjects'
subject = 'sample' | 0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Computing the forward operator
To compute a forward operator we need:
a -trans.fif file that contains the coregistration info.
a source space
the :term:BEM surfaces
Compute and visualize BEM surfaces
The :term:BEM surfaces are the triangulations of the interfaces between
different tissues needed for forward computati...
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', orientation='coronal')
Visualization the coregistration
The coregistration is operation that allows to position the head and the
sensors in a common coordinate system. In the MNE software the transformation
to align the head and the sensors in stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained wi...
# The transformation file obtained by coregistration
trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True...
Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces:
surface-based source space, where the candidates are confined to a
surface.
volumetric or discrete source space when the candidates are discrete,
arbitrarily located s...
src = mne.setup_source_space(subject, spacing='oct6',
                             subjects_dir=subjects_dir, add_dist=False)
print(src)
The surface based source space src contains two parts, one for the left
hemisphere (4098 locations) and one for the right hemisphere
(4098 locations). Sources can be visualized on top of the BEM surfaces
in purple.
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', src=src, orientation='coronal')
To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0)
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
sphere = (0.0, 0.0, 40.0, 90.0)
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
                                        sphere=sphere)
print(vol_src)
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', src=vol_src, orientation='coronal')
To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the :term:BEM surfaces) you can use the
following.
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
                                        surface=surface)
print(vol_src)
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
                 brain_surfaces='white', s...
With the surface-based source space only sources that lie in the plotted MRI
slices are shown. Let's write a few lines of Mayavi code to see all sources in 3D.
import numpy as np  # noqa
from mayavi import mlab  # noqa
from surfer import Brain  # noqa
brain = Brain('sample', 'lh', 'inflated', subjects_dir=subjects_dir)
surf = brain.geo['lh']
vertidx = np.where(src[0]['inuse'])[0]
mlab.points3d(surf.x[vertidx], surf.y[vertidx],
              surf.z[vertidx], color=(1, 1, 0)...
Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which d...
conductivity = (0.3,)  # for single layer
# conductivity = (0.3, 0.006, 0.3)  # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
                           conductivity=conductivity,
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
Note that the :term:BEM does not involve any use of the trans file. The BEM
only depends on the head geometry and conductivities.
It is therefore independent from the MEG data and the head position.
Let's now compute the forward operator, commonly referred to as the
gain or leadfield matrix.
See :func:mne.make_forward_...
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
                                meg=True, eeg=False, mindist=5.0, n_jobs=2)
print(fwd)
We can explore the content of fwd to access the numpy array that contains
the gain matrix.
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following: | fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape) | 0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
This is equivalent to the following code that explicitly applies the
forward operator to a source estimate composed of the identity operator: | n_dipoles = leadfield.shape[1]
vertices = [src_hemi['vertno'] for src_hemi in fwd_fixed['src']]
stc = mne.SourceEstimate(1e-9 * np.eye(n_dipoles), vertices, tmin=0., tstep=1)
leadfield = mne.apply_forward(fwd_fixed, stc, info).data / 1e-9 | 0.18/_downloads/7df5cd97aa959dd7e2627aba5e552081/plot_forward.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
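The equivalence holds by simple linear algebra: multiplying any gain matrix by the identity returns the matrix itself. A minimal numpy illustration with a made-up random matrix (not real MEG data):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((5, 8))   # stand-in gain matrix: 5 sensors x 8 dipoles
identity_sources = np.eye(8)      # one unit-amplitude dipole per column
recovered = G @ identity_sources  # forward-apply each unit dipole
print(np.allclose(recovered, G))  # True: applying G to the identity returns G
```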
Define the path of the model you want to load, as well as the path of the dataset. | # you may need to change these to link to where your data and checkpoints are actually stored!
# in the default config, model_dir is likely to be /tmp/sketch_rnn/models
data_dir = './kanji'
model_dir = './log'
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)... | jupyter-notebooks/Sketch_RNN_TF_To_JS_Tutorial.ipynb | magenta/magenta-demos | apache-2.0 |
Let's see if our model kind of works by sampling from it: | stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
def get_model_params():
# get trainable params.
model_names = []
model_params = []
model_shapes = []
with sess.as_default():
t_vars = tf.trainable_variables()
for var in t_vars:
param_name = var.name
p = sess.run(var)
... | jupyter-notebooks/Sketch_RNN_TF_To_JS_Tutorial.ipynb | magenta/magenta-demos | apache-2.0 |
The neural network accepts an input vector of length 2. It has 2 output nodes. One node is used to control whether or not to recursively run itself, the other is the real data output. We simply threshold > 0.5 to trigger a recursive call to itself. | ###example output with random initial weights
print( nn(X[0], theta) )
print( nn(X[1], theta) )
print( nn(X[2], theta) )
print( nn(X[3], theta) ) | VariableOutput.ipynb | outlace/Machine-Learning-Experiments | mit |
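The recursion mechanism can be sketched with a toy stand-in (this is not the actual `nn` or `theta` from the notebook, just an illustration of how an output node thresholded at 0.5 can trigger another pass over a modified input):

```python
import numpy as np

def toy_variable_output_nn(x, theta, max_steps=10):
    # toy stand-in: at each step emit one data value plus a "continue"
    # signal; recursion stops once the signal drops to <= 0.5
    outputs = []
    for _ in range(max_steps):        # cap so a bad theta can't loop forever
        s = float(np.dot(x, theta))
        outputs.append(np.tanh(s))    # data output node
        continue_signal = 1.0 / (1.0 + np.exp(-s))  # sigmoid "run again?" node
        if continue_signal <= 0.5:
            break
        x = x * 0.5                   # feed a damped input back in
    return outputs

print(len(toy_variable_output_nn(np.array([1.0, 2.0]), np.array([-0.4, -0.1]))))  # 1
print(len(toy_variable_output_nn(np.array([1.0, 2.0]), np.array([0.4, 0.1]))))    # 10 (hits the cap)
```

With a negative weighted sum the continue signal starts below 0.5 and the output has length one; with a positive sum the sigmoid never falls below 0.5, so the safety cap decides the length.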
Cost Function
Arbitrarily assign a high cost (here, a flat penalty of 3) to any mismatch in output length; otherwise assess the mean squared error. | def costFunction(X, Y, theta):
cost = 0
for i in range(len(X)):
y = Y[i]
m = float(len(X[i]))
hThetaX = nn(X[i], theta)
if len(y) != len(hThetaX):
cost += 3
else:
cost += (1/m) * np.sum(np.abs(y - hThetaX)**2)
return cost | VariableOutput.ipynb | outlace/Machine-Learning-Experiments | mit |
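The per-example logic can be exercised in isolation with a self-contained toy (note one simplification: the notebook normalizes by the input length `len(X[i])`, while this sketch normalizes by the prediction length):

```python
import numpy as np

def toy_cost(pred, y):
    # length mismatch gets a flat penalty of 3; otherwise mean squared error
    if len(pred) != len(y):
        return 3.0
    pred, y = np.asarray(pred, dtype=float), np.asarray(y, dtype=float)
    return float(np.mean(np.abs(y - pred) ** 2))

print(toy_cost([1.0, 2.0], [1.0]))       # 3.0 -- wrong output length
print(toy_cost([1.0, 2.0], [1.0, 4.0]))  # 2.0 -- MSE: (0 + 4) / 2
```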
Genetic Algorithm to Solve Weights: | import random as rn, numpy as np
# [initial population size, mutation rate (1%), num generations (500), solution length (17), winners per generation (20)]
initPop, mutRate, numGen, solLen, numWin = 100, 0.01, 500, 17, 20
#initialize current population to random values within range
curPop = np.random.choice(np.arange(-15,15,step=... | VariableOutput.ipynb | outlace/Machine-Learning-Experiments | mit |
Backends
Quick examples
pandas (fast grouped) _ | # pandas fast grouped implementation ----
from siuba.data import cars
from siuba import _
from siuba.experimental.pd_groups import fast_mutate, fast_filter, fast_summarize
fast_mutate(
cars.groupby('cyl'),
avg_mpg = _.mpg.mean(), # aggregation
hp_per_mpg = _.hp / _.mpg, # elementwise ... | docs/backends.ipynb | machow/siuba | mit |
SQL _ | from siuba import _, mutate, group_by, summarize, show_query
from siuba.sql import LazyTbl
from sqlalchemy import create_engine
# create sqlite db, add pandas DataFrame to it
engine = create_engine("sqlite:///:memory:")
cars.to_sql("cars", engine, if_exists="replace")
# define query
q = (LazyTbl(engine, "cars")
>... | docs/backends.ipynb | machow/siuba | mit |
Supported methods
The table below shows the pandas methods supported by different backends. Note that the regular, ungrouped backend supports all methods, and the fast grouped implementation supports most methods a person could use without having to call the (slow) DataFrame.apply method.
🚧This table is displayed a b... | from siuba import _, mutate
df = pd.DataFrame({
'g': ['a', 'a', 'b'],
'x': [1,2,3],
})
df.assign(y = lambda _: _.x + 1)
mutate(df, y = _.x + 1) | docs/backends.ipynb | machow/siuba | mit |
Siuba verbs also work on grouped DataFrames, but are not always fast. They are the potentially slow, reference implementation. | mutate(
df.groupby('g'),
y = _.x + 1,
z = _.x - _.x.mean()
) | docs/backends.ipynb | machow/siuba | mit |
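For comparison, the same grouped mutate can be written in plain pandas (a sketch of the equivalent result; siuba's grouped output also retains the grouping information):

```python
import pandas as pd

df = pd.DataFrame({'g': ['a', 'a', 'b'], 'x': [1, 2, 3]})

# elementwise ops work directly, but group-demeaning needs an explicit transform
out = df.assign(
    y=df.x + 1,
    z=df.x - df.groupby('g').x.transform('mean'),
)
print(out)
```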
Overview of MEG/EEG analysis with MNE-Python
This tutorial covers the basic EEG/MEG pipeline for event-related analysis:
loading data, epoching, averaging, plotting, and estimating cortical activity
from sensor data. It introduces the core MNE-Python data structures
:class:~mne.io.Raw, :class:~mne.Epochs, :class:~mne.E... | import os
import numpy as np
import mne | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Loading data
^^^^^^^^^^^^
MNE-Python data structures are based around the FIF file format from
Neuromag, but there are reader functions for a wide variety of other
data formats <data-formats>. MNE-Python also has interfaces to a
variety of :doc:publicly available datasets <../../manual/datasets_index>,
whic... | sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
By default, :func:~mne.io.read_raw_fif displays some information about the
file it's loading; for example, here it tells us that there are four
"projection items" in the file along with the recorded data; those are
:term:SSP projectors <projector> calculated to remove environmental noise
from the MEG signals, plu... | print(raw)
print(raw.info) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
:class:~mne.io.Raw objects also have several built-in plotting methods;
here we show the power spectral density (PSD) for each sensor type with
:meth:~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with
:meth:~mne.io.Raw.plot. In the PSD plot, we'll only plot frequencies below
50 Hz (since our data are... | raw.plot_psd(fmax=50)
raw.plot(duration=5, n_channels=30) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Preprocessing
^^^^^^^^^^^^^
MNE-Python supports a variety of preprocessing approaches and techniques
(maxwell filtering, signal-space projection, independent components analysis,
filtering, downsampling, etc); see the full list of capabilities in the
:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll cle... | # set up and fit the ICA
ica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)
ica.fit(raw)
ica.exclude = [1, 2] # details on how we picked these are omitted here
ica.plot_properties(raw, picks=ica.exclude) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Once we're confident about which component(s) we want to remove, we pass them
as the exclude parameter and then apply the ICA to the raw signal. The
:meth:~mne.preprocessing.ICA.apply method requires the raw data to be
loaded into memory (by default it's only read from disk as-needed), so we'll
use :meth:~mne.io.Raw.lo... | orig_raw = raw.copy()
raw.load_data()
ica.apply(raw)
# show some frontal channels to clearly illustrate the artifact removal
chs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',
'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',
'EEG 001', 'EEG 002', 'EEG 00... | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Detecting experimental events
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The sample dataset includes several :term:"STIM" channels <stim channel>
that recorded electrical
signals sent from the stimulus delivery computer (as brief DC shifts /
squarewave pulses). These pulses (often called "triggers") are used in this
dataset t... | events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5 | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
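Because `events` is an ordinary NumPy array, standard boolean indexing pulls out a single condition. A sketch with a made-up array in the same `(sample, _, event_id)` layout (not values from the actual sample dataset):

```python
import numpy as np

events = np.array([[27977, 0, 2],
                   [28345, 0, 3],
                   [28771, 0, 1],
                   [29219, 0, 4],
                   [29652, 0, 2]])
aud_right = events[events[:, 2] == 2]  # boolean mask on the event-ID column
print(len(aud_right))  # 2
```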
The resulting events array is an ordinary 3-column :class:NumPy array
<numpy.ndarray>, with sample number in the first column and integer event ID
in the last column; the middle column is usually ignored. Rather than keeping
track of integer event IDs, we can provide an event dictionary that maps
the integer IDs ... | event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'buttonpress': 32} | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Event dictionaries like this one are used when extracting epochs from
continuous data; the / character in the dictionary keys allows pooling
across conditions by requesting partial condition descriptors (i.e.,
requesting 'auditory' will select all epochs with Event IDs 1 and 2;
requesting 'left' will select all epochs ... | fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'])
fig.subplots_adjust(right=0.7) # make room for the legend | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
For paradigms that are not event-related (e.g., analysis of resting-state
data), you can extract regularly spaced (possibly overlapping) spans of data
by creating events using :func:mne.make_fixed_length_events and then
proceeding with epoching as described in the next section.
Epoching continuous data
^^^^^^^^^^^^^^^^... | reject_criteria = dict(mag=4000e-15, # 4000 fT
grad=4000e-13, # 4000 fT/cm
eeg=150e-6, # 150 μV
eog=250e-6) # 250 μV | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
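Rejection is based on peak-to-peak amplitude within each epoch; the idea amounts to the following sketch (an illustration, not MNE's actual implementation):

```python
import numpy as np

def exceeds_threshold(epoch_data, threshold):
    # epoch_data: (n_channels, n_times); flag the epoch if any channel's
    # peak-to-peak amplitude exceeds the threshold
    ptp = epoch_data.max(axis=1) - epoch_data.min(axis=1)
    return bool((ptp > threshold).any())

quiet = np.zeros((3, 100))
quiet[0, 10] = 100e-6   # 100 uV swing -- under a 150 uV limit
noisy = np.zeros((3, 100))
noisy[1, 50] = 200e-6   # 200 uV swing -- over the limit
print(exceeds_threshold(quiet, 150e-6))  # False
print(exceeds_threshold(noisy, 150e-6))  # True
```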
We'll also pass the event dictionary as the event_id parameter (so we can
work with easy-to-pool event labels instead of the integer event IDs), and
specify tmin and tmax (the time relative to each event at which to
start and end each epoch). As mentioned above, by default
:class:~mne.io.Raw and :class:~mne.Epochs data... | epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,
reject=reject_criteria, preload=True) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Next we'll pool across left/right stimulus presentations so we can compare
auditory versus visual responses. To avoid biasing our signals to the
left or right, we'll use :meth:~mne.Epochs.equalize_event_counts first to
randomly sample epochs from each condition to match the number of epochs
present in the condition wit... | conds_we_care_about = ['auditory/left', 'auditory/right',
'visual/left', 'visual/right']
epochs.equalize_event_counts(conds_we_care_about) # this operates in-place
aud_epochs = epochs['auditory']
vis_epochs = epochs['visual']
del raw, epochs # free up memory | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Like :class:~mne.io.Raw objects, :class:~mne.Epochs objects also have a
number of built-in plotting methods. One is :meth:~mne.Epochs.plot_image,
which shows each epoch as one row of an image map, with color representing
signal magnitude; the average evoked response and the sensor location are
shown below the image: | aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021']) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
<div class="alert alert-info"><h4>Note</h4><p>Both :class:`~mne.io.Raw` and :class:`~mne.Epochs` objects have
:meth:`~mne.Epochs.get_data` methods that return the underlying data
as a :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``
parameter for subselecting which channel(s) to return; `... | frequencies = np.arange(7, 30, 3)
power = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,
freqs=frequencies, decim=3)
power.plot(['MEG 1332']) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Estimating evoked responses
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Now that we have our conditions in aud_epochs and vis_epochs, we can
get an estimate of evoked responses to auditory versus visual stimuli by
averaging together the epochs in each condition. This is as simple as calling
the :meth:~mne.Epochs.average method on the ... | aud_evoked = aud_epochs.average()
vis_evoked = vis_epochs.average()
mne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),
show_legend='upper left',
show_sensors='upper right') | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
We can also get a more detailed view of each :class:~mne.Evoked object
using other plotting methods such as :meth:~mne.Evoked.plot_joint or
:meth:~mne.Evoked.plot_topomap. Here we'll examine just the EEG channels,
and see the classic auditory evoked N100-P200 pattern over dorso-frontal
electrodes, then plot scalp topog... | # sphinx_gallery_thumbnail_number = 13
aud_evoked.plot_joint(picks='eeg')
aud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg') | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Evoked objects can also be combined to show contrasts between conditions,
using the :func:mne.combine_evoked function. A simple difference can be
generated by negating one of the :class:~mne.Evoked objects passed into the
function. We'll then plot the difference wave at each sensor using
:meth:~mne.Evoked.plot_topo: | evoked_diff = mne.combine_evoked([aud_evoked, -vis_evoked], weights='equal')
evoked_diff.pick_types('mag').plot_topo(color='r', legend=False) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Inverse modeling
^^^^^^^^^^^^^^^^
Finally, we can estimate the origins of the evoked activity by projecting the
sensor data into this subject's :term:source space (a set of points either
on the cortical surface or within the cortical volume of that subject, as
estimated by structural MRI scans). MNE-Python supports lot... | # load inverse operator
inverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-meg-oct-6-meg-inv.fif')
inv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)
# set signal-to-noise ratio (SNR) to compute regularization parameter... | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Finally, in order to plot the source estimate on the subject's cortical
surface we'll also need the path to the sample subject's structural MRI files
(the subjects_dir): | # path to subjects' MRI files
subjects_dir = os.path.join(sample_data_folder, 'subjects')
# plot
stc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],
subjects_dir=subjects_dir) | 0.18/_downloads/c92aa91c680730c756234cdbc466c558/plot_introduction.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Numpy arrays
In standard Python, data is stored as lists, and multidimensional data as lists of lists. In numpy, however, we can now work with arrays. To get these arrays, we can use np.asarray to convert a list into an array. Below we take a quick look at how a list behaves differently from an array. | # We first create an array `x`
start = 1
stop = 11
step = 1
x = np.arange(start, stop, step)
print(x) | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
We can also manipulate the array. For example, we can:
Multiply by two: | x * 2 | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
Take the square of all the values in the array: | x ** 2 | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
Or even do some math on it: | (x**2) + (5*x) + (x / 3) | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
If we want to set up an array in numpy, we could use range to make a list and then convert it, but we can also create an array directly in numpy: np.arange builds one from a start, stop, and step, while np.linspace builds one from a start, stop, and number of evenly spaced points, which makes non-integer spacing easy. | print(np.arange(10))
print(np.linspace(1,10,10)) | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
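The difference between the two matters once the spacing is non-integer: np.arange takes a step size and excludes the endpoint, while np.linspace takes a number of points and includes the endpoint by default.

```python
import numpy as np

print(np.arange(0, 1, 0.25))  # [0.   0.25 0.5  0.75]      -- stops before 1
print(np.linspace(0, 1, 5))   # [0.   0.25 0.5  0.75 1.  ] -- includes 1
```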
Last week we had to use a function or a loop to carry out math on a list. However, with numpy we can do this much more simply: as long as we're working with an array, mathematical operations apply to every element at once. | x = np.arange(10)
print(x)
print(x**2) | notebooks/Week_05/05_Numpy_Matplotlib.ipynb | VandyAstroML/Vanderbilt_Computational_Bootcamp | mit |
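The vectorized expression replaces the explicit loop we would have written before, and both give the same result:

```python
import numpy as np

x = np.arange(10)
squares_loop = np.array([xi ** 2 for xi in x])  # element-by-element loop
squares_vec = x ** 2                            # whole-array operation
print(np.array_equal(squares_loop, squares_vec))  # True
```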