Hugging Face BERT models and tokenizers

We'll illustrate with the BERT-base cased model:

```python
weights_name = 'bert-base-cased'
```
finetuning.ipynb
cgpotts/cs224u
apache-2.0
There are lots of other options for pretrained weights; see this Hugging Face directory. Next, we specify a tokenizer and a model that match both each other and our choice of pretrained weights:
```python
bert_tokenizer = BertTokenizer.from_pretrained(weights_name)
bert_model = BertModel.from_pretrained(weights_name)
```
For modeling (as opposed to creating static representations), we will mostly process examples in batches – generally very small ones, as these models consume a lot of memory. Here's a small batch of texts to use as the starting point for illustrations:
```python
example_texts = [
    "Encode sentence 1. [SEP] And sentence 2!",
    "Bert knows Snuffleupagus"]
```
We will often need to pad (and perhaps truncate) token lists so that we can work with fixed-dimensional tensors. The batch_encode_plus method has a lot of options for doing this:
```python
example_ids = bert_tokenizer.batch_encode_plus(
    example_texts,
    add_special_tokens=True,
    return_attention_mask=True,
    padding='longest')

example_ids.keys()
```
The token_type_ids field is used for multi-text inputs like NLI. The 'input_ids' field gives the indices for each of the two examples:
example_ids['input_ids']
Notice that the final two tokens of the second example are pad tokens. For fine-tuning, we want to avoid attending to padded tokens. The 'attention_mask' captures the needed mask, which we'll be able to feed directly to the pretrained BERT model:
example_ids['attention_mask']
Finally, we can run these indices and masks through the pretrained model:
```python
X_example = torch.tensor(example_ids['input_ids'])
X_example_mask = torch.tensor(example_ids['attention_mask'])

with torch.no_grad():
    reps = bert_model(X_example, attention_mask=X_example_mask)
```
Hugging Face BERT models create a special pooler_output representation: the final representation above the [CLS] token, extended with a single layer of parameters:
reps.pooler_output.shape
We have two examples, each represented by a single vector of dimension 768, which is $d_{model}$ for BERT-base in the notation of the original Transformer paper. This is an easy basis for fine-tuning, as we will see. We can also access the final output for each state:
reps.last_hidden_state.shape
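The pooler_output described above is just the final [CLS] state passed through one dense layer with a tanh nonlinearity. A minimal numpy sketch of that computation — the cls_state, W, and b values here are random stand-ins, not the real pretrained pooler parameters:

```python
import numpy as np

d_model = 768  # hidden size of BERT-base

# Stand-in values: a fake final [CLS] state and random "pooler" weights.
rng = np.random.default_rng(0)
cls_state = rng.normal(size=d_model)            # plays the role of last_hidden_state[0, 0]
W = rng.normal(scale=0.02, size=(d_model, d_model))
b = np.zeros(d_model)

# pooler_output = tanh(W @ cls_state + b): one vector per example.
pooler_output = np.tanh(W @ cls_state + b)
print(pooler_output.shape)  # (768,)
```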
Here, we have 2 examples, each padded to the length of the longer one (12), and each of those representations has dimension 768. These representations can be used for sequence modeling, or pooled somehow for simple classifiers. Those are all the essential ingredients for working with these parameters in Hugging Face. O...
```python
def bert_phi(text):
    input_ids = bert_tokenizer.encode(text, add_special_tokens=True)
    X = torch.tensor([input_ids])
    with torch.no_grad():
        reps = bert_model(X)
    return reps.last_hidden_state.squeeze(0).numpy()
```
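bert_phi returns one vector per token; as noted above, these per-token states can be pooled for simple classifiers. A sketch of mask-aware mean pooling over a stand-in array (real states would come from bert_phi, and the mask from the tokenizer):

```python
import numpy as np

# Stand-in: 5 token states of dimension 4, where only the first 3 tokens
# are real and the last 2 correspond to padding.
reps = np.arange(20, dtype=float).reshape(5, 4)
mask = np.array([1, 1, 1, 0, 0], dtype=float)

# Mask-aware mean: sum only unmasked rows, divide by the number of real tokens.
pooled = (reps * mask[:, None]).sum(axis=0) / mask.sum()
print(pooled)  # mean of the first three rows: [4. 5. 6. 7.]
```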
Simple feed-forward experiment

For a simple feed-forward experiment, we can get the representations of the [CLS] tokens and use them as the inputs to a shallow neural network:
```python
def bert_classifier_phi(text):
    reps = bert_phi(text)
    #return reps.mean(axis=0)  # Another good, easy option.
    return reps[0]
```
Next we read in the SST train and dev splits:
```python
train = sst.train_reader(SST_HOME)
dev = sst.dev_reader(SST_HOME)
```
Split the input/output pairs out into separate lists:
```python
X_str_train = train.sentence.values
y_train = train.label.values

X_str_dev = dev.sentence.values
y_dev = dev.label.values
```
In the next step, we featurize all of the examples. These steps are likely to be the slowest in these experiments:
```python
%time X_train = [bert_classifier_phi(text) for text in X_str_train]

%time X_dev = [bert_classifier_phi(text) for text in X_str_dev]
```
Now that all the examples are featurized, we can fit a model and evaluate it:
```python
model = TorchShallowNeuralClassifier(
    early_stopping=True,
    hidden_dim=300)

%time _ = model.fit(X_train, y_train)

preds = model.predict(X_dev)

print(classification_report(y_dev, preds, digits=3))
```
A feed-forward experiment with the sst module

It is straightforward to conduct experiments like the above using sst.experiment, which will enable you to do a wider range of experiments without writing or copy-pasting a lot of code.
```python
def fit_shallow_network(X, y):
    mod = TorchShallowNeuralClassifier(
        hidden_dim=300,
        early_stopping=True)
    mod.fit(X, y)
    return mod

%%time
_ = sst.experiment(
    sst.train_reader(SST_HOME),
    bert_classifier_phi,
    fit_shallow_network,
    assess_dataframes=sst.dev_reader(SST_HOME),
    v...
```
An RNN experiment with the sst module

We can also use BERT representations as the input to an RNN. There is just one key change from how we used these models before: previously, we would feed in lists of tokens, and they would be converted to indices into a fixed embedding space. This presumes that all words have the...
```python
def fit_rnn(X, y):
    mod = TorchRNNClassifier(
        vocab=[],
        early_stopping=True,
        use_embedding=False)  # Pass in the BERT hidden states directly!
    mod.fit(X, y)
    return mod

%%time
_ = sst.experiment(
    sst.train_reader(SST_HOME),
    bert_phi,
    fit_rnn,
    assess_dataframes=sst.dev_r...
```
BERT fine-tuning with Hugging Face

The above experiments are quite successful – BERT gives us a reliable boost compared to other methods we've explored for the SST task. However, we might expect to do even better if we fine-tune the BERT parameters as part of fitting our SST classifier. To do that, we need to incorpora...
```python
class HfBertClassifierModel(nn.Module):
    def __init__(self, n_classes, weights_name='bert-base-cased'):
        super().__init__()
        self.n_classes = n_classes
        self.weights_name = weights_name
        self.bert = BertModel.from_pretrained(self.weights_name)
        self.bert.train()
        self.hidden...
```
As you can see, self.bert does the heavy-lifting: it reads in all the pretrained BERT parameters, and I've specified self.bert.train() just to make sure that these parameters can be updated during our training process. In forward, self.bert is used to process inputs, and then pooler_output is fed into self.classifier_...
```python
class HfBertClassifier(TorchShallowNeuralClassifier):
    def __init__(self, weights_name, *args, **kwargs):
        self.weights_name = weights_name
        self.tokenizer = BertTokenizer.from_pretrained(self.weights_name)
        super().__init__(*args, **kwargs)
        self.params += ['weights_name']

    def build...
```
HfBertClassifier experiment

That's it! Let's see how we do on the SST binary, root-only problem. Because fine-tuning is expensive, we'll conduct a modest hyperparameter search and run the model for just one epoch per setting evaluation, as we did when assessing NLI models.
```python
def bert_fine_tune_phi(text):
    return text

def fit_hf_bert_classifier_with_hyperparameter_search(X, y):
    basemod = HfBertClassifier(
        weights_name='bert-base-cased',
        batch_size=8,  # Small batches to avoid memory overload.
        max_iter=1,  # We'll search based on 1 iteration for efficiency.
        ...
```
And now on to the final test-set evaluation, using the best model from above:
```python
optimized_bert_classifier = bert_classifier_xval['model']

# Remove the rest of the experiment results to clear out some memory:
del bert_classifier_xval

def fit_optimized_hf_bert_classifier(X, y):
    optimized_bert_classifier.max_iter = 1000
    optimized_bert_classifier.fit(X, y)
    return optimized_bert_classifie...
```
Vega datasets

Before going into the perception experiment, let's first talk about some handy datasets that you can play with. It's nice to have clean datasets handy to practice data visualization. There is a nice small package called vega-datasets, from the altair project. You can install the package by running $ pip...
```python
from vega_datasets import data

data.list_datasets()
```
m04-perception/lab.ipynb
yy/dviz-course
mit
or you can work with only smaller, local datasets.
```python
from vega_datasets import local_data

local_data.list_datasets()
```
Ah, we have the anscombe data here! Let's see the description of the dataset.
local_data.anscombe.description
Anscombe's quartet dataset

What does the actual data look like? Very conveniently, calling the dataset returns a pandas DataFrame.
```python
df = local_data.anscombe()
df.head()
```
Q1: can you draw a scatterplot of the dataset "I"? You can filter the dataframe based on the Series column and use the plot function that you used for Snow's map.
# TODO: put your code here
Some histograms with pandas

Let's look at a slightly more complicated dataset.
```python
car_df = local_data.cars()
car_df.head()
```
Pandas provides useful summary functions. It identifies numerical data columns and provides you with a table of summary statistics.
car_df.describe()
If you ask to draw a histogram, you get all of them. :)
car_df.hist()
Well, this is too small. You can check out the documentation and change the size of the figure. Q2: by consulting the documentation, can you make the figure larger so that we can see all the labels clearly? Then make the layout 2 x 3 rather than 3 x 2, and change the number of bins to 20.
# TODO: put your code here
Your own psychophysics experiment!

Let's do an experiment! The procedure is as follows:

1. Generate a random number between [1, 10];
2. Use a horizontal bar to represent the number, i.e., the length of the bar is equal to the number;
3. Guess the length of the bar by comparing it to two other bars with length 1 and 10 respecti...
```python
import random
import time

import numpy as np

l_short_bar = 1
l_long_bar = 10

perceived_length_list = []
actual_length_list = []
```
Perception of length

Let's run the experiment. The random module in Python provides various random number generators, and the random.uniform(a, b) function returns a floating-point number in [a, b]. We can plot horizontal bars using the pyplot.barh() function. Using this function, we can produce a bar graph that looks l...
```python
mystery_length = random.uniform(1, 10)  # generate a number between 1 and 10. this is the *actual* length.

plt.barh(np.arange(3), [l_short_bar, mystery_length, l_long_bar], align='center')
plt.yticks(np.arange(3), ('1', '?', '10'))
plt.xticks([])  # no hint!
```
Btw, np.arange is used to create a simple integer array, [0, 1, 2].
np.arange(3)
Now let's define a function to perform the experiment once. When you run this function, it picks a random number between 1.0 and 10.0 and shows the bar chart. Then it asks you to input your estimate of the length of the middle bar. It then saves that number to the perceived_length_list and the actual answer to the actua...
```python
def run_exp_once():
    mystery_length = random.uniform(1, 10)  # generate a number between 1 and 10.

    plt.barh(np.arange(3), [l_short_bar, mystery_length, l_long_bar],
             height=0.5, align='center')
    plt.yticks(np.arange(3), ('1', '?', '10'))
    plt.xticks([])  # no hint!
    plt.show()

    try:
        perc...
```
Now, run the experiment many times to gather your data. Check the two lists to make sure that you have the proper dataset. The length of the two lists should be the same.
# TODO: Run your experiment many times here
Plotting the result

Now we can draw a scatter plot of perceived versus actual length. Matplotlib's scatter() function will do this; it is the backend of pandas' scatter plot. Here is an example of how to use scatter:
plt.scatter(x=[1,5,10], y=[1,10, 5])
Q3: Now plot your result using the scatter() function. You should also use plt.title(), plt.xlabel(), and plt.ylabel() to label your axes and the plot itself.
# TODO: put your code here
After plotting, let's fit the relation between actual and perceived lengths using a power function. We can do this easily with curve_fit(f, x, y) from SciPy, which fits the function f to the data $x$ and $y$. In our case, $f = a x^b + c$. For instance, we can check whether this works by creating a fake dataset that fol...
```python
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.power(x, b) + c

x = np.arange(20)   # [0, 1, 2, 3, ..., 19]
y = np.power(x, 2)  # [0, 1, 4, 9, ...]

popt, pcov = curve_fit(func, x, y)
print('{:.2f} x^{:.2f} + {:.2f}'.format(*popt))
```
In order to plot the fitted function and check the relationship between the actual and perceived lengths, you can use two variables x and y, where x is a series of continuous numbers. For example, if your x axis ranges from 1 to 9, then x could be np.linspace(1, 10, 50). The ...
# TODO: your code here
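The recipe above can be sketched end to end with synthetic data standing in for your experiment results (the values 1.2, 0.9, and 0.5 below are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.power(x, b) + c

# Synthetic stand-in for (actual, perceived) pairs following a power law.
actual = np.linspace(1, 10, 30)
perceived = 1.2 * actual ** 0.9 + 0.5

popt, _ = curve_fit(func, actual, perceived)

# Dense grid for a smooth fitted curve; plt.scatter(actual, perceived)
# followed by plt.plot(x, y) would overlay the data and the fit.
x = np.linspace(1, 10, 50)
y = func(x, *popt)
print(np.round(popt, 2))
```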
Perception of area

Similar to the experiment above, we now represent a random number as a circle, whose area is equal to the number. First, calculate the radius of a circle from its area and then plot it using the Circle() function. plt.Circle((0,0), r) will plot a circle centered at (0,0) with radius r.
```python
n1 = 0.005
n2 = 0.05

radius1 = np.sqrt(n1/np.pi)  # area = pi * r * r
radius2 = np.sqrt(n2/np.pi)
random_radius = np.sqrt(n1*random.uniform(1, 10)/np.pi)

plt.axis('equal')
plt.axis('off')

circ1 = plt.Circle((0, 0), radius1, clip_on=False)
circ2 = plt.Circle((4*radius2, 0), radius2, clip_on=False)
rand_circ = ...
```
Let's have two lists for this experiment.
```python
perceived_area_list = []
actual_area_list = []
```
And define a function for the experiment.
```python
def run_area_exp_once(n1=0.005, n2=0.05):
    radius1 = np.sqrt(n1/np.pi)  # area = pi * r * r
    radius2 = np.sqrt(n2/np.pi)

    mystery_number = random.uniform(1, 10)
    random_radius = np.sqrt(n1*mystery_number/math.pi)

    plt.axis('equal')
    plt.axis('off')
    circ1 = plt.Circle( (0,0), radius...
```
Q5: Now you can run the experiment many times, plot the result, and fit a power-law curve!
# TODO: put your code here. You can use multiple cells.
The problem of true peak estimation

The following widget demonstrates two intersample peak detection techniques:

- signal upsampling;
- parabolic interpolation.

The accuracy of both methods can be assessed in real time by shifting the sampling points in a sinc function and evaluating the error produced by both systems.
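The parabolic-interpolation branch fits a parabola through a sample and its two neighbours. A minimal numpy sketch, using the standard three-point vertex formula (offset = 0.5*(α−γ)/(α−2β+γ)) rather than the widget's actual implementation:

```python
import numpy as np

def parabolic_peak(y, i):
    """Refine the peak at index i of y using its two neighbours.

    Returns (position, height) of the vertex of the parabola through
    (i-1, y[i-1]), (i, y[i]), (i+1, y[i+1]).
    """
    alpha, beta, gamma = y[i - 1], y[i], y[i + 1]
    offset = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
    height = beta - 0.25 * (alpha - gamma) * offset
    return i + offset, height

# A sampled parabola with its true vertex at x = 2.5, height 6.25:
x = np.arange(5, dtype=float)
y = -(x - 2.5) ** 2 + 6.25
pos, height = parabolic_peak(y, int(np.argmax(y)))
print(pos, height)  # 2.5 6.25
```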
```python
# Parameters
duration = 10  # s
fs = 1  # Hz
k = 1.  # amplitude
oversamplingFactor = 4  # factor of oversampling for the real signal

nSamples = fs * duration
time = np.arange(-nSamples/2, nSamples/2, 2 ** -oversamplingFactor, dtype='float')
samplingPoints = time[::2 ** oversamplingFactor]

def shifted_...
```
src/examples/tutorial/example_truepeakdetector.ipynb
carthach/essentia
agpl-3.0
As can be seen from the widget, the oversampling strategy generates a smaller error in most cases.

The ITU-R BS.1770 approach

The ITU-R BS.1770 recommendation proposes the following signal chain based on the oversampling strategy: -12.04 dB --> x4 oversample --> LowPass --> abs() --> 20 * log1...
```python
fs = 44100.
eps = np.finfo(np.float32).eps

audio_dir = '../../audio/'
audio = es.MonoLoader(filename='{}/{}'.format(audio_dir, 'recorded/distorted.wav'),
                     sampleRate=fs)()
times = np.linspace(0, len(audio) / fs, len(audio))

peakLocations, output = es.TruePeakDetector(version...
```
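The oversampling idea at the heart of the chain above can be sketched from scratch with scipy's polyphase resampler; this is only an illustration of the x4-oversampling step, not the full BS.1770 chain (the attenuation and low-pass stages are omitted, and the tone parameters are made up):

```python
import numpy as np
from scipy.signal import resample_poly

n = np.arange(64)
# A tone at a quarter of the sample rate with a 45-degree phase offset:
# every sample lands at +/-0.707, but the continuous waveform peaks at 1.0
# *between* samples.
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

sample_peak = np.max(np.abs(x))      # ~0.707: underestimates the true peak
oversampled = resample_poly(x, 4, 1)  # x4 oversampling, as in BS.1770
true_peak = np.max(np.abs(oversampled))
print(round(sample_peak, 3), round(true_peak, 2))
```

The resampler's interpolation filter fills in the intersample values, so the oversampled maximum approaches the continuous waveform's true peak of 1.0.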
How MNE uses FreeSurfer's outputs

This tutorial explains how MRI coordinate frames are handled in MNE-Python, and how MNE-Python integrates with FreeSurfer for handling MRI data and source space data in general. As usual, we'll start by importing the necessary packages; for this tutorial that includes :mod:`nibabel` to ha...
```python
import os

import numpy as np
import nibabel
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects

import mne
from mne.transforms import apply_trans
from mne.io.constants import FIFF
```
0.24/_downloads/bdc8ac519d8f54d70a73a5e0de598566/50_background_freesurfer_mne.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
MRI coordinate frames

Let's start out by looking at the sample subject MRI. Following standard FreeSurfer convention, we look at :file:`T1.mgz`, which gets created from the original MRI :file:`sample/mri/orig/001.mgz` when you run the FreeSurfer command `recon-all <https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all>`_...
```python
data_path = mne.datasets.sample.data_path()
subjects_dir = os.path.join(data_path, 'subjects')
subject = 'sample'

t1_fname = os.path.join(subjects_dir, subject, 'mri', 'T1.mgz')
t1 = nibabel.load(t1_fname)
t1.orthoview()
```
Notice that the axes in the :meth:`~nibabel.spatialimages.SpatialImage.orthoview` figure are labeled L-R, S-I, and P-A. These reflect the standard RAS (right-anterior-superior) coordinate system that is widely used in MRI imaging. If you are unfamiliar with RAS coordinates, see the excellent nibabel tutorial :doc:nibabel...
```python
data = np.asarray(t1.dataobj)
print(data.shape)
```
These data are voxel intensity values. Here they are unsigned integers in the range 0-255, though in general they can be floating point values. A value data[i, j, k] at a given index triplet (i, j, k) corresponds to some real-world physical location (x, y, z) in space. To get its physical location, first we have to cho...
```python
print(t1.affine)
vox = np.array([122, 119, 102])
xyz_ras = apply_trans(t1.affine, vox)
print('Our voxel has real-world coordinates {}, {}, {} (mm)'
      .format(*np.round(xyz_ras, 3)))
```
If you have a point (x, y, z) in scanner-native RAS space and you want the corresponding voxel number, you can get it using the inverse of the affine. This involves some rounding, so it's possible to end up off by one voxel if you're not careful:
```python
ras_coords_mm = np.array([1, -17, -18])
inv_affine = np.linalg.inv(t1.affine)
i_, j_, k_ = np.round(apply_trans(inv_affine, ras_coords_mm)).astype(int)
print('Our real-world coordinates correspond to voxel ({}, {}, {})'
      .format(i_, j_, k_))
```
Let's write a short function to visualize where our voxel lies in an image, and annotate it in RAS space (rounded to the nearest millimeter):
```python
def imshow_mri(data, img, vox, xyz, suptitle):
    """Show an MRI slice with a voxel annotated."""
    i, j, k = vox
    fig, ax = plt.subplots(1, figsize=(6, 6))
    codes = nibabel.orientations.aff2axcodes(img.affine)
    # Figure out the title based on the code of this axis
    ori_slice = dict(P='Coronal', A='Coron...
```
Notice that the axis scales (i, j, and k) are still in voxels (ranging from 0-255); it's only the annotation text that we've translated into real-world RAS in millimeters.

"MRI coordinates" in MNE-Python: FreeSurfer surface RAS

While :mod:`nibabel` uses scanner RAS (x, y, z) coordinates, FreeSurfer uses a slightly differ...
```python
Torig = t1.header.get_vox2ras_tkr()
print(t1.affine)
print(Torig)
xyz_mri = apply_trans(Torig, vox)
imshow_mri(data, t1, vox, dict(MRI=xyz_mri), 'MRI slice')
```
Knowing these relationships and being mindful about transformations, we can get from a point in any given space to any other space. Let's start out by plotting the nasion on a sagittal MRI slice:
```python
fiducials = mne.coreg.get_mni_fiducials(subject, subjects_dir=subjects_dir)
nasion_mri = [d for d in fiducials
              if d['ident'] == FIFF.FIFFV_POINT_NASION][0]
print(nasion_mri)  # note it's in Freesurfer MRI coords
```
When we print the nasion, it displays as a DigPoint and shows its coordinates in millimeters, but beware that the underlying data is actually stored in meters, so before transforming and plotting we'll convert to millimeters:
```python
nasion_mri = nasion_mri['r'] * 1000  # meters → millimeters
nasion_vox = np.round(
    apply_trans(np.linalg.inv(Torig), nasion_mri)).astype(int)
imshow_mri(data, t1, nasion_vox, dict(MRI=nasion_mri),
           'Nasion estimated from MRI transform')
```
We can also take the digitization point from the MEG data, which is in the "head" coordinate frame. Let's look at the nasion in the head coordinate frame:
```python
info = mne.io.read_info(
    os.path.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
nasion_head = [d for d in info['dig']
               if d['kind'] == FIFF.FIFFV_POINT_CARDINAL and
               d['ident'] == FIFF.FIFFV_POINT_NASION][0]
print(nasion_head)  # note it's in "head" coordinates
```
.. sidebar:: Head coordinate frame The head coordinate frame in MNE is the "Neuromag" head coordinate frame. The origin is given by the intersection between a line connecting the LPA and RPA and the line orthogonal to it that runs through the nasion. It is also in RAS orientation, meaning that +X runs through the ...
```python
trans = mne.read_trans(
    os.path.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif'))

# first we transform from head to MRI, and *then* convert to millimeters
nasion_dig_mri = apply_trans(trans, nasion_head['r']) * 1000

# ...then we can use Torig to convert MRI to voxels:
nasion_dig_vox = np.round(
    ...
```
Using FreeSurfer's surface reconstructions

An important part of what FreeSurfer does is provide cortical surface reconstructions. For example, let's load and view the white surface of the brain. This is a 3D mesh defined by a set of vertices (conventionally called rr) with shape (n_vertices, 3) and a set of triangles (...
```python
fname = os.path.join(subjects_dir, subject, 'surf', 'rh.white')
rr_mm, tris = mne.read_surface(fname)
print(f'rr_mm.shape == {rr_mm.shape}')
print(f'tris.shape == {tris.shape}')
print(f'rr_mm.max() = {rr_mm.max()}')  # just to show that we are in mm
```
Let's actually plot it:
```python
renderer = mne.viz.backends.renderer.create_3d_figure(
    size=(600, 600), bgcolor='w', scene=False)
gray = (0.5, 0.5, 0.5)
renderer.mesh(*rr_mm.T, triangles=tris, color=gray)
view_kwargs = dict(elevation=90, azimuth=0)
mne.viz.set_3d_view(
    figure=renderer.figure, distance=350, focalpoint=(0., 0., 40.),
    **view...
```
We can also plot the mesh on top of an MRI slice. The mesh surfaces are defined in millimeters in the MRI (FreeSurfer surface RAS) coordinate frame, so we can convert them to voxels by applying the inverse of the Torig transform:
```python
rr_vox = apply_trans(np.linalg.inv(Torig), rr_mm)
fig = imshow_mri(data, t1, vox, {'Scanner RAS': xyz_ras}, 'MRI slice')
# Based on how imshow_mri works, the "X" here is the last dim of the MRI vol,
# the "Y" is the middle dim, and the "Z" is the first dim, so now that our
# points are in the correct coordinate frame, ...
```
This is the method used by :func:`mne.viz.plot_bem` to show the BEM surfaces.

Cortical alignment (spherical)

A critical function provided by FreeSurfer is spherical surface alignment of cortical surfaces, maximizing sulcal-gyral alignment. FreeSurfer first expands the cortical surface to a sphere, then aligns it optimall...
```python
renderer_kwargs = dict(bgcolor='w', smooth_shading=False)
renderer = mne.viz.backends.renderer.create_3d_figure(
    size=(800, 400), scene=False, **renderer_kwargs)
curvs = [
    (mne.surface.read_curvature(os.path.join(
        subjects_dir, subj, 'surf', 'rh.curv'), binary=False) > 0).astype(float)
    for s...
```
Let's look a bit more closely at the spherical alignment by overlaying the two spherical meshes as wireframes and zooming way in (the purple points are separated by about 1 mm):
```python
cyan = '#66CCEE'
purple = '#AA3377'

renderer = mne.viz.backends.renderer.create_3d_figure(
    size=(800, 800), scene=False, **renderer_kwargs)
fnames = [os.path.join(subjects_dir, subj, 'surf', 'rh.sphere')
          for subj in ('sample', 'fsaverage')]
colors = [cyan, purple]
for name, color in zip(fnames, colors):
    ...
```
You can see that the fsaverage (purple) mesh is uniformly spaced, and the mesh for subject "sample" (in cyan) has been deformed along the spherical surface by FreeSurfer. This deformation is designed to optimize the sulcal-gyral alignment.

Surface decimation

These surfaces have a lot of vertices, and in general we only...
```python
src = mne.read_source_spaces(os.path.join(subjects_dir, 'sample', 'bem',
                                           'sample-oct-6-src.fif'))
print(src)

blue = '#4477AA'
renderer = mne.viz.backends.renderer.create_3d_figure(
    size=(800, 800), scene=False, **renderer_kwargs)
rr_sph, _ = mne.read_surface(fnames[0])
for...
```
We can also then look at how these two meshes compare by plotting the original, high-density mesh as well as our decimated mesh white surfaces.
```python
renderer = mne.viz.backends.renderer.create_3d_figure(
    size=(800, 400), scene=False, **renderer_kwargs)
y_shifts = [-125, 125]
tris = [src[1]['tris'], src[1]['use_tris']]
for y_shift, tris in zip(y_shifts, tris):
    this_rr = src[1]['rr'] * 1000. + [0, y_shift, -40]
    renderer.mesh(*this_rr.T, triangles=tris, co...
```
<div class="alert alert-danger"><h4>Warning</h4><p>Some source space vertices can be removed during forward computation. See `tut-forward` for more information.</p></div>

FreeSurfer's MNI affine transformation

In addition to surface-based approaches, FreeSurfer also provides a simple affine coregistration of each s...
```python
brain = mne.viz.Brain('sample', 'lh', 'white', subjects_dir=subjects_dir,
                      background='w')
xyz = np.array([[-55, -10, 35]])
brain.add_foci(xyz, hemi='lh', color='k')
brain.show_view('lat')
```
We can take this point and transform it to MNI space:
```python
mri_mni_trans = mne.read_talxfm(subject, subjects_dir)
print(mri_mni_trans)
xyz_mni = apply_trans(mri_mni_trans, xyz / 1000.) * 1000.
print(np.round(xyz_mni, 1))
```
And because fsaverage is special in that it's already in MNI space (its MRI-to-MNI transform is identity), it should land in the equivalent anatomical location:
```python
brain = mne.viz.Brain('fsaverage', 'lh', 'white', subjects_dir=subjects_dir,
                      background='w')
brain.add_foci(xyz_mni, hemi='lh', color='k')
brain.show_view('lat')
```
One-dimensional homogeneous problems in a semi-infinite medium

Page 54, "Heat Conduction", M. Özisik, 1980. The outside temperature is set to zero. The general equation for the temperature is:
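The equation referred to here is presumably the one-dimensional heat equation for a homogeneous medium, with the diffusivity defined as in the code below:

```latex
\frac{\partial T}{\partial t} = \alpha \, \frac{\partial^2 T}{\partial x^2},
\qquad \alpha = \frac{k}{\rho C_p},
\qquad 0 < x < \infty, \; t > 0
```

with $T(0, t) = 0$ for the fixed-temperature boundary condition used below.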
```python
from scipy.integrate import quad

rhoCp = 1400e3   # density * heat capacity, J/m3/K
k = 1.75         # conductivity, W/m/K
alpha = k/rhoCp  # diffusivity, m2/s

conditionlimite = 'T_fixe'  # 'T_fixe', 'adia', '3thd'

fun_F = lambda x: 100*np.exp(-x/0.3)
I_right = lambda beta: quad(fun_F, 0, np.inf, weight='sin', wvar=b...
```
theo_wall_inertia.ipynb
xdze2/thermique_appart
mit
With the Laplace transform (page 275)

With odeint
```python
L = 1  # m
N = 20
X = np.linspace(0, L, N)
dx = L/(N-1)

T = np.zeros_like(X)
Tzero = np.zeros_like(X)
#Tzero = 1 + Tzero

def flux_in(T, t):
    """Incoming flux.
    T: surface temperature, °C
    t: time, s
    """
    w = 2*np.pi/(60*60*24)
    F = 10*(12*np.cos(w*t) - T)
    return F

de...
```
The spike times of all descending commands along the 5000 ms of simulation is shown in Fig. \ref{fig:spikesDescRenshaw}.
```python
plt.figure()
plt.plot(pools[1].poolTerminalSpikes[:, 0],
         pools[1].poolTerminalSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Descending Command index')
```
ExampleNotebooks/MNPoolWithRenshawCells-Copy1.ipynb
rnwatanabe/projectPR
gpl-3.0
The spike times of the MNs along the 5000 ms of simulation is shown in Fig. \ref{fig:spikesMNRenshaw}.
```python
plt.figure()
plt.plot(pools[0].poolTerminalSpikes[:, 0],
         pools[0].poolTerminalSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Motor Unit index')
```
The spike times of the Renshaw cells along the 5000 ms of simulation is shown in Fig. \ref{fig:spikesRenshawRenshaw}.
```python
plt.figure()
plt.plot(pools[2].poolSomaSpikes[:, 0],
         pools[2].poolSomaSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Renshaw cell index')
```
The muscle force during the simulation is shown in Fig. \ref{fig:forceRenshaw}.
```python
plt.figure()
plt.plot(t, pools[0].Muscle.force, '-')
plt.xlabel('t (ms)')
plt.ylabel('Muscle force (N)')
```
1. Importing the Menyanthes file

Import the Menyanthes file with observations and stresses. Then plot the observations together with the different stresses in the Menyanthes file.
```python
# how to use it?
fname = '../data/MenyanthesTest.men'
meny = ps.read.MenyData(fname)

# plot some series
f1, axarr = plt.subplots(len(meny.IN)+1, sharex=True)
oseries = meny.H['Obsevation well']["values"]
oseries.plot(ax=axarr[0])
axarr[0].set_title(meny.H['Obsevation well']["Name"])
for i, val in enumerate(meny.IN.ite...
```
examples/notebooks/4_menyanthes_file.ipynb
gwtsa/gwtsa
mit
2. Run a model Make a model with precipitation, evaporation and three groundwater extractions.
# Create the time series model ml = ps.Model(oseries) # Add precipitation IN = meny.IN['Precipitation']['values'] IN.index = IN.index.round("D") IN2 = meny.IN['Evaporation']['values'] IN2.index = IN2.index.round("D") ts = ps.StressModel2([IN, IN2], ps.Gamma, 'Recharge') ml.add_stressmodel(ts) # Add well extraction 1 ...
examples/notebooks/4_menyanthes_file.ipynb
gwtsa/gwtsa
mit
3. Plot the decomposition Show the decomposition of the groundwater head, by plotting the influence on groundwater head of each of the stresses.
ax = ml.plots.decomposition(ytick_base=1.) ax[0].set_title('Observations vs simulation') ax[0].legend() ax[0].figure.tight_layout(pad=0)
examples/notebooks/4_menyanthes_file.ipynb
gwtsa/gwtsa
mit
2: Converting Data Into A List Of Lists The list needs to be converted to a more structured format so that we can analyze it.
def read_csv(filename): string_data = open(filename).read() string_list = string_data.split("\n")[1:] final_list = [] for row in string_list: string_fields = row.split(",") int_fields = [] for value in string_fields: int_fields.append(int(value)) final_li...
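The same parsing logic can be sketched in a condensed, self-contained form, run on an inline sample string instead of the file (the sample rows here are hypothetical, since the actual CDC data isn't shown):

```python
def read_csv_string(string_data):
    """Parse a comma-separated string (with a header line) into lists of ints."""
    final_list = []
    for row in string_data.strip().split("\n")[1:]:  # skip the header line
        final_list.append([int(value) for value in row.split(",")])
    return final_list

# Hypothetical sample in the same shape as the births file.
sample = "year,month,date_of_month,day_of_week,births\n1994,1,1,6,8096\n1994,1,2,7,7772"
print(read_csv_string(sample))
# [[1994, 1, 1, 6, 8096], [1994, 1, 2, 7, 7772]]
```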
jupyter-files/GP02.ipynb
JasonMDev/guidedprojects
mit
3: Calculating Number Of Births Each Month Now that the data is in a more usable format, we can start to analyze it.
def month_births(data): births_per_month = {} for row in data: month = row[1] births = row[4] if month in births_per_month: births_per_month[month] = births_per_month[month] + births else: births_per_month[month] = births return births_per_month ...
jupyter-files/GP02.ipynb
JasonMDev/guidedprojects
mit
4: Calculating Number Of Births Each Day Of Week Let's now create a function that calculates the total number of births for each unique day of the week.
def dow_births(data): births_per_dow = {} for row in data: dow = row[3] births = row[4] if dow in births_per_dow: births_per_dow[dow] = births_per_dow[dow] + births else: births_per_dow[dow] = births return births_per_dow cdc_dow_births = dow...
jupyter-files/GP02.ipynb
JasonMDev/guidedprojects
mit
5: Creating A More General Function It's better to create a single function that works for any column, specifying the column we want as a parameter each time we call it.
def calc_counts(data, column): sums_dict = {} for row in data: col_value = row[column] births = row[4] if col_value in sums_dict: sums_dict[col_value] = sums_dict[col_value] + births else: sums_dict[col_value] = births return sums_dict cdc_year_b...
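The pattern above can be demonstrated end to end with a few hypothetical sample rows (the real CDC data isn't reproduced here):

```python
def calc_counts(data, column):
    """Sum the births column (index 4) grouped by the values in `column`."""
    sums_dict = {}
    for row in data:
        col_value = row[column]
        births = row[4]
        if col_value in sums_dict:
            sums_dict[col_value] += births
        else:
            sums_dict[col_value] = births
    return sums_dict

# Hypothetical rows: [year, month, date_of_month, day_of_week, births]
sample = [
    [1994, 1, 1, 6, 8096],
    [1994, 1, 2, 7, 7772],
    [1994, 2, 1, 2, 9100],
]
print(calc_counts(sample, 1))  # births per month -> {1: 15868, 2: 9100}
```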
jupyter-files/GP02.ipynb
JasonMDev/guidedprojects
mit
Hyperparameters Please feel free to change the hyperparameters and check your results. The best way to develop intuition about the architecture is to experiment with it.
# DATA BATCH_SIZE = 256 AUTO = tf.data.AUTOTUNE INPUT_SHAPE = (32, 32, 3) NUM_CLASSES = 10 # OPTIMIZER LEARNING_RATE = 1e-3 WEIGHT_DECAY = 1e-4 # TRAINING EPOCHS = 20 # AUGMENTATION IMAGE_SIZE = 48 # We will resize input images to this size. PATCH_SIZE = 6 # Size of the patches to be extracted from the input image...
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Load and prepare the CIFAR-10 dataset
# Load the CIFAR-10 dataset. (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() (x_train, y_train), (x_val, y_val) = ( (x_train[:40000], y_train[:40000]), (x_train[40000:], y_train[40000:]), ) print(f"Training samples: {len(x_train)}") print(f"Validation samples: {len(x_val)}") print(f"Te...
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Data augmentation The augmentation pipeline consists of: Rescaling Resizing Random cropping (fixed-size or random-size) Random horizontal flipping
data_augmentation = keras.Sequential( [ layers.Rescaling(1 / 255.0), layers.Resizing(INPUT_SHAPE[0] + 20, INPUT_SHAPE[0] + 20), layers.RandomCrop(IMAGE_SIZE, IMAGE_SIZE), layers.RandomFlip("horizontal"), ], name="data_augmentation", )
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Note that image data augmentation layers do not apply data transformations at inference time. This means that when these layers are called with training=False they behave differently. Refer to the documentation for more details. Positional embedding module A Transformer architecture consists of multi-head self attentio...
def position_embedding( projected_patches, num_patches=NUM_PATCHES, projection_dim=PROJECTION_DIM ): # Build the positions. positions = tf.range(start=0, limit=num_patches, delta=1) # Encode the positions with an Embedding layer. encoded_positions = layers.Embedding( input_dim=num_patches,...
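Conceptually, a learned positional embedding is just a trainable table with one row per patch position, looked up by position index and added to the patch projections. A minimal NumPy sketch of the shapes involved (random stand-ins, not the Keras implementation above):

```python
import numpy as np

num_patches, projection_dim = 64, 128

# Projected patches for one image: (num_patches, projection_dim).
projected_patches = np.random.randn(num_patches, projection_dim)

# Trainable position table, one row per patch position (random here).
position_table = np.random.randn(num_patches, projection_dim)

# Look up positions 0..num_patches-1 and add them to the patches.
positions = np.arange(num_patches)
encoded = projected_patches + position_table[positions]

print(encoded.shape)  # (64, 128)
```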
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
MLP block for Transformer This serves as the fully connected feed-forward block for our Transformer.
def mlp(x, dropout_rate, hidden_units): # Iterate over the hidden units and # add Dense => Dropout. for units in hidden_units: x = layers.Dense(units, activation=tf.nn.gelu)(x) x = layers.Dropout(dropout_rate)(x) return x
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
TokenLearner module The following figure presents a pictorial overview of the module (source). The TokenLearner module takes as input an image-shaped tensor. It then passes it through multiple single-channel convolutional layers extracting different spatial attention maps focusing on different parts of the input. Thes...
def token_learner(inputs, number_of_tokens=NUM_TOKENS): # Layer normalize the inputs. x = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(inputs) # (B, H, W, C) # Applying Conv2D => Reshape => Permute # The reshape and permute is done to help with the next steps of # multiplication and Global A...
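The core computation can be sketched in NumPy: each of the `num_tokens` attention maps is softmaxed over all spatial locations, and each output token is the attention-weighted spatial average of the input. This is a rough sketch of the idea with random stand-ins for the conv outputs, not the Keras code:

```python
import numpy as np

H, W, C, num_tokens = 8, 8, 32, 4
inputs = np.random.randn(H, W, C)

# Stand-in for the conv stack: one spatial attention map per token.
attn_logits = np.random.randn(H, W, num_tokens)

# Softmax each map over all H*W spatial locations.
flat = attn_logits.reshape(H * W, num_tokens)
attn = np.exp(flat) / np.exp(flat).sum(axis=0, keepdims=True)

# Each token is an attention-weighted spatial average of the input.
tokens = attn.T @ inputs.reshape(H * W, C)   # (num_tokens, C)
print(tokens.shape)  # (4, 32)
```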
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Transformer block
def transformer(encoded_patches): # Layer normalization 1. x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(encoded_patches) # Multi Head Self Attention layer 1. attention_output = layers.MultiHeadAttention( num_heads=NUM_HEADS, key_dim=PROJECTION_DIM, dropout=0.1 )(x1, x1) # Sk...
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
ViT model with the TokenLearner module
def create_vit_classifier(use_token_learner=True, token_learner_units=NUM_TOKENS): inputs = layers.Input(shape=INPUT_SHAPE) # (B, H, W, C) # Augment data. augmented = data_augmentation(inputs) # Create patches and project the pathces. projected_patches = layers.Conv2D( filters=PROJECTION...
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
As shown in the TokenLearner paper, it is almost always advantageous to include the TokenLearner module in the middle of the network. Training utility
def run_experiment(model): # Initialize the AdamW optimizer. optimizer = tfa.optimizers.AdamW( learning_rate=LEARNING_RATE, weight_decay=WEIGHT_DECAY ) # Compile the model with the optimizer, loss function # and the metrics. model.compile( optimizer=optimizer, loss="spa...
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Train and evaluate a ViT with TokenLearner
vit_token_learner = create_vit_classifier() run_experiment(vit_token_learner)
examples/vision/ipynb/token_learner.ipynb
keras-team/keras-io
apache-2.0
Description Apply an intensity image transform to the input image. The input image can be seen as a grayscale image or an index image. The intensity transform is represented by a table where the input (grayscale) color addresses the table row and the column contents indicate the output (grayscale) image color. T...
testing = (__name__ == "__main__") if testing: ! jupyter nbconvert --to python applylut.ipynb import numpy as np import sys,os import matplotlib.image as mpimg ia898path = os.path.abspath('../../') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia
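Under the hood, applying an intensity table to an integer image amounts to NumPy fancy indexing: each pixel value indexes a row of the table. A minimal sketch of that idea (not the full ia.applylut implementation):

```python
import numpy as np

f = np.array([[0, 1, 2],
              [3, 4, 5]])          # input image with values 0..5
it = 5 - np.arange(6)              # negation table for 6 gray levels

g = it[f]                          # each pixel value indexes the table
print(g)
# [[5 4 3]
#  [2 1 0]]
```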
src/applylut.ipynb
robertoalotufo/ia898
mit
Example 1 This first example shows a simple numeric image with 2 rows, 3 columns and sequential pixel values. First the identity table is applied and image g is generated with the same values as f. Next, a new table, itn = 5 - it, is generated, creating a negation table. The resultant image gn has the values of f negated.
if testing: f = np.array([[0,1,2], [3,4,5]]) print('f=\n',f) it = np.array(list(range(6))) # identity transform print('it=',it) g = ia.applylut(f, it) print('g=\n',g) itn = 5 - it # negation print('itn=',itn) gn = ia.applylut(f, itn) print('gn=\n',gn)
src/applylut.ipynb
robertoalotufo/ia898
mit
Example 2 This example shows the negation operation applying the intensity transform through a negation grayscale table: it = 255 - i.
if testing: f = mpimg.imread('../data/cameraman.tif') it = (255 - np.arange(256)).astype('uint8') g = ia.applylut(f, it) ia.adshow(f,'f') ia.adshow(g,'g')
src/applylut.ipynb
robertoalotufo/ia898
mit
Example 3 In this example, the colortable has 3 columns and the application of the colortable to a scalar image results in an image with 3 bands.
if testing: f = np.array([[0,1,2], [2,0,1]]) ct = np.array([[100,101,102], [110,111,112], [120,121,122]]) #print iaimginfo(ct) g = ia.applylut(f,ct) print(g)
src/applylut.ipynb
robertoalotufo/ia898
mit
Example 4 In this example, the colortable has 3 columns, R, G and B, where G and B are zero and R is the identity.
if testing: f = mpimg.imread('../data/cameraman.tif') aux = np.resize(np.arange(256).astype('uint8'), (256,1)) ct = np.concatenate((aux, np.zeros((256,2),'uint8')), 1) g = ia.applylut(f, ct) # generate (bands,H,W) g = g.transpose(1,2,0) # convert to (H,W,bands) ia.adshow(f) ia.adshow(g)
src/applylut.ipynb
robertoalotufo/ia898
mit
Equation $$ g(r,c) = IT( f(r,c) ) $$ $$ g_{R}(r,c) = IT_R( f(r,c)) \\ g_{G}(r,c) = IT_G( f(r,c)) \\ g_{B}(r,c) = IT_B( f(r,c)) $$ See Also: ia636:colormap Pseudocolor maps
if testing: print('testing applylut') print(repr(ia.applylut(np.array([0,1,2,3]),np.array([0,1,2,3]))) == repr(np.array([0,1,2,3]))) print(repr(ia.applylut(np.array([0,1,2,3]),np.array([[0,0,0],[1,1,1],[2,2,2],[3,3,3]]))) == repr(np.array([[0,0,0], [1,1,1], [2,2,2],[3,3,3]])))
src/applylut.ipynb
robertoalotufo/ia898
mit
Step 2 Now let's create a variable, <code>filePath</code>, that is a string containing the full path to the file we want to import. The code below looks in the current working directory for the file given a file name input by the user. This isn't necessary, and is just included for convenience. Alternatively, user can i...
dirPath = os.path.realpath('.') fileName = 'assets/coolingExample.xlsx' filePath = os.path.join(dirPath, fileName)
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit
Step 3 Great! Now let's read the data into a dataframe called <code>df</code>. This will make our data accessible by the strings in the header row.
df = pd.read_excel(filePath,header=0) df.head()
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit
Our data is now accessible by a key value. The keys are the column headers in the dataframe. In this example case, those are 'Time (s) - Dev1/ai0' and 'Temperature - Dev1/ai0'. For example, let's access the data in the first column.
df[df.columns[0]]
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit
What would happen if we tried to access the data with an invalid key, say <code>1</code> for example? Let's try it to find out. Note: I enclose this code in a <code>try: except:</code> statement in order to prevent a huge error from being generated.
try: df[1] except KeyError: print("KeyError: 1 - not a valid key")
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit
So let's say you have a large dataframe with unknown columns. There is a simple way to index them without prior knowledge of what the dataframe columns are: the <code>columns</code> attribute in pandas.
cols = df.columns for col in cols: print(df[col])
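The same loop can be shown self-contained on a small stand-in dataframe (the real headers come from the Excel file, so these column names are hypothetical):

```python
import pandas as pd

# A small stand-in dataframe with hypothetical headers.
df = pd.DataFrame({"Time (s)": [0.0, 0.5, 1.0],
                   "Temperature": [20.1, 20.4, 21.0]})

# Iterate over columns without knowing their names in advance.
for col in df.columns:
    print(col, df[col].tolist())
```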
Guides/python/excelToPandas.ipynb
rocketproplab/Guides
mit