Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine has memory for. Most people set it to common memory sizes: * 64 * 128 * 256 * ... * Set keep_probability to the...
# TODO: Tune Parameters
epochs = 15
batch_size = 512
keep_probability = 0.75
image-classification/dlnd_image_classification.ipynb
kvr777/deep-learning
mit
Remove color fill
import numpy as np

num_categories = df.Category.unique().size
p = sns.countplot(data=df, y='Category', hue='islong', saturation=1,
                  xerr=7*np.arange(num_categories), edgecolor=(0, 0, 0), linewidth=2, ...
visualizations/seaborn/notebooks/.ipynb_checkpoints/countplot-checkpoint.ipynb
apryor6/apryor6.github.io
mit
Model initialization The chromosphere model doesn't take any special initialization arguments, so the initialization is straightforward.
tm = ChromosphereModel()
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Model use Evaluation The transit model can be evaluated using either a set of scalar parameters, a parameter vector (1D ndarray), or a parameter vector array (2D ndarray). The model flux is returned as a 1D ndarray in the first two cases, and a 2D ndarray in the last (one model per parameter vector). tm.evaluate_ps(k,...
def plot_transits(tm, fmt='k'):
    fig, axs = subplots(1, 3, figsize=(13, 3), constrained_layout=True, sharey=True)
    flux = tm.evaluate_ps(k, t0, p, a, i, e, w)
    axs[0].plot(tm.time, flux, fmt)
    axs[0].set_title('Individual parameters')
    flux = tm.evaluate_pv(pvp[0])
    axs[1].plot(tm.time, flux, fmt) ...
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Supersampling The transit model can be supersampled by setting the nsamples and exptimes arguments in set_data.
tm.set_data(times_lc, nsamples=10, exptimes=0.01)
plot_transits(tm)
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Heterogeneous time series PyTransit allows for heterogeneous time series, that is, a single time series can contain several individual light curves (with, e.g., different time cadences and required supersampling rates) observed (possibly) in different passbands. If a time series contains several light curves, it also n...
times_1 = linspace(0.85, 1.0, 500)
times_2 = linspace(1.0, 1.15, 10)
times = concatenate([times_1, times_2])
lcids = concatenate([full(times_1.size, 0, 'int'), full(times_2.size, 1, 'int')])
nsamples = [1, 10]
exptimes = [0, 0.0167]
tm.set_data(times, lcids, nsamples=nsamples, exptimes=exptimes)
plot_transits(tm, 'k....
notebooks/example_chromosphere_model.ipynb
hpparvi/PyTransit
gpl-2.0
Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
def sample_stat(sample):
    return sample.min()

slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([55, 95]))
None
pycon2016/tutorials/computation_statistics/sampling.ipynb
rawrgulmuffins/presentation_notes
mit
Other sample statistics This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic. Exercise 1: Fill in sample_stat below with any of these statistics: Standard deviation of the sample. Coefficient of variation, which is the sample...
def sample_stat(sample):
    # TODO: replace the following line with another sample statistic
    # return sample.std()
    return numpy.percentile(sample, 50)

slider = widgets.IntSlider(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([10, 90]))
None
pycon2016/tutorials/computation_statistics/sampling.ipynb
rawrgulmuffins/presentation_notes
mit
What values change, and what remain the same? When running the one-sample t-test, p is always higher than 0.05, meaning that at the 5% significance level the sample provides sufficient evidence to conclude that the mean of the sample is the calculated mean in all cases (for both populations). Regarding the samples: means cha...
# Change the population value for pop1 to 0.3
pop1 = np.random.binomial(10, 0.3, 10000)
pop2 = np.random.binomial(10, 0.5, 10000)
plt.hist(pop1, alpha=0.5, label='Population 1')
plt.hist(pop2, alpha=0.5, label='Population 2')
plt.legend(loc='upper right')
plt.show()
print(pop1.mean())
print(pop2.mean())
print(pop1....
Drill Central Limit Theorem.ipynb
borja876/Thinkful-DataScience-Borja
mit
What changes, and why? The t-value decreases in the second case (when p1=0.4, p2=0.5) and the p-value is much higher. The t-value decreases because the difference between means is lower, and the increase in the p-value shows that the noise due to variability is growing in each case
# Change the distribution of your populations from binomial to a distribution of your choice
pop3 = np.random.standard_t(25, 10000)
pop4 = np.random.logistic(9, 2, 10000)
plt.hist(pop3, alpha=0.5, label='Population 3')
plt.hist(pop4, alpha=0.5, label='Population 4')
plt.legend(loc='upper right')
plt.show()...
Drill Central Limit Theorem.ipynb
borja876/Thinkful-DataScience-Borja
mit
BERT Preprocessing with TF Text
!pip install -q -U tensorflow-text
import tensorflow as tf
import tensorflow_text as text
import functools
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Our data contains two text features, and we can create an example tf.data.Dataset. Our goal is to create a function that we can supply to Dataset.map() to be used in training.
examples = {
    "text_a": [
        b"Sponge bob Squarepants is an Avenger",
        b"Marvel Avengers"
    ],
    "text_b": [
        b"Barack Obama is the President.",
        b"President is the highest office"
    ],
}
dataset = tf.data.Dataset.from_tensor_slices(examples)
next(iter(dataset))
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Content Trimming The main input to BERT is a concatenation of two sentences. However, BERT requires inputs to have a fixed size and shape, and we may have content which exceeds our budget. We can tackle this by using a text.Trimmer to trim our content down to a predetermined size (once concatenated along the last axis)...
trimmer = text.RoundRobinTrimmer(max_seq_length=[_MAX_SEQ_LEN])
trimmed = trimmer.trim([segment_a, segment_b])
trimmed
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
trimmed now contains the segments where the number of elements across a batch is 8 elements (when concatenated along axis=-1). Combining segments Now that we have segments trimmed, we can combine them together to get a single RaggedTensor. BERT uses special tokens to indicate the beginning ([CLS]) and end of a segment ...
segments_combined, segments_ids = text.combine_segments(
    [segment_a, segment_b],
    start_of_sequence_id=_START_TOKEN,
    end_of_segment_id=_END_TOKEN)
segments_combined, segments_ids
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Choosing the Masked Value The methodology described in the original BERT paper for choosing the value for masking is as follows: For mask_token_rate of the time, replace the item with the [MASK] token: "my dog is hairy" -> "my dog is [MASK]" For random_token_rate of the time, replace the item with a random word: "my d...
input_ids = tf.ragged.constant([[19, 7, 21, 20, 9, 8], [13, 4, 16, 5], [15, 10, 12, 11, 6]])
mask_values_chooser = text.MaskValuesChooser(_VOCAB_SIZE, _MASK_TOKEN, 0.8)
mask_values_chooser.get_mask_values(input_ids)
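The per-token choice described above can be sketched in plain NumPy, as a simplified stand-in for text.MaskValuesChooser (the token ids, vocab size, and rates here are made-up assumptions, not values from the guide):

```python
import numpy as np

# Hypothetical ids for illustration only
MASK_TOKEN = 103
VOCAB_SIZE = 1000

def choose_mask_value(token, mask_token_rate=0.8, random_token_rate=0.1,
                      rng=np.random.default_rng(0)):
    """For a token already selected for masking, return its replacement:
    [MASK] with p=mask_token_rate, a random id with p=random_token_rate,
    otherwise keep the original token."""
    r = rng.random()
    if r < mask_token_rate:
        return MASK_TOKEN
    if r < mask_token_rate + random_token_rate:
        return int(rng.integers(0, VOCAB_SIZE))
    return token
```

With the remaining probability (0.1 by default here) the token is left unchanged, matching the third case of the BERT recipe.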
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Padding Model Inputs Now that we have all the inputs for our model, the last step in our preprocessing is to package them into fixed 2-dimensional Tensors with padding and also generate a mask Tensor indicating the values which are pad values. We can use text.pad_model_inputs() to help us with this task.
# Prepare and pad combined segment inputs
input_word_ids, input_mask = text.pad_model_inputs(
    masked_token_ids, max_seq_length=_MAX_SEQ_LEN)
input_type_ids, _ = text.pad_model_inputs(
    masked_token_ids, max_seq_length=_MAX_SEQ_LEN)

# Prepare and pad masking task inputs
masked_lm_positions, masked_lm_weights = text....
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Review Let's review what we have so far and assemble our preprocessing function. Here's what we have:
def bert_pretrain_preprocess(vocab_table, features):
    # Input is a string Tensor of documents, shape [batch, 1].
    text_a = features["text_a"]
    text_b = features["text_b"]

    # Tokenize segments to shape [num_sentences, (num_words)] each.
    tokenizer = text.BertTokenizer(
        vocab_table,
        token_out_type=tf.in...
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
We previously constructed a tf.data.Dataset, and we can now use our assembled preprocessing function bert_pretrain_preprocess() in Dataset.map(). This allows us to create an input pipeline that transforms our raw string data into integer inputs that can be fed directly into our model.
dataset = tf.data.Dataset.from_tensors(examples)
dataset = dataset.map(functools.partial(
    bert_pretrain_preprocess, lookup_table))
next(iter(dataset))
third_party/tensorflow-text/src/docs/guide/bert_preprocessing_guide.ipynb
nwjs/chromium.src
bsd-3-clause
Applying a pretrained model In this tutorial, you will learn how to apply pyannote.audio models on an audio file, whose manual annotation is depicted below
# clone pyannote-audio Github repository and update ROOT_DIR accordingly
ROOT_DIR = "/Users/bredin/Development/pyannote/pyannote-audio"
AUDIO_FILE = f"{ROOT_DIR}/tutorials/assets/sample.wav"

from pyannote.database.util import load_rttm
REFERENCE = f"{ROOT_DIR}/tutorials/assets/sample.rttm"
reference = load_rttm(REFERE...
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Loading models from 🤗 hub Pretrained models are available on 🤗 Huggingface model hub and can be listed by looking for the pyannote-audio-model tag.
from huggingface_hub import HfApi
available_models = [m.modelId for m in HfApi().list_models(filter="pyannote-audio-model")]
available_models
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Let's load the speaker segmentation model...
from pyannote.audio import Model
model = Model.from_pretrained("pyannote/segmentation")
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
... which consists of SincNet feature extraction (sincnet), LSTM sequence modeling (lstm), a few feed-forward layers (linear), and a final multi-label classifier:
model.summarize()
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
More details about the model are provided by its specifications...
specs = model.specifications
specs
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
... which can be understood as follows: duration = 5.0: the model ingests 5s-long audio chunks; Resolution.FRAME and len(classes) == 4: the model outputs a sequence of frame-wise 4-dimensional scores; Problem.MULTI_LABEL_CLASSIFICATION: for each frame, more than one speaker can be active at once. To apply the model on the...
from pyannote.audio import Inference
inference = Inference(model, step=2.5)
output = inference(AUDIO_FILE)
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
For each of the 11 positions of the 5s window, the model outputs a 4-dimensional vector every 16ms (293 frames for 5 seconds), corresponding to the probabilities that each of (up to) 4 speakers is active.
output.data.shape
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Processing a file from memory In case the audio file is not stored on disk, pipelines can also process audio provided as a {"waveform": ..., "sample_rate": ...} dictionary.
import torchaudio

waveform, sample_rate = torchaudio.load(AUDIO_FILE)
print(f"{type(waveform)=}")
print(f"{waveform.shape=}")
print(f"{waveform.dtype=}")

audio_in_memory = {"waveform": waveform, "sample_rate": sample_rate}
output = inference(audio_in_memory)
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Processing part of a file If needed, Inference can be used to process only part of a file:
from pyannote.core import Segment
output = inference.crop(AUDIO_FILE, Segment(10, 20))
output
tutorials/applying_a_model.ipynb
pyannote/pyannote-audio
mit
Mesoporous pore size distribution Let's start by analysing the mesoporous size distribution of some of our nitrogen physisorption samples. The MCM-41 sample should have a very well defined, singular pore size in the mesopore range, with the pores as open-ended cylinders. We can use a common method, relying on a descrip...
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
results = pgc.psd_mesoporous(
    isotherm,
    pore_geometry='cylinder',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
The distribution is what we expected, a single narrow peak. Since we asked for extra verbosity, the function has generated a graph. The graph automatically sets a minimum limit of 1.5 angstrom, where the Kelvin equation methods break down. The result dictionary returned contains the x and y points of the graph. Dependi...
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_meso = pgc.psd_mesoporous(
    isotherm,
    psd_model='pygaps-DH',
    pore_geometry='slit',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Now let's break down the available settings of the mesoporous PSD function. The psd_model parameter selects specific implementations of the methods, such as the BJH method or the DH method. The pore_geometry parameter can be used to specify the known geometry of the pores. The Kelvin equation parameters change...
isotherm = next(i for i in isotherms_n2_77k if i.material == 'MCM-41')
print(isotherm.material)
result_dict = pgc.psd_mesoporous(
    isotherm,
    psd_model='DH',
    pore_geometry='cylinder',
    branch='ads',
    thickness_model='Halsey',
    kelvin_model='Kelvin-KJS',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
<div class="alert alert-info"> **Note:** If the user wants to customise the standard plots which are displayed, they are available for use in the `pygaps.graphing.calc_graphs` module </div> Microporous pore size distribution For microporous samples, we can use the psd_microporous function. The available model is an ...
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_micro = pgc.psd_microporous(
    isotherm,
    psd_model='HK',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
We see that we could have a peak around 0.7 nm, but could use more adsorption data at low pressure for better resolution. It should be noted that the model breaks down for pores larger than around 3 nm. The framework comes with other models for the surface, such as the Saito-Foley derived oxide-ion model. Below is an ...
adsorbate_params = {
    'molecular_diameter': 0.3,
    'polarizability': 1.76e-3,
    'magnetic_susceptibility': 3.6e-8,
    'surface_density': 6.71e+18,
    'liquid_density': 0.806,
    'adsorbate_molar_mass': 28.0134
}
isotherm = next(i for i in isotherms_n2_77k if i.material == 'UiO-66(Zr)')
print(isotherm.materia...
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Finally, other types of H-K modified models are also available, like the Rege-Yang adapted model (RY), or a Cheng-Yang modification of both H-K (HK-CY) and R-Y (RY-CY) models.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
print(isotherm.material)
result_dict_micro = pgc.psd_microporous(
    isotherm,
    psd_model='RY',
    verbose=True,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Kernel fit pore size distribution The kernel fitting method is currently the most powerful method for pore size distribution calculations. It requires a DFT kernel, or a collection of previously simulated adsorption isotherms which cover the entire pore range which we want to investigate. The calculation of the DFT ker...
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
result_dict_dft = {}
result_dict_dft = pgc.psd_dft(
    isotherm,
    branch='ads',
    kernel='DFT-N2-77K-carbon-slit',
    verbose=True,
    p_limits=None,
)
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
The output is automatically smoothed using a b-spline method. Further (or less) smoothing can be specified with the bspline_order parameter. The higher the order, the more smoothing is applied. Specify "0" to return the data as-fitted.
isotherm = next(i for i in isotherms_n2_77k if i.material == 'Takeda 5A')
result_dict_dft = pgc.psd_dft(
    isotherm,
    bspline_order=5,
    verbose=True,
)
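The idea behind b-spline smoothing can be illustrated with SciPy's generic spline routines. This is a sketch of the general technique, not pyGAPS internals; note that SciPy's smoothing knob is the s factor passed to splrep, not pyGAPS's bspline_order, and the data here is synthetic:

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic noisy "distribution" to smooth (illustrative only)
x = np.linspace(0.5, 5.0, 50)
y = np.exp(-x) + 0.05 * np.sin(20 * x)

# s=0 interpolates the data exactly; a larger s gives a smoother curve
tck_exact = splrep(x, y, s=0)
tck_smooth = splrep(x, y, s=0.1)
y_exact = splev(x, tck_exact)
y_smooth = splev(x, tck_smooth)
```

The smoothed fit deliberately deviates from the noisy data points, while the s=0 fit reproduces them exactly.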
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Comparing all the PSD methods For comparison purposes, we will compare the pore size distributions obtained through all the methods above. The sample on which all methods are applicable is the Takeda carbon. We will first plot the data using the existing function plot, then use the graph returned to plot the remaining ...
from pygaps.graphing.calc_graphs import psd_plot

ax = psd_plot(
    result_dict_dft['pore_widths'],
    result_dict_dft['pore_distribution'],
    method='comparison',
    labeldiff='DFT',
    labelcum=None,
    left=0.4,
    right=8
)
ax.plot(
    result_dict_micro['pore_widths'],
    result_dict_micro['pore_distribut...
docs/examples/psd.ipynb
pauliacomi/pyGAPS
mit
Let's create a minimalist class that behaves as CoTeDe would like. It is like a dictionary of relevant variables with a property attrs holding some metadata.
class DummyDataset(object):
    """Minimalist data object that contains data and attributes"""
    def __init__(self):
        """Two dictionaries to store the data and attributes"""
        self.attrs = {}
        self.data = {}

    def __getitem__(self, key):
        """Return the requested...
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's create an empty data object.
mydata = DummyDataset()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's define some metadata as position and time that the profile was measured.
mydata.attrs['datetime'] = datetime(2016, 6, 4)
mydata.attrs['latitude'] = 15
mydata.attrs['longitude'] = -38
print(mydata.attrs)
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Now let's create some data. Here I'll create pressure, temperature, and salinity. I'm using masked arrays, but a plain array would work too. Here I'm making these values up, but in a real-world case we would be reading from a netCDF file, an ASCII file, an SQL query, or whatever your data source is.
mydata.data['PRES'] = ma.fix_invalid([2, 6, 10, 21, 44, 79, 100, 150, 200, 400, 410, 650, 1000, 2000, 5000])
mydata.data['TEMP'] = ma.fix_invalid([25.32, 25.34, 25.34, 25.31, 24.99, 23.46, 21.85, 17.95, 15.39, 11.08, 6.93, 7.93, 5.71, 3.58, np.nan])
mydata.data['PSAL'] = ma.fix_invalid([36.49, 36.51, 36.52, 36.53, 36.5...
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check the available variables
mydata.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check one of the variables, temperature:
mydata['TEMP']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Now that we have our data and metadata in this object, CoTeDe can do its job. In this example let's evaluate this fictitious profile using the EuroGOOS recommended QC tests. For that we can use ProfileQC() like:
pqced = ProfileQC(mydata, cfg='eurogoos')
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
The returned object (pqced) has the same content as the original mydata. Let's check the variables again,
pqced.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
But now there is a new property named 'flags', which is a dictionary with all the tests applied and the resulting flags. Those flags are grouped by variable.
pqced.flags.keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's see which flags are available for temperature,
pqced.flags['TEMP'].keys()
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Let's check the flags for the test gradient conditional to the depth, as defined by EuroGOOS
pqced.flags['TEMP']['gradient_depthconditional']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
One means that the measurement was approved by this test. Nine means that the data was not available or not valid at that level. And zero means no QC. For the gradient test it is not possible to evaluate the first or the last values (check the manual), so those measurements exist but the flag is zero. The overall fla...
pqced.flags['PSAL']['overall']
docs/notebooks/GenericDataObject.ipynb
castelao/CoTeDe
bsd-3-clause
Job Queues Job Queues are one of the key classes of the library. You place jobs in them, and they run the jobs and retrieve the data. You do not have to worry about where exactly things are run and how they are retrieved; everything is abstracted away and already adapted to the specific clusters that we are using. In one line y...
jq_cfg_local = {'jq_type': 'local'}
virtualenv = 'test_py3'  # by default root python. ex: virtualenv = 'test_xp_man' for venv in ~/virtualenvs/test_xp_man
jq_cfg_plafrim = {
    'jq_type': 'plafrim',
    'modules': ['slurm', 'language/python/3.5.2'],
    'virtual_env': virtualenv,
    'requirements': [pip_arg_xp_man],
    #'u...
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
The requirements section tells job queues to install a version of the library on the cluster if it does not exist yet. You can add other libraries, or add them for specific jobs. By default, virtual_env is set to None, meaning that everything runs and requirements are installed in the root Python interpreter. If you pr...
jq_cfg = jq_cfg_local_multiprocess
jq = xp_man.job_queue.get_jobqueue(**jq_cfg)
print(jq.get_status_string())
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
Jobs Jobs are the objects that need to be executed. Here we will use a simple type of job, ExampleJob. It goes through a loop of 24 steps, prints the value of the counter variable, waits a random time between 1 and 2 seconds between steps, and at the end saves the value in a file < job.descr >data.dat. Other types ...
job_cfg = {
    'estimated_time': 120,  # in seconds
    #'virtual_env': 'test',
    #'requirements': [],
    #...,
}
job = xp_man.job.ExampleJob(**job_cfg)
jq.add_job(job)  # of course, you can add as many jobs as you want, like in next cell
print(jq.get_status_string())
for i in range(20):
    job_cfg_2 = {
        'descr' ...
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
The last step is to update the queue. One update will check the current status of each job attached to jq and process its next step, whether that is sending it to the cluster, retrieving it, unpacking it, etc.
#jq.ssh_session.reconnect()
jq.update_queue()
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
You can tell jq to automatically do updates until all jobs are done or in error status:
jq.auto_finish_queue()
notebook/JobQueues.ipynb
wschuell/experiment_manager
agpl-3.0
Define the non-orthogonal curvilinear coordinates. Here psi=(u, v) are the new coordinates and rv is the position vector $$ \vec{r} = u \mathbf{i} + v\left(1- \frac{\sin 2u}{10} \right) \mathbf{j} $$ where $\mathbf{i}$ and $\mathbf{j}$ are the Cartesian unit vectors in $x$- and $y$-directions, respectively.
u = sp.Symbol('x', real=True, positive=True)
v = sp.Symbol('y', real=True)
psi = (u, v)
rv = (u, v*(1-sp.sin(2*u)/10))
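As a quick check of the geometry, the scale factor of this mapping can be computed with SymPy as the Jacobian determinant of $(x, y)$ with respect to $(u, v)$. This is an illustrative sketch using plain symbols, not shenfun's coordinate machinery:

```python
import sympy as sp

# Position vector r = (u, v*(1 - sin(2u)/10)) from the text
u = sp.Symbol('u', real=True, positive=True)
v = sp.Symbol('v', real=True)
x = u
y = v*(1 - sp.sin(2*u)/10)

# Jacobian of the Cartesian coordinates w.r.t. the curvilinear ones
J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]])
sqrt_g = sp.simplify(J.det())
print(sqrt_g)  # 1 - sin(2*u)/10
```

This is the measure that appears in the integrals below, and the reason the basis in the u-direction must handle a non-constant weight.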
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Now choose basis functions and create the tensor product space. Notice that one has to use a complex Fourier space and not the real one, because the integral measure is a function of u.
N = 20
#B0 = FunctionSpace(N, 'C', bc=(0, 0), domain=(0, 2*np.pi))
B0 = FunctionSpace(N, 'F', dtype='D', domain=(0, 2*np.pi))
B1 = FunctionSpace(N, 'L', bc=(0, 0), domain=(-1, 1))
T = TensorProductSpace(comm, (B0, B1), dtype='D',
                       coordinates=(psi, rv, sp.Q.negative(sp.sin(2*u)-10) & sp.Q.negative(sp.sin(2*u)/10-1)))
p...
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Plot the mesh to see the domain.
mesh = T.local_cartesian_mesh()
x, y = mesh
plt.figure(figsize=(10, 4))
for i, (xi, yi) in enumerate(zip(x, y)):
    plt.plot(xi, yi, 'b')
    plt.plot(x[:, i], y[:, i], 'r')
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Print the Laplace operator in curvilinear coordinates. We use replace to simplify the expression.
dp = div(grad(p))
g = sp.Symbol('g', real=True, positive=True)
replace = [(1-sp.sin(2*u)/10, sp.sqrt(g)),
           (sp.sin(2*u)-10, -10*sp.sqrt(g)),
           (5*sp.sin(2*u)-50, -50*sp.sqrt(g))]
Math((dp*T.coors.sg**2).tolatex(funcname='p', symbol_names={u: 'u', v: 'v'}, replace=replace))
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Solve Poisson's equation. First define a manufactured solution and assemble the right hand side
ue = sp.sin(2*u)*(1-v**2)
f = (div(grad(p))).tosympy(basis=ue, psi=psi)
fj = Array(T, buffer=f*T.coors.sg)
f_hat = Function(T)
f_hat = inner(q, fj, output_array=f_hat)
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Then assemble the left hand side and solve using a generic 2D solver
M = inner(q, div(grad(p))*T.coors.sg)
#M = inner(grad(q*T.coors.sg), -grad(p))
u_hat = Function(T)
Sol1 = Solver2D(M)
u_hat = Sol1(f_hat, u_hat)
uj = u_hat.backward()
uq = Array(T, buffer=ue)
print('Error =', np.linalg.norm(uj-uq))
for i in range(len(M)):
    print(len(M[i].mats[0].keys()), len(M[i].mats[1].keys()), M...
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Plot the solution in the wavy Cartesian domain
xx, yy = T.local_cartesian_mesh()
plt.figure(figsize=(12, 4))
plt.contourf(xx, yy, uj.real)
plt.colorbar()
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
Inspect the sparsity pattern of the generated matrix on the left hand side
from matplotlib.pyplot import spy
plt.figure()
spy(Sol1.mat, markersize=0.1)

from scipy.sparse.linalg import eigs
mats = inner(q*T.coors.sg, -div(grad(p)))
Sol1 = Solver2D(mats)
BB = inner(p, q*T.coors.sg)
Sol2 = Solver2D([BB])
f = eigs(Sol1.mat, k=20, M=Sol2.mat, which='LM', sigma=0)
mats
l = 10
u_hat = Function(T)...
binder/non-orthogonal-poisson.ipynb
spectralDNS/shenfun
bsd-2-clause
We've successfully loaded our data, but there are still a couple of preprocessing steps to go through first. Specifically, we're going to: Change the row labels from dcids to names for readability. Change the column name "dc/e9gftzl2hm8h9" to the more human-readable "Commute_Time". The raw commute time values from Dat...
# Make Row Names More Readable
# --- First, we'll copy the dcids into their own column
# --- Next, we'll get the "name" property of each dcid
# --- Then add the returned dictionary to our data frame as a new column
# --- Finally, we'll set this column as the new index
df = raw_features_df.copy(deep=True)
df['DCID'] = d...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
Now that we have our features and labels set, it's time to start modeling! 1) Model Selection The results of our models are only good if our models are correct in the first place. "Good" here can mean different things depending on your application -- we'll talk more about that later in this assignment. What's important...
# For ease of visualization, we'll focus on just a few cities
subset_city_dcids = ["geoId/0667000",  # San Francisco, CA
                     "geoId/3651000",  # NYC, NY
                     "geoId/1304000",  # Atlanta, GA
                     "geoId/2404000",  # Baltimore, MD
                     "geoId/3050200",  # Missou...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
The following blocks of code will generate 2 candidate classifiers for labeling the datapoints as either label 0 or label 1 (low and high obesity rate). The code will also output an accuracy score, which for this section is defined as: $\text{Accuracy} = \frac{\text{# correctly labeled}}{\text{# total da...
# Classifier 1
classifier1 = svm.SVC()
classifier1.fit(X, Y["Label"])

fig, ax = plt.subplots()
ax.set_title('Classifier 1')
ax.set_ylabel('Percent_Person_SleepLessThan7Hours')
ax.set_xlabel('Percent_Person_PhysicalInactivity')
plot_decision_regions(X.to_numpy(), Y["Label"].to_numpy(), ...
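The accuracy definition above (# correctly labeled over # total datapoints) can be written as a one-line function. This is a generic sketch with made-up labels, not the notebook's models:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

# 3 of 4 hypothetical predictions match the true labels
print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```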
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
1A) Which model do you think is better, Classifier 1 or Classifier 2? Explain your reasoning. 1B) Classifier 2 has a higher accuracy than Classifier 1, but has a more complicated decision boundary. Which do you think would generalize best to new data? 1.2) The Importance of Generalizability So, how did we do? Let's see...
# Original Data
X_full = df[["Percent_Person_PhysicalInactivity", "Percent_Person_SleepLessThan7Hours"]]
Y_full = df[['Label']]

# Visualize the data
cCycle = ['#1f77b4', '#ff7f0e']
mCycle = ['s', '^']
fig, ax = plt.subplots()
ax.set_title('Original Data')
ax.set_ylabel('Percent_Person_SleepLessThan7H...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2A) In light of all the new datapoints, now which classifier do you think is better, Classifer 1 or Classifier 2? Explain your reasoning. 2B) In Question 1, Classifier 1 had a lower accuracy than Classifier 2. After adding more datapoints, we now see the reverse, with Classifier 1 having a higher accuracy than Classifi...
# Use all features that aren't obesity
X_large = df.dropna()[[
    "Median_Income_Person",
    "Percent_NoHealthInsurance",
    "Percent_Person_PhysicalInactivity",
    "Percent_Person_SleepLessThan7Hours",
    "Percent_Person_WithHighBloodPressure",
    "Perc...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2.1) Accuracy 2.1.1) Classification Accuracy We've seen an example of an evaluation metric already -- accuracy! The accuracy score used in question 1 is more commonly known as classification accuracy, and is the most common metric used in classification problems. As a refresher, the classification accuracy is the ratio of...
print('Accuracy of the large model is: %.2f' % large_model.score(X_large,Y_large))
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2.2) Train/Test Splits The ability of a model to perform well on new, previously unseen data (drawn from the same distribution as the data used to create the model) is called generalization. For most applications, we prefer models that generalize well over those that don't. One way to check the generalizability of a m...
'''
Try a variety of different splits by changing the test_size variable,
which represents the ratio of points to use in the test set.
For example, for a 75% Training, 25% Test split, use test_size=0.25
'''
test_size = 0.25  # Change me! Enter a value between 0 and 1
print(f'{np.round((1-test_size)*100)}% Traini...
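A train/test split like the one above can be sketched with scikit-learn's standard train_test_split; the data here is synthetic, not the Data Commons features:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 50 synthetic samples with 2 features each (illustrative only)
X = np.arange(100).reshape(50, 2)
y = np.arange(50)

# Hold out 25% of the data for testing; random_state fixes the shuffle
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test))
```

The model is then fit on (X_train, y_train) only, and scored on the held-out (X_test, y_test).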
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
2.2.3) Training vs. Test Accuracy As you may have noticed from 2.2.2, we can calculate two different accuracies after performing a train/test split. A training accuracy based on how well the model performs on the data it was trained on, and a test accuracy based on how well the model performs on held out data. Typicall...
'''
Set the number of folds by changing k.
'''
k = 5  # Enter an integer >=2. Number of folds.

print(f'Test accuracies for {k} splits:')
scores = cross_val_score(large_model, X_large, Y_large, cv=k)
for i in range(k):
    print('\tFold %d: %.2f' % (i+1, scores[i]))
print('Average score across all folds: %.2f' % np.mean(s...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
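Under the hood, k-fold cross-validation just partitions the data into k folds and holds each one out in turn. A toy index-splitting sketch of the idea behind `cross_val_score` (plain Python, no sklearn):

```python
# Split 10 toy data points into k folds; each fold is held out exactly once
data = list(range(10))
k = 5
fold_size = len(data) // k

folds = []
for i in range(k):
    test = data[i * fold_size:(i + 1) * fold_size]              # held-out fold
    train = data[:i * fold_size] + data[(i + 1) * fold_size:]   # everything else
    folds.append((train, test))

print(len(folds))   # 5 train/test splits
print(folds[0][1])  # first held-out fold: [0, 1]
```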
2.3A) Play around with the code box above to find a good value of $k$. What happens if $k$ is very large or very small? 2.3B) How does the average score across all folds change with $k$? 2.4) Other Metrics Worth Knowing 2.4.1) What about Regression? -- Mean Squared Error Different models and different problems often us...
# Classifier A
x_A = df[["Count_Person", "Median_Income_Person"]]
y_A = df["Label"]

classifierA = linear_model.Perceptron()
classifierA.fit(x_A, y_A)
scores = cross_val_score(classifierA, x_A, y_A, cv=5)

print('Classifier A')
print('-------------')
print('Number of Data Points:', x_A.shape[0])
print('Number...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
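For regression, mean squared error is just the average of the squared differences between predictions and true values. A minimal sketch with hypothetical targets (sklearn's `mean_squared_error` computes the same quantity):

```python
# Hypothetical regression targets and predictions
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

# MSE = average of squared residuals
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print('MSE: %.3f' % mse)  # (0.25 + 0.25 + 0 + 1) / 4 = 0.375
```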
3A) Run the code boxes above and select which model you would choose to deploy. Justify your answer. 3B) Consider a new Classifier D. Its results look like this: Number of Data Points: 5,000 \ Number of Features: 10,000 \ Training Classification Accuracy: 98% \ 5-Fold Cross Validation Accuracy: 95%. Would you deploy...
your_local_dcid = "geoId/0649670"  # Replace with your own!

# Get your local data from data commons
local_data = datacommons_pandas.build_multivariate_dataframe(your_local_dcid, stat_vars_to_query)

# Cleaning and Preprocessing
local_data['DCID'] = local_data.index
city_name_dict = datacommons.get_property_values(city_d...
notebooks/intro_data_science/Classification_and_Model_Evaluation.ipynb
datacommonsorg/api-python
apache-2.0
Reading and Writing Compressed Files The gzip and bz2 modules make it easy to work with these files. Both modules provide alternative implementations of the open() function to handle this. ```Python # reading gzip compression import gzip with gzip.open('somefile.gz', 'rt') as f: text = f.read() # bz2 compression import bz2 with bz2.open('somefile.bz2', 'rt') as f: text = f.read() ``` ```Python # writing gzip compression import gzip with gzip.open(...
import json

data = {
    'name': 'ACME',
    'shares': 100,
    'price': 542.23
}

json_str = json.dumps(data)
json_str
data = json.loads(json_str)
data
nbs/IO_file.ipynb
AutuanLiu/Python
mit
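The text-mode 'rt'/'wt' flags shown above round-trip cleanly; a self-contained sketch that writes a gzip file to a temporary directory and reads it back:

```python
import gzip
import os
import tempfile

# Round-trip: write text into a gzip file, then read it back in text mode
path = os.path.join(tempfile.mkdtemp(), 'demo.gz')
with gzip.open(path, 'wt') as f:
    f.write('Hello, compressed world!')

with gzip.open(path, 'rt') as f:
    text = f.read()
print(text)
```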
```python Writing JSON data with open('data.json', 'w') as f: json.dump(data, f) Reading data back with open('data.json', 'r') as f: data = json.load(f) ```
d = {'a': True, 'b': 'Hello', 'c': None}
json.dumps(d)
nbs/IO_file.ipynb
AutuanLiu/Python
mit
Data Preprocessing Hint: How do you divide the training and test data sets? Apply other techniques we have learned if needed. You could take a look at the Iris data set case in the textbook.
# Your code comes here
import numpy as np
from sklearn.metrics import accuracy_score

if Version(sklearn_version) < '0.18':
    from sklearn.cross_validation import train_test_split
else:
    from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, r...
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #1 Perceptron
# Your code, including training and testing, to observe the accuracies.
from sklearn.linear_model import Perceptron

ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0)
ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('[Perceptron] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #2 Logistic Regression
# Your code, including training and testing, to observe the accuracies.
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
y_pred = lr.predict(X_test_std)
print('[Logistic Regression] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #3 SVM
# Your code, including training and testing, to observe the accuracies.
from sklearn.svm import SVC

# Linear SVM
svm0 = SVC(kernel='linear', C=1.0, random_state=0)
svm0.fit(X_train_std, y_train)
y_pred0 = svm0.predict(X_test_std)

# RBF SVM
svm1 = SVC(kernel='rbf', random_state=0, gamma=0.1, C=1.0)
svm1.fit(X_tra...
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #4 Decision Tree
# Your code, including training and testing, to observe the accuracies.
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0)
dt.fit(X_train_std, y_train)
y_pred = dt.predict(X_test_std)
print('[Decision Tree] Accuracy: %.2f' % accuracy_score(y_test...
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #5 Random Forest
# Your code, including training and testing, to observe the accuracies.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2)
rf.fit(X_train_std, y_train)
y_pred = rf.predict(X_test_std)
print('[Random Forest] Accuracy: %.2f' % accur...
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #6 KNN
# Your code, including training and testing, to observe the accuracies.
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
y_pred = knn.predict(X_test_std)
print('[KNN] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Classifier #7 Naive Bayes
# Your code, including training and testing, to observe the accuracies.
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
gnb.fit(X_train_std, y_train)
y_pred = gnb.predict(X_test_std)
print('[Naive Bayes] Accuracy: %.2f' % accuracy_score(y_test, y_pred))
assignments/ex03_xdnian.ipynb
xdnian/pyml
mit
Main Contributors The blame file associates every single line of code with the author who last changed that line.
top10 = blame_log.author.value_counts().head(10)
top10

%matplotlib inline
top10.plot.pie();
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
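`value_counts` above is essentially a frequency count over the author column. A stdlib sketch of the same idea with made-up author names:

```python
from collections import Counter

# Toy author column: count line-ownership per author and take the top 2
authors = ['alice', 'bob', 'alice', 'carol', 'alice', 'bob']
top2 = Counter(authors).most_common(2)
print(top2)  # [('alice', 3), ('bob', 2)]
```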
No-Go Areas We want to find the components where knowledge is probably outdated.
blame_log.timestamp = pd.to_datetime(blame_log.timestamp)
blame_log.head()

blame_log['age'] = pd.Timestamp('today') - blame_log.timestamp
blame_log.head()

blame_log['component'] = blame_log.path.str.split("/").str[:2].str.join(":")
blame_log.head()

age_per_component = blame_log.groupby('component') \
    .age.min()....
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
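The age computation above is just "today minus the last-change timestamp". A toy stdlib sketch of the same idea, with fixed dates instead of `pd.Timestamp('today')` for reproducibility:

```python
from datetime import datetime

# Toy version of the age computation: a fixed "today" minus the last change
last_change = datetime(2015, 6, 1)
age = datetime(2020, 6, 1) - last_change
print(age.days)  # 1827 days (five years, two leap days)
```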
These are the 10 oldest components:
age_per_component.tail(10)
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
For all components, we create an overview with a bar chart.
age_per_component.plot.bar(figsize=[15,5])
notebooks/No-Go-Areas.ipynb
feststelltaste/software-analytics
gpl-3.0
Explore the Data Play around with view_sentence_range to view different parts of the data.
view_sentence_range = (500, 600)

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = ...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dict...
import numpy as np
import problem_unittests as tests


def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    words = set(text)
    ...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
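The two lookup dicts can be sketched on a toy word list (hypothetical words, independent of the TV-script data) to see that they are inverses of each other:

```python
# Build vocab_to_int / int_to_vocab on a toy word list
words = ['the', 'cat', 'sat', 'on', 'the', 'mat']

vocab = set(words)
vocab_to_int = {word: i for i, word in enumerate(vocab)}
int_to_vocab = {i: word for word, i in vocab_to_int.items()}

# Encoding then decoding recovers the original words
ids = [vocab_to_int[w] for w in words]
restored = [int_to_vocab[i] for i in ids]
print(restored == words)  # True
```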
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to token...
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    return {
        '.': '||Period||',
        ',': '||Comma||',
        '"': '||Quotation_Mark||',
        ';': '||Semicolon||',
        ...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
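To see why the token dict helps, here is a small sketch of how such a dict is typically applied before splitting on spaces (a toy three-entry dict, not the full one above):

```python
# Hypothetical punctuation-token dict applied to a short line of script
token_dict = {'.': '||Period||', ',': '||Comma||', '!': '||Exclamation_Mark||'}

text = "bye! see you, moe."
for punct, token in token_dict.items():
    # Surround the token with spaces so it splits into its own word
    text = text.replace(punct, ' {} '.format(token))

print(text.split())
```

After this replacement, "bye" and "bye!" tokenize to the same word plus a separate punctuation token.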
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Inpu...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    Input = tf.placeholder(tf.int32, (None, None), "input")
    Targets = tf.placeholder(tf.int32, (None, None), "targets")
    Learning...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size, lstm_layers=2):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNN...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence.
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Fun...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs, initial_state=None):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32, initial_st...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number...
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logi...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - Th...
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Func...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
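The batching logic can be sketched in plain Python on toy word ids. This is a sketch of one common scheme (targets are inputs shifted one word ahead, with the last target wrapping to the first word), not necessarily the exact implementation above:

```python
# Toy word ids: 12 ids, batch size 2, sequence length 3 -> 2 full batches
int_text = list(range(1, 13))
batch_size, seq_length = 2, 3

n_batches = len(int_text) // (batch_size * seq_length)
batches = []
for b in range(n_batches):
    inputs, targets = [], []
    for row in range(batch_size):
        start = (row * n_batches + b) * seq_length
        inputs.append(int_text[start:start + seq_length])
        # Targets are the inputs shifted one word ahead (wrapping at the end)
        targets.append([int_text[(i + 1) % len(int_text)]
                        for i in range(start, start + seq_length)])
    batches.append([inputs, targets])

print(batches[0][0])  # [[1, 2, 3], [7, 8, 9]]
print(batches[0][1])  # [[2, 3, 4], [8, 9, 10]]
```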
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_e...
# Number of Epochs
num_epochs = 55
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 12
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 40

run_name = "e={},b={},rnn={},embed={},seq={},...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Build the Graph Build the graph using the neural network you implemented.
""" DON'T MODIFY ANYTHING IN THIS CELL """ from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0]...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
import sys

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)

with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    train_logger = tf.summary.FileWriter('./logs/{}'.format(run_name), sess.graph)

    for epoch_i in range(nu...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    probabilities = probabilities...
tv-script-generation/dlnd_tv_script_generation.ipynb
scottquiring/Udacity_Deeplearning
mit
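One common way to implement pick_word is weighted sampling from the probability distribution rather than always taking the argmax, which keeps the generated text varied. A toy sketch with a made-up three-word vocabulary (`random.choices` needs Python 3.6+):

```python
import random

# Sample the next word from a toy probability distribution
int_to_vocab = {0: 'homer', 1: 'bart', 2: 'moe'}
probabilities = [0.2, 0.5, 0.3]

word = random.choices(list(int_to_vocab.values()), weights=probabilities)[0]
print(word)
```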
Congratulations! Your first MPL plot. Let's make this a little bit larger, use a style to make it look better, and add some annotations.
mpl.style.use('bmh')

fig, ax = plt.subplots(1)
ax.plot(data[32, 32, 15, :])
ax.set_xlabel('Time (TR)')
ax.set_ylabel('MRI signal (a.u.)')
ax.set_title('Time-series from voxel [32, 32, 15]')
fig.set_size_inches([12, 6])
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0
Impressions about the data? If we want to compare several voxels side by side, we can plot them on the same axis:
fig, ax = plt.subplots(1)
ax.plot(data[32, 32, 15, :])
ax.plot(data[32, 32, 14, :])
ax.plot(data[32, 32, 13, :])
ax.plot(data[32, 32, 12, :])
ax.set_xlabel('Time (TR)')
ax.set_ylabel('MRI signal (a.u.)')
ax.set_title('Time-series from a few voxels')
fig.set_size_inches([12, 6])
beginner-python/002-plots.ipynb
ohbm/brain-hacking-101
apache-2.0