Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes).
Each read in the FASTQ file format spans four lines: a unique read name, the sequence of bases, a separator line containing +, and the quality scores. The quality scores correspond to the sequencer's confidence in each base call. It is good practice to examine the quality of your data before you proceed wi...
!fastqc ../data/reads/mutant1_OIST-2015-03-28.fq.gz
from IPython.display import IFrame
IFrame('../data/reads/mutant1_OIST-2015-03-28_fastqc.html', width=1000, height=1000)
src/Raw data.ipynb
mikheyev/phage-lab
mit
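The four-line layout described above can be illustrated with a minimal parser. The record below is a made-up example, not taken from the mutant1 data:

```python
# A made-up FASTQ record illustrating the four-line layout.
record = """@read_001
ACGTACGTAC
+
IIIIIFFFBB"""

name, sequence, separator, qualities = record.split("\n")

print(name)       # unique read name, prefixed with @
print(sequence)   # the called bases
print(separator)  # the + separator line
print(qualities)  # one quality character per base
```

Note that the quality line always has exactly one character per base in the sequence line.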
Key statistics: Basic Statistics reports the number of sequences and basic details. Per base sequence quality shows the distribution of sequence quality scores over the length of the read. The quality scale is logarithmic. Notice that the quality degrades rapidly over the length of the read. This is a key characteristic of Ill...
import gzip
from Bio import SeqIO

with gzip.open("../data/reads/mutant1_OIST-2015-03-28.fq.gz", 'rt') as infile:  # open and decompress input file
    for rec in SeqIO.parse(infile, "fastq"):  # start looping over all records
        print(rec)  # print record contents
        break  # stop looping, we only want to see one
src/Raw data.ipynb
mikheyev/phage-lab
mit
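The logarithmic quality scale mentioned above is the Phred scale: a score Q corresponds to an error probability of 10^(-Q/10), and in the common Sanger/Illumina 1.8+ encoding each quality character is the Phred score plus an ASCII offset of 33. A small sketch of the decoding:

```python
# Decode Phred+33 quality characters into scores and error probabilities.
def phred_scores(quality_string, offset=33):
    return [ord(ch) - offset for ch in quality_string]

def error_probability(q):
    # A Phred score Q means an error probability of 10^(-Q/10).
    return 10 ** (-q / 10)

scores = phred_scores("I#")
print(scores)                        # [40, 2]
print(error_probability(scores[0]))  # 0.0001
```

So a base with quality 'I' (Q40) has a 1-in-10,000 chance of being wrong, while '#' (Q2) is nearly a coin flip.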
You can see the methods associated with each object, such as rec, using the dir command.
print(dir(rec))  # print methods associated with rec
src/Raw data.ipynb
mikheyev/phage-lab
mit
For example, we can reverse complement the sequence:
rec.reverse_complement()
src/Raw data.ipynb
mikheyev/phage-lab
mit
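Under the hood, reverse complementing just maps each base to its pairing partner and reverses the string. A dependency-free sketch (not Biopython's implementation, which also handles quality scores and ambiguity codes):

```python
def reverse_complement(seq):
    """Complement each base (A<->T, C<->G) and reverse the sequence."""
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(complement[base] for base in reversed(seq))

print(reverse_complement("AACG"))  # CGTT
print(reverse_complement("ACGT"))  # ACGT is its own reverse complement
```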
Preprocessing functional near-infrared spectroscopy (fNIRS) data This tutorial covers how to convert functional near-infrared spectroscopy (fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and deoxyhaemoglobin (HbR) concentration, view the average waveform, and topographic representation of the respon...
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from itertools import compress
import mne

fnirs_data_folder = mne.datasets.fnirs_motor.data_path()
fnirs_cw_amplitude_dir = op.join(fnirs_data_folder, 'Participant-1')
raw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)
ra...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Providing more meaningful annotation information First, we attribute more meaningful names to the trigger codes which are stored as annotations. Second, we include information about the duration of each stimulus, which was 5 seconds for all conditions in this experiment. Third, we remove the trigger code 15, which sign...
raw_intensity.annotations.set_durations(5)
raw_intensity.annotations.rename({'1.0': 'Control',
                                  '2.0': 'Tapping/Left',
                                  '3.0': 'Tapping/Right'})
unwanted = np.nonzero(raw_intensity.annotations.description == '15.0')
raw_intensity.annotations.delete(unwan...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Viewing location of sensors over brain surface Here we validate that the locations of source-detector pairs and channels are as expected. Source-detector pairs are shown as lines between the optodes; channels (the midpoints of source-detector pairs) are optionally shown as orange dots. Sources are optional...
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')

brain = mne.viz.Brain(
    'fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5')
brain.add_sensors(
    raw_intensity.info, trans='fsaverage',
    fnirs=['channels', 'pairs', 'sources', 'detectors'])
brain.show_view(azimuth=20, elevati...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Selecting channels appropriate for detecting neural responses First we remove channels that are too close together (short channels) to detect a neural response (less than 1 cm distance between optodes). These short channels can be seen in the figure above. To achieve this we pick all the channels that are not considere...
picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True)
dists = mne.preprocessing.nirs.source_detector_distances(
    raw_intensity.info, picks=picks)
raw_intensity.pick(picks[dists > 0.01])
raw_intensity.plot(n_channels=len(raw_intensity.ch_names),
                   duration=500, show_scrollbars=False)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Converting from raw intensity to optical density The raw intensity values are then converted to optical density.
raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)
raw_od.plot(n_channels=len(raw_od.ch_names),
            duration=500, show_scrollbars=False)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
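Conceptually, the conversion normalises each channel's measured light intensity by its mean and takes the negative log. A rough pure-Python sketch of that idea (not MNE's exact implementation, which also handles edge cases such as non-positive intensities):

```python
import math

def intensity_to_od(channel):
    """Sketch of intensity -> optical density for one channel:
    normalise by the channel's mean intensity, then take -log."""
    mean = sum(channel) / len(channel)
    return [-math.log(x / mean) for x in channel]

# A constant signal has zero optical density everywhere...
print(intensity_to_od([2.0, 2.0, 2.0]))
# ...while dips in measured intensity (more absorption) give positive OD.
print(intensity_to_od([1.0, 0.5, 1.5]))
```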
Evaluating the quality of the data At this stage we can quantify the quality of the coupling between the scalp and the optodes using the scalp coupling index. This method looks for the presence of a prominent synchronous signal in the frequency range of cardiac signals across both photodetected signals. In this example...
sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)
fig, ax = plt.subplots()
ax.hist(sci)
ax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1])
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
In this example we will mark all channels with a SCI less than 0.5 as bad (this dataset is quite clean, so no channels are marked as bad).
raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5))
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
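The compress call above keeps only the channel names where the boolean mask is True. A toy illustration of the pattern, with made-up channel names and SCI values:

```python
from itertools import compress

# Hypothetical channel names and SCI values, just to illustrate the pattern.
ch_names = ['S1_D1', 'S1_D2', 'S2_D1', 'S2_D3']
sci = [0.9, 0.3, 0.8, 0.1]

# compress() yields names at positions where the mask is True.
bads = list(compress(ch_names, [s < 0.5 for s in sci]))
print(bads)  # ['S1_D2', 'S2_D3']
```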
At this stage it is appropriate to inspect your data (for instructions on how to use the interactive data visualisation tool see tut-visualize-raw) to ensure that channels with poor scalp coupling have been removed. If your data contains lots of artifacts you may decide to apply artifact reduction techniques as describ...
raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1)
raw_haemo.plot(n_channels=len(raw_haemo.ch_names),
               duration=500, show_scrollbars=False)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Removing heart rate from signal The haemodynamic response has frequency content predominantly below 0.5 Hz. An increase in activity around 1 Hz, due to the person's heartbeat, can be seen in the data and is unwanted, so we use a low-pass filter to remove it. A high-pass filter is also included to remove slow d...
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('Before filtering', weight='bold', size='x-large')
fig.subplots_adjust(top=0.88)
raw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2,
                             l_trans_bandwidth=0.02)
fig = raw_haemo.plot_psd(average=True)
fig.suptitle('After filtering', ...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Extract epochs Now that the signal has been converted to relative haemoglobin concentration, and the unwanted heart rate component has been removed, we can extract epochs related to each of the experimental conditions. First we extract the events of interest and visualise them to ensure they are correct.
events, event_dict = mne.events_from_annotations(raw_haemo)
fig = mne.viz.plot_events(events, event_id=event_dict,
                          sfreq=raw_haemo.info['sfreq'])
fig.subplots_adjust(right=0.7)  # make room for the legend
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Next we define the range of our epochs, the rejection criteria, baseline correction, and extract the epochs. We visualise the log of which epochs were dropped.
reject_criteria = dict(hbo=80e-6)
tmin, tmax = -5, 15

epochs = mne.Epochs(raw_haemo, events, event_id=event_dict,
                    tmin=tmin, tmax=tmax,
                    reject=reject_criteria, reject_by_annotation=True,
                    proj=True, baseline=(None, 0), preload=True,
                    detrend...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View consistency of responses across trials Now we can view the haemodynamic response for our tapping condition. We visualise the response for both the oxy- and deoxyhaemoglobin, and observe the expected peak in HbO at around 6 seconds consistently across trials, and the consistent dip in HbR that is slightly delayed r...
epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30,
                             ts_args=dict(ylim=dict(hbo=[-15, 15],
                                                    hbr=[-15, 15])))
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We can also view the epoched data for the control condition and observe that it does not show the expected morphology.
epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30,
                             ts_args=dict(ylim=dict(hbo=[-15, 15],
                                                    hbr=[-15, 15])))
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View consistency of responses across channels Similarly we can view how consistent the response is across the optode pairs that we selected. All the channels in this data are located over the motor cortex, and all channels show a similar pattern in the data.
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6))
clims = dict(hbo=[-20, 20], hbr=[-20, 20])
epochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims)
epochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims)
for column, condition in enumerate(['Control', 'Tapping']):
    for ax in axes[:,...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plot standard fNIRS response image Next we generate the most common visualisation of fNIRS data: plotting both the HbO and HbR on the same figure to illustrate the relation between the two signals.
evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'),
               'Tapping/HbR': epochs['Tapping'].average(picks='hbr'),
               'Control/HbO': epochs['Control'].average(picks='hbo'),
               'Control/HbR': epochs['Control'].average(picks='hbr')}

# Rename channels until the encoding of...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
View topographic representation of activity Next we view how the topographic activity changes throughout the response.
times = np.arange(-3.5, 13.2, 3.0)
topomap_args = dict(extrapolate='local')
epochs['Tapping'].average(picks='hbo').plot_joint(
    times=times, topomap_args=topomap_args)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compare tapping of left and right hands Finally we generate topo maps for the left and right conditions to view the location of activity. First we visualise the HbO activity.
times = np.arange(4.0, 11.0, 1.0)
epochs['Tapping/Left'].average(picks='hbo').plot_topomap(
    times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbo').plot_topomap(
    times=times, **topomap_args)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And we also view the HbR activity for the two conditions.
epochs['Tapping/Left'].average(picks='hbr').plot_topomap(
    times=times, **topomap_args)
epochs['Tapping/Right'].average(picks='hbr').plot_topomap(
    times=times, **topomap_args)
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
And we can plot the comparison at a single time point for two conditions.
fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5),
                         gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1]))
vmin, vmax, ts = -8, 8, 9.0
evoked_left = epochs['Tapping/Left'].average()
evoked_right = epochs['Tapping/Right'].average()
evoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0,...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Lastly, we can also look at the individual waveforms to see what is driving the topographic plot above.
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))
mne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b',
                         axes=axes, legend=False)
mne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r',
                         axes=axes, legend=False)

# Tidy the le...
dev/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Decoding DICOM files for medical imaging <table class="tfo-notebook-buttons" align="left"> <td><a target="_blank" href="https://tensorflow.google.cn/io/tutorials/dicom"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td> <td><a target="_blank" href="https://colab.research.google.com/github/tens...
!curl -OL https://github.com/tensorflow/io/raw/master/docs/tutorials/dicom/dicom_00000001_000.dcm
!ls -l dicom_00000001_000.dcm
site/zh-cn/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Install the required packages, then restart the runtime
try:
    # Use the Colab's preinstalled TensorFlow 2.x
    %tensorflow_version 2.x
except:
    pass

!pip install tensorflow-io
site/zh-cn/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Decode a DICOM image
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_io as tfio

image_bytes = tf.io.read_file('dicom_00000001_000.dcm')
image = tfio.image.decode_dicom_image(image_bytes, dtype=tf.uint16)
skipped = tfio.image.decode_dicom_image(image_bytes, on_error='skip', dtype=tf.uint8)
...
site/zh-cn/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Decoding DICOM metadata and working with tags decode_dicom_data decodes tag information. dicom_tags contains useful information such as the patient's age and sex, so you can use DICOM tags such as dicom_tags.PatientsAge and dicom_tags.PatientsSex. tensorflow_io borrows the tag notation from the pydicom package.
tag_id = tfio.image.dicom_tags.PatientsAge
tag_value = tfio.image.decode_dicom_data(image_bytes, tag_id)
print(tag_value)
print(f"PatientsAge : {tag_value.numpy().decode('UTF-8')}")

tag_id = tfio.image.dicom_tags.PatientsSex
tag_value = tfio.image.decode_dicom_data(image_bytes, tag_id)
print(f"PatientsSex : {tag_value....
site/zh-cn/io/tutorials/dicom.ipynb
tensorflow/docs-l10n
apache-2.0
Products
cd git-repos/sr320.github.io/jupyter/
ls
mkdir analyses/$(date +%F)
for i in ("1_ATCACG", "2_CGATGT", "3_TTAGGC", "4_TGACCA", "5_ACAGTG", "6_GCCAAT", "7_CAGATC", "8_ACTTGA"):
    !cp /Volumes/caviar/wd/2016-10-11/mkfmt_{i}.txt analyses/$(date +%F)/mkfmt_{i}.txt
!head analyses/$(date +%F)/*
jupyter/Olurida/Fidalgo-SIbs-postbsmap.ipynb
sr320/sr320.github.io
mit
Model parameters Set use_toy_data to True for toy experiments. This will train the network on two unique examples. The real dataset is a morphological reinflection task: Hungarian nouns in the instrumental case. Hungarian features both vowel harmony and assimilation. A few examples are listed here (capitalization is adde...
PROJECT_DIR = "../../"
use_toy_data = False
LOG_DIR = 'logs'  # Tensorboard log directory

if use_toy_data:
    batch_size = 8
    embedding_dim = 5
    cell_size = 32
    max_len = 6
else:
    batch_size = 64
    embedding_dim = 20
    cell_size = 128
    max_len = 33

use_attention = True
use_bidirectional_encode...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Download data if necessary The input data is expected in the following format:

~~~
i n p u t 1 TAB o u t p u t 1
i n p u t 2 TAB o u t p u t 2
~~~

Each line contains a single input-output pair separated by a TAB. Tokens are space-separated.
if use_toy_data:
    input_fn = 'toy_input.txt'
    with open(input_fn, 'w') as f:
        f.write('a b c\td e f d e f\n')
        f.write('d e f\ta b c a b c\n')
else:
    DATA_DIR = '../../data/'
    input_fn = 'instrumental.full.train'
    input_fn = os.path.join(DATA_DIR, input_fn)
    if not os.path.exists(input_f...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
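A line in this format can be parsed by splitting on the tab and then on whitespace. A small sketch using the toy pair written above:

```python
def parse_pair(line):
    """Split a 'tokens TAB tokens' line into (input_tokens, output_tokens)."""
    src, tgt = line.rstrip("\n").split("\t")
    return src.split(), tgt.split()

src, tgt = parse_pair("a b c\td e f d e f\n")
print(src)  # ['a', 'b', 'c']
print(tgt)  # ['d', 'e', 'f', 'd', 'e', 'f']
```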
Load and preprocess data
class Dataset(object):
    PAD = 0
    SOS = 1
    EOS = 2
    UNK = 3
    #src_vocab = ['PAD', 'UNK']
    constants = ['PAD', 'SOS', 'EOS', 'UNK']
    hu_alphabet = list("aábcdeéfghiíjklmnoóöőpqrstuúüűvwxyz-+._")

    def __init__(self, fn, config, src_alphabet=None, tgt_alphabet=None):
        self.config = confi...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Create model Embedding The input and output embeddings are the same.
class Config(object):
    default_fn = os.path.join(
        PROJECT_DIR, "config", "seq2seq", "default.yaml"
    )

    @staticmethod
    def load_defaults(fn=default_fn):
        with open(fn) as f:
            return yaml.load(f)

    @classmethod
    def from_yaml(cls, fn):
        params = yaml.load(fn)
        ...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Encoder
with tf.variable_scope("encoder"):
    if use_bidirectional_encoder:
        fw_cell = tf.nn.rnn_cell.BasicLSTMCell(cell_size)
        fw_cell = tf.contrib.rnn.DropoutWrapper(fw_cell, input_keep_prob=0.8)
        bw_cell = tf.nn.rnn_cell.BasicLSTMCell(cell_size)
        bw_cell = tf.contrib.rnn.DropoutWrapper(bw_c...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Decoder
with tf.variable_scope("decoder", dtype="float32") as scope:
    if use_bidirectional_encoder:
        decoder_cells = []
        for i in range(2):
            decoder_cell = tf.contrib.rnn.BasicLSTMCell(cell_size)
            decoder_cell = tf.contrib.rnn.DropoutWrapper(decoder_cell, input_keep_prob=0.8)
            ...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Loss and training operations
with tf.variable_scope("train"):
    if is_time_major:
        logits = tf.transpose(logits, [1, 0, 2])
    crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=dataset.tgt_out_ids, logits=logits)
    target_weights = tf.sequence_mask(dataset.tgt_size, tf.shape(logits)[1],
                                       tf.float32)
    ...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Greedy decoder for inference
with tf.variable_scope("greedy_decoder"):
    g_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
        embedding, tf.fill([dataset.config.batch_size], dataset.SOS),
        dataset.EOS)
    g_decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, g_helper,
                                               decoder_initial_state,
                                               ...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Beam search decoder
if use_attention is False:
    with tf.variable_scope("beam_search"):
        beam_width = 4
        start_tokens = tf.fill([config.batch_size], dataset.SOS)
        bm_dec_initial_state = tf.contrib.seq2seq.tile_batch(
            encoder_state, multiplier=beam_width)
        bm_decoder = tf.contrib.seq2seq.BeamSearch...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Starting session
#sess = tf.Session(config=tf.ConfigProto(device_count={'GPU': 0}))
sess = tf.Session()
dataset.run_initializers(sess)
sess.run(tf.global_variables_initializer())
merged_summary = tf.summary.merge_all()
writer = tf.summary.FileWriter(os.path.join(LOG_DIR, 's2s_sandbox', 'tmp'))
writer.add_graph(sess.graph)
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Training
%%time
def train(epochs, logstep, lr):
    print("Running {} epochs with learning rate {}".format(epochs, lr))
    for i in range(epochs):
        _, s = sess.run([update, merged_summary],
                        feed_dict={learning_rate: lr, max_global_norm: 5.0})
        l = sess.run(loss)
        writer.add_summary(s, i)
        if i % lo...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Inference
inv_vocab = {i: v for i, v in enumerate(dataset.tgt_vocab)}
inv_vocab[-1] = 'UNK'
skip_symbols = ('PAD',)

def decode_ids(input_ids, output_ids):
    decoded = []
    for sample_i in range(output_ids.shape[0]):
        input_sample = input_ids[sample_i]
        output_sample = output_ids[sample_i]
        input_decoded...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Beam search decoding
if use_attention is False:
    all_decoded = []
    for beam_i in range(beam_width):
        inputs = []
        all_decoded.append([])
        decoded = decode_ids(input_ids, bm_output_ids[:, :, beam_i])
        for dec in decoded:
            all_decoded[-1].append(dec[1])
            inputs.append(dec[0])
        print('...
notebooks/sandbox/seq2seq_attention.ipynb
juditacs/morph-segmentation-experiments
mit
Data-MC comparisons

Table of contents:
- Data preprocessing
- Comparison of different sequential feature selections
- Serialize feature selection algorithm
import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')

from __future__ import division, print_function
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from icecube.weighting.weighting import from_simprod
import compositi...
notebooks/legacy/lightheavy/data-MC-comparisons.ipynb
jrbourbeau/cr-composition
mit
Data preprocessing

1. Load simulation/data dataframe and apply specified quality cuts
2. Extract desired features from dataframe
3. Get separate testing and training datasets
4. Feature selection

Load simulation, format feature and target matrices
df_sim = comp.load_dataframe(datatype='sim', config='IC79')
df_data = comp.load_dataframe(datatype='data', config='IC79')
n_sim = len(df_sim)
n_data = len(df_data)
print('{} simulation events'.format(n_sim))
print('{} data events'.format(n_data))

beta_bins = np.linspace(1.4, 9.5, 75)
plotting.make_verification_plot(df_...
notebooks/legacy/lightheavy/data-MC-comparisons.ipynb
jrbourbeau/cr-composition
mit
Fixing Data Types
from datetime import datetime as dt

# Takes a date as a string, and returns a Python datetime object.
# If there is no date given, returns None.
def parse_date(date):
    if date == '':
        return None
    else:
        return dt.strptime(date, '%Y-%m-%d')

# Takes a string which is either an empty string or r...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Note, when running the above cells, that we are actively changing the contents of our data variables. If you try to run these cells multiple times in the same session, an error will occur. Investigating the Data
#####################################
#                 2                 #
#####################################

## Find the total number of rows and the number of unique students (account keys)
## in each table.

def get_unique_keys(a_list, the_key):
    a_set = set()
    try:
        for item in a_list:
            a...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Problems in the Data
#####################################
#                 3                 #
#####################################

## Rename the "acct" column in the daily_engagement table to "account_key".

# actually I modified the file
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Missing Engagement Records
#####################################
#                 4                 #
#####################################

## Find any one student enrollments where the student is missing from the daily
## engagement table. Output that enrollment.

notEngCount = 0
for enrollment in enrollments:
    student = enrollment['account...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Checking for More Problem Records
#####################################
#                 5                 #
#####################################

## Find the number of surprising data points (enrollments missing from
## the engagement table) that remain, if any.

num_problem_students = 0
for enrollment in enrollments:
    student = enrollment['accoun...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Tracking Down the Remaining Problems
# Create a set of the account keys for all Udacity test accounts
udacity_test_accounts = set()
for enrollment in enrollments:
    if enrollment['is_udacity']:
        udacity_test_accounts.add(enrollment['account_key'])
len(udacity_test_accounts)

# Given some data with an account_key field, removes any records corresp...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Refining the Question
#####################################
#                 6                 #
#####################################

## Create a dictionary named paid_students containing all students who either
## haven't canceled yet or who remained enrolled for more than 7 days. The keys
## should be account keys, and the values shoul...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Getting Data from First Week
# Takes a student's join date and the date of a specific engagement record,
# and returns True if that engagement record happened within one week
# of the student joining.
def within_one_week(join_date, engagement_date):
    time_delta = engagement_date - join_date
    return time_delta.days < 7 and time_delta.days >= 0
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Exploring Student Engagement
from collections import defaultdict

# Create a dictionary of engagement grouped by student.
# The keys are account keys, and the values are lists of engagement records.
def group_data(data, key_name):
    grouped_data = defaultdict(list)
    for record in data:
        key_value = record[key_name]
        grouped...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Debugging Data Analysis Code
#####################################
#                 8                 #
#####################################

## Go through a similar process as before to see if there is a problem.
## Locate at least one surprising piece of data, output it, and take a look at it.

student_with_max_minutes = None
max_minutes = 0
c ...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Lessons Completed in First Week
#####################################
#                 9                 #
#####################################

## Adapt the code above to find the mean, standard deviation, minimum, and maximum for
## the number of lessons completed by each student during the first week. Try creating
## one or more functions to re-...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Number of Visits in First Week
######################################
#                 10                 #
######################################

## Find the mean, standard deviation, minimum, and maximum for the number of
## days each student visits the classroom during the first week.

total_first_week = sum_grouped_data(engagement_by_account, 'h...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Splitting out Passing Students
######################################
#                 11                 #
######################################

## Create two lists of engagement data for paid students in the first week.
## The first list should contain data for students who eventually pass the
## subway project, and the second list should conta...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Comparing the Two Student Groups
######################################
#                 12                 #
######################################

## Compute some metrics you're interested in and see how they differ for
## students who pass the subway project vs. students who don't. A good
## starting point would be the metrics we looked at earlie...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Making Histograms
######################################
#                 13                 #
######################################

## Make histograms of the three metrics we looked at earlier for both
## students who passed the subway project and students who didn't. You
## might also want to make histograms of any other metrics yo...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
Improving Plots and Sharing Findings
######################################
#                 14                 #
######################################

## Make a more polished version of at least one of your visualizations
## from earlier. Try importing the seaborn library to make the visualization
## look better, adding axis labels and a title, and ch...
lesson01/L1_Starter_Code.ipynb
ulitosCoder/DataAnalysis
gpl-2.0
The intuitive specification Usually, hierarchical models are specified in a centered way. In a regression model, individual slopes would be centered around a group mean with a certain group variance, which controls the shrinkage:
with pm.Model() as hierarchical_model_centered:
    # Hyperpriors for group nodes
    mu_a = pm.Normal("mu_a", mu=0.0, sd=100**2)
    sigma_a = pm.HalfCauchy("sigma_a", 5)
    mu_b = pm.Normal("mu_b", mu=0.0, sd=100**2)
    sigma_b = pm.HalfCauchy("sigma_b", 5)

    # Intercept for each county, distributed around group...
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
I have seen plenty of traces with terrible convergence, but this one might look fine to the unassuming eye. Perhaps sigma_b has some problems, so let's look at the Rhat:
print("Rhat(sigma_b) = {}".format(pm.diagnostics.gelman_rubin(hierarchical_centered_trace)["sigma_b"]))
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
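The Gelman-Rubin statistic compares between-chain and within-chain variance; values near 1 indicate the chains agree on where the posterior mass is. A minimal pure-Python sketch of the classic (non-rank-normalized) computation, not PyMC3's exact implementation:

```python
import math

def gelman_rubin(chains):
    """Sketch of the classic Rhat for m chains of n draws each:
    compare between-chain variance (B) with within-chain variance (W)."""
    m = len(chains)
    n = len(chains[0])
    chain_means = [sum(c) / n for c in chains]
    grand_mean = sum(chain_means) / m
    B = n / (m - 1) * sum((cm - grand_mean) ** 2 for cm in chain_means)
    W = sum(sum((x - cm) ** 2 for x in c) / (n - 1)
            for c, cm in zip(chains, chain_means)) / m
    var_plus = (n - 1) / n * W + B / n
    return math.sqrt(var_plus / W)

# Two toy chains that roughly agree give an Rhat near 1;
# shifting one chain far away would inflate it well above 1.
chains = [[0.0, 1.0, -1.0, 0.5, -0.5], [0.1, 0.9, -1.1, 0.4, -0.4]]
print(round(gelman_rubin(chains), 3))
```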
Not too bad -- well below 1.01. I used to think this wasn't a big deal but Michael Betancourt in his StanCon 2017 talk makes a strong point that it is actually very problematic. To understand what's going on, let's take a closer look at the slopes b and their group variance (i.e. how far they are allowed to move from t...
fig, axs = plt.subplots(nrows=2)
axs[0].plot(hierarchical_centered_trace.get_values("sigma_b", chains=1), alpha=0.5)
axs[0].set(ylabel="sigma_b")
axs[1].plot(hierarchical_centered_trace.get_values("b", chains=1), alpha=0.5)
axs[1].set(ylabel="b");
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
sigma_b seems to drift into this area of very small values and get stuck there for a while. This is a common pattern and the sampler is trying to tell you that there is a region in space that it can't quite explore efficiently. While stuck down there, the slopes b_i become all squished together. We've entered The Funn...
x = pd.Series(hierarchical_centered_trace["b"][:, 75], name="slope b_75")
y = pd.Series(hierarchical_centered_trace["sigma_b"], name="slope group variance sigma_b")
sns.jointplot(x, y, ylim=(0, 0.7));
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
This makes sense: as the slope group variance goes to zero (or, said differently, as we apply maximum shrinkage), individual slopes are not allowed to deviate from the slope group mean, so they all collapse to the group mean. While this property of the posterior in itself is not problematic, it makes the job extremely di...
with pm.Model() as hierarchical_model_non_centered:
    # Hyperpriors for group nodes
    mu_a = pm.Normal("mu_a", mu=0.0, sd=100**2)
    sigma_a = pm.HalfCauchy("sigma_a", 5)
    mu_b = pm.Normal("mu_b", mu=0.0, sd=100**2)
    sigma_b = pm.HalfCauchy("sigma_b", 5)

    # Before:
    # a = pm.Normal('a', mu=mu_a, sd=si...
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
Pay attention to the definitions of a_offset, a, b_offset, and b and compare them to before (commented out). What's going on here? It's pretty neat actually. Instead of saying that our individual slopes b are normally distributed around a group mean (i.e. modeling their absolute values directly), we can say that they a...
# Inference button (TM)!
with hierarchical_model_non_centered:
    hierarchical_non_centered_trace = pm.sample(draws=5000, tune=1000)[1000:]
pm.traceplot(hierarchical_non_centered_trace, varnames=["sigma_b"]);
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
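The reparameterization itself is plain arithmetic: drawing offset ~ N(0, 1) and setting b = mu + sigma * offset yields the same distribution as drawing b ~ N(mu, sigma) directly, but the sampler now explores the well-behaved standard-normal offsets. A quick check with Python's random module (seeded, toy values for mu_b and sigma_b):

```python
import random
import statistics

random.seed(0)
mu_b, sigma_b = 2.0, 0.5  # toy group mean and group standard deviation

# Non-centered: sample standard-normal offsets, then scale and shift.
offsets = [random.gauss(0.0, 1.0) for _ in range(100_000)]
b = [mu_b + sigma_b * off for off in offsets]

print(round(statistics.mean(b), 2))   # close to mu_b
print(round(statistics.stdev(b), 2))  # close to sigma_b
```

The scale parameter sigma_b only enters as a deterministic transform, which is exactly why a tiny sigma_b no longer traps the sampler in a funnel.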
That looks much better as also confirmed by the joint plot:
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)
x = pd.Series(hierarchical_centered_trace["b"][:, 75], name="slope b_75")
y = pd.Series(hierarchical_centered_trace["sigma_b"], name="slope group variance sigma_b")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", ylabel="sigma_b", xlabel="b_75")
x = pd.S...
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
To really drive this home, let's also compare the sigma_b marginal posteriors of the two models:
pm.kdeplot(
    np.stack(
        [
            hierarchical_centered_trace["sigma_b"],
            hierarchical_non_centered_trace["sigma_b"],
        ]
    ).T
)
plt.axvline(hierarchical_centered_trace["sigma_b"].mean(), color="b", linestyle="--")
plt.axvline(hierarchical_non_centered_trace["sigma_b"].mean(), color="...
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
That's crazy -- there's a large region of very small sigma_b values that the sampler could not even explore before. In other words, our previous inferences ("Centered") were severely biased towards higher values of sigma_b. Indeed, if you look at the previous blog post the sampler never even got stuck in that low regio...
x = pd.Series(hierarchical_non_centered_trace["b_offset"][:, 75], name="slope b_offset_75") y = pd.Series(hierarchical_non_centered_trace["sigma_b"], name="slope group variance sigma_b") sns.jointplot(x, y, ylim=(0, 0.7))
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
This is the space the sampler sees; you can see how the funnel is flattened out. We can freely change the (relative) slope offset parameters even if the slope group variance is tiny as it just acts as a scaling parameter. Note that the funnel is still there -- it's a perfectly valid property of the model -- but the sam...
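The "scaling parameter" view can be illustrated by sampling from the prior (a sketch that uses a half-normal scale prior purely as a stand-in for the model's HalfCauchy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

sigma_b = np.abs(rng.normal(size=n))  # stand-in for the group-scale prior
b_offset = rng.normal(size=n)         # non-centered coordinate
b = sigma_b * b_offset                # centered coordinate (mu_b = 0)

# In centered coordinates, |b| is strongly tied to sigma_b (the funnel);
# in non-centered coordinates, the offset is independent of sigma_b.
print(np.corrcoef(np.abs(b), sigma_b)[0, 1])         # clearly positive
print(np.corrcoef(np.abs(b_offset), sigma_b)[0, 1])  # close to zero
```

This is only the prior geometry, not the posterior, but it shows how the offset coordinate decouples from the scale that sigma_b merely stretches.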
with hierarchical_model_centered: mode = pm.find_MAP() mode["b"] np.exp(mode["sigma_b_log_"])
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
As you can see, the slopes are all identical and the group slope variance is effectively zero. The reason is again related to the funnel. The MAP only cares about the probability density which is highest at the bottom of the funnel. But if you could only choose one point in parameter space to summarize the posterior a...
hierarchical_non_centered_trace["b"].mean(axis=0) hierarchical_non_centered_trace["sigma_b"].mean(axis=0)
notebooks/misc/linreg_hierarchical_non_centered_pymc3.ipynb
probml/pyprobml
mit
You can pass in standard date formats; pandas will convert them accordingly.
new_years_dinner = pd.Timestamp("2020-01-01 19:00") new_years_dinner
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
We can also create relative time information
time_needed_to_sober_up = pd.Timedelta("1 day") time_needed_to_sober_up
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
We can also do calculations with those objects.
completely_sober = new_years_dinner + time_needed_to_sober_up completely_sober
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
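The reverse also works: subtracting two Timestamps yields a Timedelta (a made-up example):

```python
import pandas as pd

# Subtracting two Timestamps gives the Timedelta between them.
party_start = pd.Timestamp("2019-12-31 20:00")
party_end = pd.Timestamp("2020-01-01 04:00")
duration = party_end - party_start
print(duration)  # 0 days 08:00:00
```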
Time series We can work with a list of time-based data, too. Here we use pandas' date_range method to create such a list (with m meaning month's end).
dates = pd.DataFrame( pd.date_range("2020-03-01", periods=5, freq="m"), columns=["day"] ) dates
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
With this, we can calculate with time in the same way as above.
dates["day_after_tomorrow"] = dates['day'] + pd.Timedelta("2 days") dates
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
DateTimeProperties object The DateTimeProperties object in particular contains time-related data as attributes or methods that we can use.
dt_properties = dates['day'].dt dt_properties
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Let's take a look at some of the properties.
# this code is just for demonstration purposes and not needed in an analysis [x for x in dir(dt_properties) if not x.startswith("_")]
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
We can, e.g., call the method day_name() on a datetime series to get the name of the weekday for each date.
dt_properties.day_name()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
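Beyond day_name(), the dt accessor exposes many more properties; here are a couple of illustrative ones (the dates are chosen arbitrarily):

```python
import pandas as pd

days = pd.Series(pd.date_range("2020-03-01", periods=3, freq="D"))

print(days.dt.month.tolist())      # [3, 3, 3]
print(days.dt.dayofweek.tolist())  # Monday=0 ... Sunday=6
```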
Timestamp Series Let's work with some real data (or at least a part of it). Example Scenario The following dataset is an excerpt from the change log of a piece of software. We want to take a look at which hour of the day changes are made to the software. First try We can read in time-based datasets just like any other dataset.
change_log = pd.read_csv("datasets/change_history.csv") change_log.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Note that if we import a dataset like this, the time data will be of the generic object data type.
change_log.info()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
So we first have to convert that data into a time-based data type with pandas' to_datetime() function.
change_log['timestamp'] = pd.to_datetime(change_log['timestamp']) change_log.info()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
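If the automatic parsing guesses wrong (e.g. for day-first dates), to_datetime also accepts an explicit format string (the values here are made up):

```python
import pandas as pd

s = pd.Series(["01.03.2020", "02.03.2020"])
parsed = pd.to_datetime(s, format="%d.%m.%Y")  # day.month.year
print(parsed.iloc[0])  # 2020-03-01 00:00:00
```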
Next, we want to see at which hour of the day most changes were made. We can use the same strategies as in the previous examples to get more detailed information.
change_log['hour'] = change_log['timestamp'].dt.hour change_log.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Let's simply count the number of changes per hour.
changes_per_hour = change_log['hour'].value_counts(sort=False) changes_per_hour.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
And create a little bar chart.
changes_per_hour.plot.bar();
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
At first glance, this looks pretty fine. But there is a problem: missing data. E.g., at 3am and 5am there weren't any changes. We can handle this by using the more advanced resample functionality of pandas. This allows us to determine at which frequency we summarize time-based data. Second try: resampling time For ...
change_log = pd.read_csv("datasets/change_history.csv", parse_dates=[0], index_col=0) change_log.head() change_log['changes'] = 1 change_log.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Now we are able to apply the resample function on it with the information that we want to group our data hourly. We also have to decide what we want to do with the grouped values; here, we simply count them.
hourly_changes = change_log.resample("h").count() hourly_changes.head() hourly_changes['hour'] = hourly_changes.index.hour hourly_changes.head() changes_per_hour = hourly_changes.groupby("hour").sum() changes_per_hour.head() changes_per_hour.plot.bar();
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
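The gap-filling behavior of resample can be seen in isolation with a tiny made-up log:

```python
import pandas as pd

times = pd.to_datetime(
    ["2020-01-01 01:10", "2020-01-01 01:40", "2020-01-01 04:05"])
df = pd.DataFrame({"changes": 1}, index=times)

# Hours without any entries show up with a count of 0.
hourly = df.resample("h").count()
print(hourly["changes"].tolist())  # [2, 0, 0, 1]
```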
Display progressions
hourly_changes.head() accumulated_changes = hourly_changes[['changes']].cumsum() accumulated_changes.head() accumulated_changes.plot();
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Grouping time and data So far, we only grouped by time-based data. But what if we want to, e.g., group the weekly changes by each developer? Let's do this! Once again, we read in the dataset that we already know. We only let pandas parse the timestamp information.
change_log = pd.read_csv("datasets/change_history.csv", parse_dates=[0]) change_log.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
For this scenario, we also need some developers.
devs = pd.Series(["Alice", "Bob", "John", "Steve", "Yvonne"]) devs
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
Let's add some artificial ones to the changes and also mark each change with a separate column.
change_log['dev'] = devs.sample(len(change_log), replace=True).values change_log['changes'] = 1 change_log.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
OK, we want to group the changes per week per developer to find out the most active developer of the week (whether this makes sense is up to you to find out ;-) ). For this, we use groupby with a pandas Grouper. With the Grouper, we can say which column we want to group at which frequency (seconds, minutes, ..., years and so...
weekly_changes_per_dev = \ change_log.groupby([ pd.Grouper(key='timestamp', freq='w'), 'dev']) \ .sum() weekly_changes_per_dev.head()
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
This gives us a DataFrame that lists the number of changes per week for each developer. We sort this list to get a kind of "most active developer per week" list:
weekly_changes_per_dev.sort_values( by=['timestamp', 'changes'], ascending=[True, False])
cheatbooks/timeseries.ipynb
feststelltaste/software-analytics
gpl-3.0
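Picking out the top developer of each week from such a grouping can be sketched like this (with a tiny made-up log, so it runs standalone):

```python
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-08"]),
    "dev": ["Alice", "Alice", "Bob", "Bob"],
    "changes": 1,
})
weekly = log.groupby([pd.Grouper(key="timestamp", freq="W"), "dev"]).sum()

# idxmax per week returns the (week, dev) index tuple of the
# developer with the most changes in that week.
top = weekly.groupby(level="timestamp")["changes"].idxmax()
print([dev for _, dev in top])  # ['Alice', 'Bob']
```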
Opening a file, creating a new Dataset Let's create a new, empty netCDF file named 'data/new.nc', opened for writing. Be careful, opening a file with 'w' will clobber any existing data (unless clobber=False is used, in which case an exception is raised if the file already exists). mode='r' is the default. mode='a' ope...
try: ncfile.close() # just to be safe, make sure dataset is not already open. except: pass ncfile = netCDF4.Dataset('data/new.nc',mode='w',format='NETCDF4_CLASSIC') print(ncfile)
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
Creating dimensions The ncfile object we created is a container for dimensions, variables, and attributes. First, let's create some dimensions using the createDimension method. Every dimension has a name and a length. The name is a string that is used to specify the dimension to be used when creating a variable,...
lat_dim = ncfile.createDimension('lat', 73) # latitude axis lon_dim = ncfile.createDimension('lon', 144) # longitude axis time_dim = ncfile.createDimension('time', None) # unlimited axis (can be appended to). for dim in ncfile.dimensions.items(): print(dim)
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
Creating attributes netCDF attributes can be created just like you would for any python object. It is best to adhere to established conventions (like the CF conventions). We won't try to adhere to any specific convention here, though.
ncfile.title='My model data' print(ncfile.title)
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
Try adding some more attributes... Creating variables Now let's add some variables and store some data in them. A variable has a name, a type, a shape, and some data values. The shape of a variable is specified by a tuple of dimension names. A variable should also have some named attributes, such as 'units', tha...
# Define two variables with the same names as dimensions, # a conventional way to define "coordinate variables". lat = ncfile.createVariable('lat', np.float32, ('lat',)) lat.units = 'degrees_north' lat.long_name = 'latitude' lon = ncfile.createVariable('lon', np.float32, ('lon',)) lon.units = 'degrees_east' lon.long_na...
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
Pre-defined variable attributes (read only) The netCDF4 module provides some useful pre-defined Python attributes for netCDF variables, such as dimensions, shape, dtype, ndim. Note: since no data has been written yet, the length of the 'time' dimension is 0.
print("-- Some pre-defined attributes for variable temp:") print("temp.dimensions:", temp.dimensions) print("temp.shape:", temp.shape) print("temp.dtype:", temp.dtype) print("temp.ndim:", temp.ndim)
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
Writing data To write data to a netCDF variable object, just treat it like a numpy array and assign values to a slice.
nlats = len(lat_dim); nlons = len(lon_dim); ntimes = 3 # Write latitudes, longitudes. # Note: the ":" is necessary in these "write" statements lat[:] = -90. + (180./nlats)*np.arange(nlats) # south pole to north pole lon[:] = (180./nlats)*np.arange(nlons) # Greenwich meridian eastward # create a 3D array of random numbe...
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
You can just treat a netCDF Variable object like a numpy array and assign values to it. Variables automatically grow along unlimited dimensions (unlike numpy arrays) The above writes the whole 3D variable all at once, but you can write it a slice at a time instead. Let's add another time slice....
# create a 2D array of random numbers data_slice = np.random.uniform(low=280,high=330,size=(nlats,nlons)) temp[3,:,:] = data_slice # Appends the 4th time slice print("-- Wrote more data, temp.shape is now ", temp.shape)
examples/writing_netCDF.ipynb
Unidata/netcdf4-python
mit
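The contrast with plain numpy arrays, which do not grow, can be seen in a quick sketch:

```python
import numpy as np

# A numpy array has a fixed shape; indexing past it raises an error,
# whereas a netCDF variable with an unlimited dimension would grow.
a = np.zeros((3, 2, 2))
try:
    a[3, :, :] = 1.0
except IndexError:
    print("numpy array cannot grow past its fixed shape")
```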