Listing Unique Values and Counting Them | workerdf['Departman'].unique()
workerdf['Departman'].nunique() | _____no_output_____ | CC0-1.0 | pandas/3.0.pandas_methods_features.ipynb | enesonmez/data-science-tutorial-turkish |
How Many of Each Value Does the Column Contain? | workerdf['Departman'].value_counts()
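The three calls above (`unique`, `nunique`, `value_counts`) can be checked on a small stand-alone frame. The `workerdf` frame itself is defined earlier in the notebook; the `df` below is a hypothetical stand-in with the same `'Departman'` column:

```python
import pandas as pd

df = pd.DataFrame({'Departman': ['IT', 'HR', 'IT', 'Sales', 'HR', 'IT']})

print(df['Departman'].unique())        # distinct values, in order of first appearance
print(df['Departman'].nunique())       # number of distinct values
print(df['Departman'].value_counts())  # per-value counts, largest first
```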
Applying a Function to Values | workerdf['Maas'].apply(lambda maas : maas*0.66)
Are There Null Values in the DataFrame? | workerdf.isnull()
Pivot Table | characters = {'Karakter Sınıfı':['South Park','South Park','Simpson','Simpson','Simpson'],
'Karakter Ismi':['Cartman','Kenny','Homer','Bart','Bart'],
'Puan':[9,10,50,20,10]}
dfcharacters = pd.DataFrame(characters)
dfcharacters
dfcharacters.pivot_table(values='Puan',index=['Karakter Sınıfı','Karakter Ismi'],aggfunc=np.sum)
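The pivot-table cell relies on `pandas` and `numpy` having been imported in earlier cells of the notebook (not shown in this excerpt). A self-contained version of the same computation, which collapses Bart's two rows into a single summed score:

```python
import numpy as np
import pandas as pd

characters = {'Karakter Sınıfı': ['South Park', 'South Park', 'Simpson', 'Simpson', 'Simpson'],
              'Karakter Ismi':   ['Cartman', 'Kenny', 'Homer', 'Bart', 'Bart'],
              'Puan':            [9, 10, 50, 20, 10]}
dfcharacters = pd.DataFrame(characters)

# Sum 'Puan' within each (class, name) pair; Bart's 20 and 10 become 30.
pt = dfcharacters.pivot_table(values='Puan',
                              index=['Karakter Sınıfı', 'Karakter Ismi'],
                              aggfunc=np.sum)
print(pt)
```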
Sorting Values by a Specific Column | workerdf.sort_values(by='Maas', ascending=False)
Duplicate Data | employees = [('Stuti', 28, 'Varanasi'),
('Saumya', 32, 'Delhi'),
('Aaditya', 25, 'Mumbai'),
('Saumya', 32, 'Delhi'),
('Saumya', 32, 'Delhi'),
('Saumya', 32, 'Mumbai'),
('Aaditya', 40, 'Dehradun'),
('Seema', 32, 'Delhi')]
df = pd.DataFrame(employees, columns = ['Name', 'Age', 'City'])
duplicate = df[df.duplicated()]
print("Duplicate Rows :")
duplicate
duplicate = df[df.duplicated('City')]
print("Duplicate Rows based on City :")
duplicate
df.drop_duplicates()
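`drop_duplicates` also accepts `keep` and `subset` parameters, mirroring the `df.duplicated('City')` call above. A minimal sketch on toy data (not the notebook's `employees` frame):

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Stuti', 'Saumya', 'Saumya', 'Saumya'],
                   'Age':  [28, 32, 32, 40]})

full_dupes = df[df.duplicated()]   # rows identical to an earlier row
print(len(full_dupes))             # the second ('Saumya', 32) row

deduped = df.drop_duplicates()     # keeps the first of each duplicate group
print(len(deduped))

# subset= restricts the comparison to selected columns;
# keep='last' retains the final occurrence instead of the first.
by_name = df.drop_duplicates(subset='Name', keep='last')
print(by_name)
```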
Newswires Classification with Reuters Imports | import numpy as np # Numpy
from matplotlib import pyplot as plt # Matplotlib
import keras # Keras
import pandas as pd # Pandas
from keras.datasets import reuters # Reuters Dataset
from keras.utils.np_utils import to_categorical # Categorical Classifier
import random # Random | _____no_output_____ | Apache-2.0 | ML Problems/Newswires Classification with Reuters - DLP/Models/Newswires_Classification_with_Reuters.ipynb | keivanipchihagh/Intro_to_DS_and_ML |
Load dataset | (train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words = 10000)
print('Size:', len(train_data))
print('Training Data:', train_data[0])
Get the feel of data | def decode(index): # Decoding the sequential integers into the corresponding words
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in test_data[index]])  # decode the requested sample, not always sample 0
return decoded_newswire
print("Decoded test data sample [0]: ", decode(0))
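The `i - 3` offset in `decode` exists because the Keras Reuters loader reserves indices 0–2 for padding/start-of-sequence/unknown markers, so the stored indices are word ranks shifted up by 3. A pure-Python sketch of the same reversal using a hypothetical two-word index (the real one comes from `reuters.get_word_index()`):

```python
# Hypothetical miniature word index.
word_index = {'the': 1, 'market': 2}
reverse_word_index = {value: key for (key, value) in word_index.items()}

# A sequence as Keras encodes it: word ranks shifted up by 3;
# anything not found maps to '?'.
sequence = [4, 5, 99]
decoded = ' '.join(reverse_word_index.get(i - 3, '?') for i in sequence)
print(decoded)  # → 'the market ?'
```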
Data Prep (One-Hot Encoding) | def vectorize_sequences(sequences, dimension = 10000): # Encoding the integer sequences into a binary matrix
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
train_data = vectorize_sequences(train_data)
test_data = vectorize_sequences(test_data)
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
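`vectorize_sequences` produces a multi-hot matrix: row `i` has a 1 in every column whose index appears in sequence `i`. This can be verified on a toy input with a tiny dimension:

```python
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

toy = vectorize_sequences([[0, 2], [1, 1]], dimension=3)
print(toy)
# Repeated indices (the two 1s in the second sequence) still yield a single 1.
```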
Building the model | model = keras.models.Sequential()
model.add(keras.layers.Dense(units = 64, activation = 'relu', input_shape = (10000,)))
model.add(keras.layers.Dense(units = 64, activation = 'relu'))
model.add(keras.layers.Dense(units = 46, activation = 'softmax'))
model.compile( optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.summary()
Training the model | x_val = train_data[:1000]
train_data = train_data[1000:]
y_val = train_labels[:1000]
train_labels = train_labels[1000:]
history = model.fit(train_data, train_labels, batch_size = 512, epochs = 10, validation_data = (x_val, y_val), verbose = False)
Evaluating the model | result = model.evaluate(test_data, test_labels)  # evaluate on the held-out test set, not the training data
print('Loss:', result[0])
print('Accuracy:', result[1] * 100)
Statistics | epochs = range(1, len(history.history['loss']) + 1)
plt.plot(epochs, history.history['loss'], 'b', label = 'Training Loss')
plt.plot(epochs, history.history['val_loss'], 'r', label = 'Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf()
plt.plot(epochs, history.history['accuracy'], 'b', label = 'Training Accuracy')
plt.plot(epochs, history.history['val_accuracy'], 'r', label = 'Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
Making predictions | prediction_index = random.randint(0, len(test_data) - 1)  # randint is inclusive at both ends
prediction_data = test_data[prediction_index]
decoded_prediction_data = decode(prediction_index)
# Info
print('Random prediction index:', prediction_index)
print('Original prediction Data:', prediction_data)
print('Decoded prediction Data:', decoded_prediction_data)
print('Expected prediction label:', np.argmax(test_labels[prediction_index]))
# Prediction
predictions = model.predict(test_data)
print('Prediction index: ', np.argmax(predictions[prediction_index]))
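For each newswire, `predict` returns a length-46 probability vector (one entry per topic, from the softmax output layer), and the predicted class is its argmax. A toy illustration with a hypothetical 4-class output:

```python
import numpy as np

# Hypothetical softmax outputs for two samples over 4 classes.
probs = np.array([[0.1, 0.7, 0.1, 0.1],
                  [0.2, 0.2, 0.5, 0.1]])
predicted_classes = np.argmax(probs, axis=1)
print(predicted_classes)  # [1 2]
```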
Interpreting text models: IMDB sentiment analysis This notebook loads a pretrained CNN model for sentiment analysis on the IMDB dataset. It makes predictions on test samples and interprets those predictions using the integrated gradients method. The model was trained using the open-source sentiment analysis tutorial described in: https://github.com/bentrevett/pytorch-sentiment-analysis/blob/master/4%20-%20Convolutional%20Sentiment%20Analysis.ipynb**Note:** Before running this tutorial, please install: - spacy package, and its NLP modules for English language (https://spacy.io/usage)- sentencepiece (https://pypi.org/project/sentencepiece/) | import spacy
import torch
import torchtext
import torchtext.data
import torch.nn as nn
import torch.nn.functional as F
from torchtext.vocab import Vocab
from captum.attr import LayerIntegratedGradients, TokenReferenceBase, visualization
nlp = spacy.load('en')  # spaCy 2.x model shortcut; in spaCy 3+ use spacy.load('en_core_web_sm')
device = torch.device("cuda:5" if torch.cuda.is_available() else "cpu") | _____no_output_____ | Apache-2.0 | IMDB_TorchText_Interpret.ipynb | matthiaszimmermann/pytorch_torchtext_captum |
The dataset used for training this model can be found in: https://ai.stanford.edu/~amaas/data/sentiment/Redefining the model in order to be able to load it. | class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [sent len, batch size]
#text = text.permute(1, 0)
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conved_n = [batch size, n_filters, sent len - filter_sizes[n] + 1]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
Loads the pretrained model and sets it to eval mode. The model is provided in the 'models' folder. Download source: https://github.com/pytorch/captum/blob/master/tutorials/models/imdb-model-cnn.pt | model = torch.load('models/imdb-model-cnn.pt')
model.eval()
model = model.to(device) | /opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.sparse.Embedding' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.container.ModuleList' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.linear.Linear' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.activation.ReLU' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/opt/anaconda3/lib/python3.7/site-packages/torch/serialization.py:593: SourceChangeWarning: source code of class 'torch.nn.modules.dropout.Dropout' has changed. you can retrieve the original source code by accessing the object's source attribute or set `torch.nn.Module.dump_patches = True` and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
Load a small subset of test data using torchtext from IMDB dataset. | TEXT = torchtext.data.Field(lower=True, tokenize='spacy')
Label = torchtext.data.LabelField(dtype = torch.float)
Download IMDB file 'aclImdb_v1.tar.gz' from https://ai.stanford.edu/~amaas/data/sentiment/ into a 'data' subfolder where this notebook is saved. Then unpack the file using 'tar -xzf aclImdb_v1.tar.gz' | train, test = torchtext.datasets.IMDB.splits(text_field=TEXT,
label_field=Label,
train='train',
test='test',
path='data/aclImdb')
test, _ = test.split(split_ratio = 0.04)
len(test.examples)
# expected result: 1000
Loading and setting up vocabulary for word embeddings using torchtext. | from torchtext import vocab
loaded_vectors = vocab.GloVe(name='6B', dim=50)
# If you prefer to use pre-downloaded glove vectors, you can load them with the following command instead:
# loaded_vectors = torchtext.vocab.Vectors('data/glove.6B.50d.txt')
# source for downloading: https://github.com/uclnlp/inferbeddings/tree/master/data/glove
TEXT.build_vocab(train, vectors=loaded_vectors, max_size=len(loaded_vectors.stoi))
TEXT.vocab.set_vectors(stoi=loaded_vectors.stoi, vectors=loaded_vectors.vectors, dim=loaded_vectors.dim)
Label.build_vocab(train)
print('Vocabulary Size: ', len(TEXT.vocab))
# expected result: 101982 | Vocabulary Size: 101982
Define the padding token. The padding token will also serve as the reference/baseline token used for the application of the Integrated Gradients. The padding token is used for this since it is one of the most commonly used references for tokens.This is then used with the Captum helper class `TokenReferenceBase` further down to generate a reference for each input text using the number of tokens in the text and a reference token index. | PAD = 'pad'
PAD_INDEX = TEXT.vocab.stoi[PAD]
print(PAD, PAD_INDEX) | pad 6976
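`TokenReferenceBase.generate_reference` (used further below) is conceptually just a sequence filled with the reference index; a pure-Python sketch of that behavior, using the index printed above:

```python
pad_index = 6976       # the index printed above for the token 'pad'
sequence_length = 7    # min_len used later in interpret_sentence

# Conceptual equivalent of token_reference.generate_reference(sequence_length, ...):
reference = [pad_index] * sequence_length
print(reference)
```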
Let's create an instance of `LayerIntegratedGradients` using the forward function of our model and the embedding layer. This instance of layer integrated gradients will be used to interpret movie rating reviews. Layer Integrated Gradients will allow us to assign an attribution score to each word/token embedding tensor in the movie review text. We will ultimately sum the attribution scores across all embedding dimensions for each word/token in order to attain a word/token level attribution score. Note that we can also use the `IntegratedGradients` class instead; however, in that case we need to precompute the embeddings and wrap the Embedding layer with the `InterpretableEmbeddingBase` module. This is necessary because we cannot perform input scaling and subtraction on the level of word/token indices and need access to the embedding layer. | lig = LayerIntegratedGradients(model, model.embedding)
In the cell below, we define a generic function that generates attributions for each movie rating and stores them in a list using the `VisualizationDataRecord` class. This will ultimately be used for visualization purposes. | def interpret_sentence(model, sentence, min_len = 7, label = 0):
# create input tensor from sentence
text_list = sentence_to_wordlist(sentence, min_len)
text_tensor, reference_tensor = wordlist_to_tensors(text_list)
# apply model forward function with sigmoid
model.zero_grad()
pred = torch.sigmoid(model(text_tensor)).item()
pred_ind = round(pred)
# compute attributions and approximation delta using layer integrated gradients
attributions, delta = lig.attribute(text_tensor, reference_tensor, \
n_steps=500, return_convergence_delta = True)
print('pred: ', Label.vocab.itos[pred_ind], '(', '%.2f'%pred, ')', ', delta: ', abs(delta))
attributions = attributions.sum(dim=2).squeeze(0)
attributions = attributions / torch.norm(attributions)
attributions = attributions.cpu().detach().numpy()
# create and return data visualization record
return visualization.VisualizationDataRecord(
attributions,
pred,
Label.vocab.itos[pred_ind],
Label.vocab.itos[label],
Label.vocab.itos[1],
attributions.sum(),
text_list,
delta)
# add_attributions_to_visualizer(attributions, text_list, pred, pred_ind, label, delta, vis_data_records_ig)
def sentence_to_wordlist(sentence, min_len = 7):
# convert sentence into list of word/tokens (using spacy tokenizer)
text = [tok.text for tok in nlp.tokenizer(sentence)]
# fill text up with 'pad' tokens
if len(text) < min_len:
text += [PAD] * (min_len - len(text))
return text
def wordlist_to_tensors(text):
# get list of token/word indices using the vocabulary
sentence_indices = [TEXT.vocab.stoi[t] for t in text]
# transform token indices list into torch tensor
sentence_tensor = torch.tensor(sentence_indices, device=device)
sentence_tensor = sentence_tensor.unsqueeze(0)
# create reference tensor using the padding token (one of the most frequently used tokens)
token_reference = TokenReferenceBase(reference_token_idx = PAD_INDEX)
reference_tensor = token_reference.generate_reference(len(text), device=device).unsqueeze(0)
return sentence_tensor, reference_tensor
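The attribution post-processing in `interpret_sentence` — summing over the embedding dimension, then L2-normalizing — can be illustrated with NumPy in place of torch. The shapes mirror the `(batch, tokens, embedding_dim)` tensor returned by `lig.attribute`; the values are made up:

```python
import numpy as np

# Fake attributions for 1 sentence, 4 tokens, 3 embedding dimensions.
attr = np.array([[[ 0.1,  0.2, 0.3],
                  [ 0.0,  0.0, 0.0],
                  [ 0.4,  0.4, 0.4],
                  [-0.1, -0.2, 0.0]]])

token_attr = attr.sum(axis=2).squeeze(0)            # one score per token
token_attr = token_attr / np.linalg.norm(token_attr)  # unit L2 norm
print(token_attr.shape)  # (4,)
```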
The cells below call `interpret_sentence` to interpret a couple of handcrafted review phrases. | # reset accumulated data
vis_records = []
vis_records.append(interpret_sentence(model, 'It was a fantastic performance !', label=1))
vis_records.append(interpret_sentence(model, 'Best film ever', label=1))
vis_records.append(interpret_sentence(model, 'Such a great show!', label=1))
vis_records.append(interpret_sentence(model, 'I\'ve never watched something as bad', label=0))
vis_records.append(interpret_sentence(model, 'It is a disgusting movie!', label=0))
vis_records.append(interpret_sentence(model, 'Makes a poorly convincing argument', label=0))
vis_records.append(interpret_sentence(model, 'Makes a fairly convincing argument', label=1))
vis_records.append(interpret_sentence(model, 'Skyfall is one of the best action film in recent years but is just too long', min_len=18, label=1))
| pred: pos ( 0.99 ) , delta: tensor([0.0007])
pred: pos ( 0.71 ) , delta: tensor([0.0001])
pred: pos ( 0.95 ) , delta: tensor([0.0003])
pred: neg ( 0.22 ) , delta: tensor([0.0012])
pred: neg ( 0.38 ) , delta: tensor([0.0005])
pred: neg ( 0.01 ) , delta: tensor([0.0005])
pred: pos ( 0.66 ) , delta: tensor([0.0003])
pred: pos ( 0.91 ) , delta: tensor([0.0034])
Below is an example of how we can visualize attributions for the text tokens. Feel free to visualize it differently if you choose to have a different visualization method. | vis_example = vis_records[-1]
# print(dir(vis_example))
print('raw input: ', vis_example.raw_input)
print('true class: ', vis_example.true_class)
print('pred class (prob): ', vis_example.pred_class, '(', vis_example.pred_prob, ')')
print('attr score (sum over word attributions): ', vis_example.attr_score)
print('word attributions\n', vis_example.word_attributions)
print('Visualize attributions based on Integrated Gradients')
visualization.visualize_text(vis_records)
| Visualize attributions based on Integrated Gradients
The cell above generates output similar to this: | from IPython.display import Image
Image(filename='img/sentiment_analysis.png')
The Structure and Geometry of the Human Brain[Noah C. Benson](https://nben.net/) <[nben@uw.edu](mailto:nben@uw.edu)> [eScience Institute](https://escience.washingtonn.edu/) [University of Washington](https://www.washington.edu/) [Seattle, WA 98195](https://seattle.gov/) Introduction This notebook is designed to accompany the lecture "Introduction to the Strugure and Geometry of the Human Brain" as part of the Neurohackademt 2020 curriculum. It can be run either in Neurohackademy's Jupyterhub environment, or using the `docker-compose.yml` file (see the `README.md` file for instructions).In this notebook we will examine various structural and geometric data used commonly in neuroscience. These demos will primarily use [FreeSurfer](http://surfer.nmr.mgh.harvard.edu/) subjects. In the lecture and the Neurohackademy Jupyterhub environment, we will look primarily at a subject named `nben`; however, you can alternately use the subject `bert`, which is an example subject that comes with FreeSurfer. Optionally, this notebook can be used with subject from the [Human Connectome Project (HCP)](https://db.humanconnectome.org/)--see the `README.md` file for instructions on getting credentials for use with the HCP.We will look at these data using both the [`nibabel`](https://nipy.org/nibabel/), which is an excellent core library for importing various kinds of neuroimaging data, as well as [`neuropythy`](https://github.com/noahbenson/neuropythy), which builds on `nibabel` to provide a user-friendly API for interacting with subjects. At its core, `neuropythy` is a library for interacting with neuroscientific data in the context of brain structure.This notebook itself consists of this introduction as well as four sections that follow the topic areas in the slide-deck from the lecture. These sections are intended to be explored in order. 
Libraries Before running any of the code in this notebook, we need to start by importing a few libraries and making sure we have configured those that need to be configured (mainly, `matplotlib`). | # We will need os for paths:
import os
# Numpy, Scipy, and Matplotlib are effectively standard libraries.
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.pyplot as plt
# Ipyvolume is a 3D plotting library that is used by neuropythy.
import ipyvolume as ipv
# Nibabel is the library that understands various neuroimaging file
# formats; it is also used by neuropythy.
import nibabel as nib
# Neuropythy is the main library we will be using in this notebook.
import neuropythy as ny
%matplotlib inline | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
MRI and Volumetric Data The first section of this notebook will deal with MR images and volumetric data. We will start by loading in an MRImage. We will use the same image that was visualized in the lecture (if you are not using the Jupyterhub, you won't have access to this subject, but you can use the subject `'bert'` instead).--- Load a subject. ---For starters, we will load the subject. | subject_id = 'nben'
subject = ny.freesurfer_subject(subject_id)
# If you have configured the HCP credentials and wish to use an HCP
# subject instead of nben:
#
#subject_id = 111312
#subject = ny.hcp_subject(subject_id)
The `freesurfer_subject` function returns a `neuropythy` `Subject` object. | subject
--- Load an MRImage file. ---Let's load in an image file. FreeSurfer directories contain a subdirectory `mri/` that contains all of the volumetric/image data for the subject. This includes images that have been preprocessed as well as copies of the original T1-weighted image. We will load an image called `T1.mgz`. | # This function will load data from a subject's directory using neuropythy's
# builtin ny.load() function; in most cases, this calls down to nibabel's own
# nib.load() function.
im = subject.load('mri/T1.mgz')
# For an HCP subject, use this file instead:
#im = subject.load("T1w/T1w_acpc_dc.nii.gz")
# The return value should be a nibabel image object.
im
# In fact, we could just as easily have loaded the same object using nibabel:
im_from_nibabel = nib.load(subject.path + '/mri/T1.mgz')
print('From neuropythy: ', im.get_filename())
print('From nibabel: ', im_from_nibabel.get_filename())
# And neuropythy manages this image as part of the subject-data. Neuropythy's
# name for it is 'intensity_normalized', which is due to its position as an
# output in FreeSurfer's processing pipeline.
ny_im = subject.images['intensity_normalized']
(ny_im.dataobj == im.dataobj).all()
--- Visualize some slices of the image. ---Next, we will make 2D plots of some of the image slices. Feel free to change which slices you visualize; I have just chosen some defaults. | # What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = im.dataobj[:,slice_num,:]
else:
imslice = im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
--- Visualize the 3D Image as a whole. ---Next we will use `ipyvolume` to render a 3D View of the volume. The volume plotting function is part of `ipyvolume` and has a variety of options that are beyond the scope of this demo. | # Note that this will generate a warning, which can be safely ignored.
fig = ipv.figure()
ipv.quickvolshow(subject.images['intensity_normalized'].dataobj)
ipv.show()
--- Load and visualize anatomical segments. ---FreeSurfer creates a segmentation image file called `aseg.mgz`, which we can load and use to identify ROIs. First, we will load this file and plot some slices from it. | # First load the file; any of these lines will work:
#aseg = subject.load('mri/aseg.mgz')
#aseg = nib.load(subject.path + '/mri/aseg.mgz')
aseg = subject.images['segmentation']
We can plot this as-is, but we don't know what the values in the numbers correspond to. Nonetheless, let's go ahead. This code block is the same as the block we used to plot slices above except that it uses the new image `aseg` we just loaded. | # What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
Clearly, the values in the plots above are discretized, but it's not clear what they correspond to. The map of numbers to structure names and colors can be found in the various FreeSurfer color LUT files. These are all located in the FreeSurfer home directory and end with `LUT.txt`. They are essentially spreadsheets and are loaded by `neuropythy` as `pandas.DataFrame` objects. In `neuropythy`, the LUT objects are associated with the `'freesurfer_home'` configuration variable. This has been set up automatically in the course and the `neuropythy` docker-image. | ny.config['freesurfer_home'].luts['aseg']
So suppose we want to look at left cerebral cortex. In the table, this has value 3. We can find this value in the images we are plotting and plot only it to see the ROI in each of the slices we plot. | # We want to plot left cerebral cortex (label ID = 3, per the LUT)
label = 3
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Plot only the values that are equal to the label ID.
imslice = (imslice == label)
ax.imshow(imslice, cmap='gray')
# Turn off labels:
ax.axis('off')
By plotting the LH cortex specifically, we can see that LEFT is in the direction of increasing rows (down the image slices, if you used `axis = 2`), thus RIGHT must be in the direction of decreasing rows in the image. Let's also make some images from these slices in which we replace each of the pixels in each slice with the color recommended by the color LUT. | # We are using this color LUT:
lut = ny.config['freesurfer_home'].luts['aseg']
# The axis:
axis = 2
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = aseg.dataobj[slice_num,:,:]
elif axis == 1:
imslice = aseg.dataobj[:,slice_num,:]
else:
imslice = aseg.dataobj[:,:,slice_num]
# Convert the slice into an RGBA image using the color LUT:
rgba_im = np.zeros(imslice.shape + (4,))
for (label_id, row) in lut.iterrows():
rgba_im[imslice == label_id,:] = row['color']
ax.imshow(rgba_im)
# Turn off labels:
ax.axis('off')
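The per-label recoloring loop above is fine for a handful of labels; with NumPy the same label-to-RGBA lookup can also be vectorized by building a palette array indexed by label ID. A toy sketch with a hypothetical three-entry LUT (the real colors come from the FreeSurfer LUT DataFrame):

```python
import numpy as np

# Hypothetical label -> RGBA table (not FreeSurfer's actual colors).
lut_colors = {0: (0.0, 0.0, 0.0, 0.0),
              2: (0.9, 0.9, 0.9, 1.0),   # e.g. a white-matter label
              3: (0.8, 0.3, 0.3, 1.0)}   # e.g. a cortex label

# Build a palette array whose row i is the color of label i...
palette = np.zeros((max(lut_colors) + 1, 4))
for label_id, color in lut_colors.items():
    palette[label_id] = color

# ...then recolor a whole slice with a single fancy-indexing lookup.
imslice = np.array([[0, 2],
                    [3, 3]])
rgba_im = palette[imslice]
print(rgba_im.shape)  # (2, 2, 4)
```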
Cortical Surface Data Cortical surface data is handled and represented much differently than volumetric data. This section demonstrates how to interact with cortical surface data in a Jupyter notebook, primarily using `neuropythy`.To start off, however, we will just load a surface file using `nibabel` to see what one contains.--- Load a Surface-Geometry File Using `nibabel` --- | # Each subject has a number of surface files; we will look at the
# left hemisphere, white surface.
hemi = 'lh'
surf = 'white'
# Feel free to change hemi to 'rh' for the RH and surf to 'pial'
# or 'inflated'.
# We load the surface from the subject's 'surf' directory in FreeSurfer.
# Nibabel refers to these files as "geometry" files.
filename = subject.path + f'/surf/{hemi}.{surf}'
# If you are using an HCP subject, you should instead load from this path:
#relpath = f'T1w/{subject.name}/surf/{hemi}.{surf}'
#filename = subject.pseudo_path.local_path(relpath)
# Read the file, using nibabel.
surface_data = nib.freesurfer.read_geometry(filename)
# What does this return?
surface_data
So when `nibabel` reads in one of these surface files, what we get back is an `n x 3` matrix of real numbers (the vertex coordinates) and an `m x 3` matrix of integers (the triangle indices). The `ipyvolume` module has support for plotting triangle meshes--let's see how it works. | # Extract the coordinates and triangle-faces.
(coords, faces) = surface_data
# And get the (x,y,z) from coordinates.
(x, y, z) = coords.T
# Now, plot the triangle mesh.
fig = ipv.figure()
ipv.plot_trisurf(x, y, z, triangles=faces)
# Adjust the plot limits (making them equal makes the plot look good).
ipv.pylab.xlim(-100,100)
ipv.pylab.ylim(-100,100)
ipv.pylab.zlim(-100,100)
# Generally, one must call show() with ipyvolume.
ipv.show() | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
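To make the coordinates/faces layout concrete without needing a FreeSurfer file on hand, here is a toy stand-in for what `read_geometry` returns, using a tetrahedron (four vertices, four triangles; the numbers are invented for illustration):

```python
import numpy as np

# An (n, 3) float array of vertex coordinates...
coords = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
# ...and an (m, 3) int array of triangles; each row holds three
# indices into the rows of coords.
faces = np.array([[0, 1, 2],
                  [0, 1, 3],
                  [0, 2, 3],
                  [1, 2, 3]])

# The same unpacking used for plotting above:
(x, y, z) = coords.T
print(x.shape, faces.shape)  # -> (4,) (4, 3)
```

Every face index must be a valid row of `coords`; that shared indexing is what lets all of a hemisphere's surfaces reuse one triangle table.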
--- Hemisphere (`neuropythy.mri.Cortex`) objects ---Although one can load and plot cortical surfaces with `nibabel`, `neuropythy` builds on `nibabel` by providing a framework around which the cortical surface can be represented. It includes a number of utilities related specifically to cortical surface analysis, and allows much of the power of FreeSurfer to be leveraged through simple Python data structures.To start with, we will look at our subject's hemispheres (`neuropythy.mri.Cortex` objects) and how they represent surfaces. | # Grab the hemisphere for our subject.
cortex = subject.hemis[hemi]
# Note that `cortex = subject.lh` and `cortex = subject.rh` are equivalent
# to `cortex = subject.hemis['lh']` and `cortex = subject.hemis['rh']`.
# What is cortex?
cortex | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
From this we can see which hemisphere we have selected, as well as its number of triangle faces and its number of vertices. Let's look at a few of its properties. Surfaces Each hemisphere has a number of surfaces; we can view them through the `cortex.surfaces` dictionary. | cortex.surfaces.keys()
cortex.surfaces['white_smooth'] | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
The `'white_smooth'` mesh is a well-processed mesh of the white surface that has been well-smoothed. You might notice that there is a `'midgray'` surface, even though FreeSurfer does not include a mid-gray mesh file. The `'midgray'` mesh, however, can be made by averaging the white and pial mesh vertices.Recall that all surfaces of a hemisphere have equivalent vertices and identical triangles. We can test that here. | np.array_equal(cortex.surfaces['white'].tess.faces,
cortex.surfaces['pial'].tess.faces) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
Surfaces track a large amount of data about their meshes and vertices and inherit most of the properties of hemispheres that are discussed below. In addition, surfaces uniquely carry data about cortical distances and surface areas. For example: | # The area of each of the triangle-faces in the white surface mesh, in mm^2.
cortex.surfaces['white'].face_areas
# The length of each edge in the white surface mesh, in mm.
cortex.surfaces['white'].edge_lengths
# And the edges themselves, as indices like the faces.
cortex.surfaces['white'].tess.edges | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
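Neuropythy computes these quantities for us, but the geometry behind `face_areas` and `edge_lengths` is simple enough to sketch with plain numpy on a toy mesh (a single 3-4-5 right triangle; the edge ordering here is one convention chosen for illustration):

```python
import numpy as np

# Toy mesh: one right triangle with legs of length 3 and 4.
coords = np.array([[0.0, 0.0, 0.0],
                   [3.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
faces = np.array([[0, 1, 2]])

# Gather the three corner coordinates of every face.
a = coords[faces[:, 0]]
b = coords[faces[:, 1]]
c = coords[faces[:, 2]]

# Edge lengths: the norm of the difference of each vertex pair.
edge_lengths = np.c_[np.linalg.norm(b - a, axis=1),
                     np.linalg.norm(c - b, axis=1),
                     np.linalg.norm(a - c, axis=1)]
# Face areas: half the norm of the cross product of two edge vectors.
face_areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
print(edge_lengths[0], face_areas[0])  # -> [3. 5. 4.] 6.0
```

The same vectorized formulas extend unchanged to meshes with hundreds of thousands of faces.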
Vertex Properties Properties are values assigned to each surface vertex. They can include anatomical or geometric properties, such as ROI labels (i.e., a vector of values for each vertex: `True` if the vertex is in the ROI and `False` if not), cortical thickness (in mm), the vertex surface-area (in square mm), the curvature, or data from other functional measurements, such as BOLD-time-series data or source-localized MEG data. The properties of a hemisphere are stored in the `properties` value. `Cortex.properties` is a kind of dictionary object and can generally be treated as a dictionary. One can also access property vectors via `cortex.prop(property_name)` rather than `cortex.properties[property_name]`; the former is largely short-hand for the latter. | sorted(cortex.properties.keys()) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
A few things worth noting: First, not all FreeSurfer subjects will have all of the properties listed. This is because different versions of FreeSurfer include different files, and sometimes subjects are distributed without their full set of files (e.g., to save storage space). However, rather than go and try to load all of these files right away, `neuropythy` makes place-holders for them and loads them only when first requested (this saves on loading time drastically). Accordingly, if you try to use a property whose file doesn't exist, an exception will be raised. Additionally, notice that the first several properties are for Brodmann Area labels. The ones ending in `_label` are `True` / `False` boolean labels indicating whether the vertex is in the given ROI (according to an estimation based on anatomy). The subject we are using in the Jupyterhub environment does not actually have these files included, but it does have, for example, `BA1_weight` files. The weights represent the probability that a vertex is in the associated ROI, so we can make a label from this. | ba1_label = cortex.prop('BA1_weight') >= 0.5 | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
We can now plot this property using `neuropythy`'s `cortex_plot()` function. | ny.cortex_plot(cortex.surfaces['white'], color=ba1_label) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
**Improving this plot.** While this plot shows us where the ROI is, it's rather hard to interpret. Rather, we would prefer to plot the ROI in red and the rest of the brain using a binarized curvature map. `neuropythy` supports this kind of binarized curvature map as a default underlay, so, in fact, the easiest way to accomplish this is to tell `cortex_plot` to color the surface red, but to add a vertex mask that instructs the function to *only* color the ROI vertices.Additionally, it is easier to see the inflated surface, so we will switch to that. | ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
We can optionally make this red ROI plot a little bit transparent as well. | ny.cortex_plot(cortex.surfaces['inflated'], color='r', mask=ba1_label, alpha=0.4) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
**Plotting the weight instead of the label.** Alternately, we might have wanted to plot the weight / probability of the ROI. Continuous properties like probability can be plotted using color-maps, similar to how they are plotted in `matplotlib`. | ny.cortex_plot(cortex.surfaces['inflated'], color='BA1_weight',
cmap='hot', vmin=0, vmax=1, alpha=0.6) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
**Another property.** Other properties can be very informative. For example, consider the cortical thickness property, which is stored in mm. This can show us which parts of the cortex are relatively thick or thin. | ny.cortex_plot(cortex.surfaces['inflated'], color='thickness',
cmap='hot', vmin=1, vmax=6) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
--- Interpolation (Surface to Image and Image to Surface) ---Hemisphere/Cortex objects also manage interpolation, both to/from image volumes as well as to/from the cortical surfaces of other subjects (we will demo interpolation between subjects in the last section). Here we will focus on the former: interpolation to and from images.**Cortex to Image Interpolation.**Because our subjects only have structural data and do not have functional data, we do not have anything handy to interpolate out of a volume onto a surface. So instead, we will start by interpolating from the cortex into the volume. A good property for this is the subject's cortical thickness. Thickness is difficult to calculate in the volume, so if one wants thickness data in a volume, it would typically be calculated using surface meshes then projected back into the volume. We will do that now. Note that in order to create a new image, we have to provide the interpolation method with some information about how the image is oriented and shaped. This includes two critical pieces of information: the `'image_shape'` (i.e., the `numpy.shape` of the image's array) and the `'affine'`, which is simply the affine transformation that aligns the image with the subject. Usually, it is easiest to provide this information in the form of a template image. For all kinds of subjects (HCP and FreeSurfer), an image is correctly aligned with a subject and thus the subject's cortical surfaces if its affine transformation correctly aligns it with `subject.images['brain']`. | # We need a template image; the new image will have the same shape,
# affine, image type, and header as the template image.
template_im = subject.images['brain']
# We can use just the template's header for this.
template = template_im.header
# We can alternately just provide information about the image geometry:
#template = {'image_shape': (256,256,256), 'affine': template_im.affine}
# Alternately, we can provide an actual image into which the data will
# be inserted. In this case, we would want to make a cleared-duplicate
# of the brain image (i.e. all voxels set to 0)
#template = ny.image_clear(template_im)
# All of the above templates should provide the same result.
# We are going to save the property from both hemispheres into an image.
lh_prop = subject.lh.prop('thickness')
rh_prop = subject.rh.prop('thickness')
# This may be either 'linear' or 'nearest'; for thickness 'linear'
# is probably best, but the difference will be small.
method = 'linear'
# Do the interpolation. This may take a few minutes the first time it is run.
new_im = subject.cortex_to_image((lh_prop, rh_prop), template, method=method,
# The template is integer, so we override it.
dtype='float') | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
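Neuropythy's `cortex_to_image` is doing something much more careful than this, but the basic idea of pushing per-vertex values into voxels can be sketched in a few lines of numpy (an identity affine and made-up coordinates are assumed here purely for illustration):

```python
import numpy as np

# Hypothetical inputs: vertex coordinates (already in voxel space,
# i.e. assuming an identity affine) and one property value per vertex.
coords = np.array([[1.2, 1.1, 0.9],
                   [1.4, 0.8, 1.1],
                   [3.6, 2.2, 2.0]])
prop = np.array([2.0, 4.0, 5.0])

im = np.zeros((5, 5, 5))
counts = np.zeros((5, 5, 5))
# Nearest-voxel interpolation: round each vertex to a voxel index and
# average the property values of all vertices landing in that voxel.
ijk = np.round(coords).astype(int)
for (i, j, k), v in zip(ijk, prop):
    im[i, j, k] += v
    counts[i, j, k] += 1
im = np.divide(im, counts, out=im, where=counts > 0)
print(im[1, 1, 1], im[4, 2, 2])  # -> 3.0 5.0
```

The first two toy vertices both round to voxel (1, 1, 1), so that voxel holds their mean; voxels touched by no vertex keep the value 0.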
Now that we have made this new image, let's take a look at it by plotting some slices from it, once again. | # What axis do we want to plot slices along? 0, 1, or 2 (for the first, second,
# or third 3D image axis).
axis = 2
# Which slices along this axis should we plot? These must be at least 0 and at
# most 255 (There are 256 slices in each dimension of these images).
slices = [75, 125, 175]
# Make a figure and axes using matplotlib.pyplot:
(fig, axes) = plt.subplots(1, len(slices), figsize=(5, 5/len(slices)), dpi=144)
# Plot each of the slices:
for (ax, slice_num) in zip(axes, slices):
# Get the slice:
if axis == 0:
imslice = new_im.dataobj[slice_num,:,:]
elif axis == 1:
imslice = new_im.dataobj[:,slice_num,:]
else:
imslice = new_im.dataobj[:,:,slice_num]
ax.imshow(imslice, cmap='hot', vmin=0, vmax=6)
# Turn off labels:
ax.axis('off') | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
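As an aside, the `if`/`elif` chain used above to pick a slice along a runtime-chosen axis can be collapsed with `numpy.take`, which slices along any given axis:

```python
import numpy as np

im = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# np.take(im, i, axis=a) is equivalent to indexing with i along
# axis a, whatever a happens to be at runtime.
for axis in (0, 1, 2):
    imslice = np.take(im, 1, axis=axis)
    print(imslice.shape)
# -> (3, 4), then (2, 4), then (2, 3)
```

This keeps the plotting loop free of axis-specific branches; note, though, that `take` copies the data, whereas the explicit slices above return views.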
**Image to Cortex Interpolation.** A good test of our interpolation methods is now to ensure that, when we interpolate data from the image we just created back to the cortex, we get approximately the same values. The values we interpolate back out of the volume will not be identical to the values we started with because the resolution of the image is finite, but they should be close.The `image_to_cortex()` method of the `Subject` class is capable of interpolating from an image to the cortical surface(s), based on the alignment of the image with the cortex. | (lh_prop_interp, rh_prop_interp) = subject.image_to_cortex(new_im, method=method) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
We can plot the hemispheres together to visualize the difference between the original thickness and the thickness that was interpolated into an image then back onto the cortex. | fig = ny.cortex_plot(subject.lh, surface='midgray',
color=(lh_prop_interp - lh_prop)**2,
cmap='hot', vmin=0, vmax=2)
fig = ny.cortex_plot(subject.rh, surface='midgray',
color=(rh_prop_interp - rh_prop)**2,
cmap='hot', vmin=0, vmax=2,
figure=fig)
ipv.show() | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
Intersubject Surface Alignment Comparison between multiple subjects is usually accomplished by first aligning each subject's cortical surface with that of a template surface (*fsaverage* in FreeSurfer, *fs_LR* in the HCP), then interpolating between vertices in the aligned arrangements. The alignments to the template are calculated and saved by FreeSurfer, the HCPpipelines, and various other utilities, but as of when this tutorial was written, `neuropythy` only supports the first two formats. Alignments are calculated by warping the vertices of the subject's spherical (fully inflated) hemisphere in a diffeomorphic fashion with the goal of minimizing the difference between the sulcal topology (curvature and depth) of the subject's vertices and that of the nearby *fsaverage* vertices. The process involves a number of steps, and anyone who is interested should follow up with the various documentation and papers published by the [FreeSurfer group](https://surfer.nmr.mgh.harvard.edu/). For practical purposes, it is not necessary to understand the details of this algorithm--FreeSurfer is a large, complex collection of software that has been under development for decades. However, to better understand what is produced by FreeSurfer's alignment procedure, let us start by looking at its outputs.--- Compare Subject Registrations ---To better understand the various spherical surfaces produced by FreeSurfer, let's start by plotting three spherical surfaces in 3D. The first will be the subject's "native" inflated spherical surface. The next will be the subject's "fsaverage"-aligned sphere. The last will be the *fsaverage* subject's native sphere. These spheres are accessed not through the `subject.surfaces` dictionary but through the `subject.registrations` dictionary.
This is simply a design decision--registrations and surfaces are not fundamentally different except that registrations can be used for interpolation between subjects (more below). Note that you may need to zoom out once the plot has been made. | # Get the fsaverage subject.
fsaverage = ny.freesurfer_subject('fsaverage')
# Get the hemispheres we will be examining.
fsa_hemi = fsaverage.hemis[hemi]
sub_hemi = subject.hemis[hemi]
# Next, get the three registrations we want to plot.
sub_native_reg = sub_hemi.registrations['native']
sub_fsaverage_reg = sub_hemi.registrations['fsaverage']
fsa_native_reg = fsa_hemi.registrations['native']
# We want to plot them all three together in one scene, so to do this
# we need to translate two of them a bit along the x-axis.
sub_native_reg = sub_native_reg.translate([-225,0,0])
fsa_native_reg = fsa_native_reg.translate([ 225,0,0])
# Now plot them all.
fig = ipv.figure(width=900, height=300)
ny.cortex_plot(sub_native_reg, figure=fig)
ny.cortex_plot(fsa_native_reg, figure=fig)
ny.cortex_plot(sub_fsaverage_reg, figure=fig)
ipv.show() | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
--- Interpolate Between Subjects ---Interpolation between subjects is performed through a shared registration. For a subject and the *fsaverage*, this is the subject's *fsaverage*-aligned registration and *fsaverage*'s native one. However, for two ordinary (non-template) subjects, the *fsaverage*-aligned registrations of both subjects are used. We will first show how to interpolate from a subject over to the *fsaverage*. This is a very valuable operation to be able to do, as it allows you to compute statistics of cortical surface data (such as BOLD activation data or source-localized MEG data) across subjects. | # The property we're going to interpolate over to fsaverage:
sub_prop = sub_hemi.prop('thickness')
# The method we use ('nearest' or 'linear'):
method = 'linear'
# Interpolate the subject's thickness onto the fsaverage surface.
fsa_prop = sub_hemi.interpolate(fsa_hemi, sub_prop, method=method)
# Let's make a plot of this:
ny.cortex_plot(fsa_hemi, surface='inflated',
color=fsa_prop, cmap='hot', vmin=0, vmax=6) | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
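Neuropythy's `interpolate` works on the registered spheres, but the core idea of `'nearest'` interpolation between two vertex sets can be sketched in flat 2D with brute-force distances (the coordinates and property values below are toy data, not real registrations):

```python
import numpy as np

# Two toy "registrations" sharing a coordinate space: source vertices
# carrying a property, and target vertices we want values for.
src_coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
src_prop = np.array([10.0, 20.0, 30.0])
tgt_coords = np.array([[0.9, 0.1], [0.1, 0.9]])

# Nearest-neighbor interpolation: each target vertex takes the
# property of its closest source vertex.
d2 = ((tgt_coords[:, None, :] - src_coords[None, :, :]) ** 2).sum(axis=2)
tgt_prop = src_prop[np.argmin(d2, axis=1)]
print(tgt_prop)  # -> [20. 30.]
```

`'linear'` interpolation instead blends the values of the triangle containing each target vertex; the nearest-neighbor version shown here is the one appropriate for label-like properties.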
Okay, for our last exercise, let's interpolate back from the *fsaverage* subject to our subject. It is occasionally nice to be able to plot the *fsaverage*'s average curvature map as an underlay, so let's do that. | # This time we are going to interpolate curvature from the fsaverage
# back to the subject. When the property we are interpolating is a
# named property of the hemisphere, we can actually just specify it
# by name in the interpolation call.
fsa_curv_on_sub = fsa_hemi.interpolate(sub_hemi, 'curvature')
# We can make a duplicate subject hemisphere with this new property
# so that it's easy to plot this curvature map.
sub_hemi_fsacurv = sub_hemi.with_prop(curvature=fsa_curv_on_sub)
# Great, let's see what this looks like:
ny.cortex_plot(sub_hemi_fsacurv, surface='inflated') | _____no_output_____ | CC-BY-4.0 | we-geometry-benson/class-notebook.ipynb | bastivkl/nh2020-curriculum |
Bayes' Theorem IntroductionBefore starting with *Bayes' Theorem*, we can have a look at some definitions.**Conditional Probability :**Conditional probability is the probability of one event occurring given its relationship to one or more other events. Let A and B be two interdependent events, where A has already occurred; then the probability of B will be $$ P(B|A) = P(A \cap B)/P(A) $$ **Joint Probability :**Joint probability is a statistical measure that calculates the likelihood of two events occurring together at the same point in time. $$ P(A \cap B) = P(A|B) * P(B) $$ Bayes TheoremBayes' Theorem is named after **Thomas Bayes**, who worked in the field of decision theory and whose solution to this problem was published in **1763**. Bayes' Theorem is a mathematical formula used to determine the **conditional probability** of events without knowing the **joint probability**.**Statement**If B$_{1}$, B$_{2}$, B$_{3}$,.....,B$_{n}$ are mutually exclusive and exhaustive events with P(B$_{i}$) $\not=$ 0 (i=1,2,3,...,n) of a random experiment, then for any arbitrary event A of the sample space of the above experiment with P(A)>0, we have$$ P(B_{i}|A) = P(B_{i})P(A|B_{i})/ \sum\limits_{j=1}^{n} P(B_{j})P(A|B_{j}) $$**Proof**Let S be the sample space of the random experiment. The events B$_{1}$, B$_{2}$, B$_{3}$,.....,B$_{n}$ being exhaustive,$$ S = B_{1} \cup B_{2} \cup ... \cup B_{n} $$$$ A = A \cap S = A \cap ( B_{1} \cup B_{2} \cup ... \cup B_{n}) \hspace{1cm} [\because A \subset S] $$$$ = (A \cap B_{1}) \cup (A \cap B_{2}) \cup ... \cup (A \cap B_{n}) $$Since the B$_{i}$ are mutually exclusive,$$ P(A) = P(A \cap B_{1}) + P(A \cap B_{2}) + ... + P(A \cap B_{n}) $$$$ = P(B_{1})P(A|B_{1}) + P(B_{2})P(A|B_{2}) + ... + P(B_{n})P(A|B_{n}) $$$$ = \sum\limits_{j=1}^{n} P(B_{j})P(A|B_{j}) $$Now,$$ P(A \cap B_{i}) = P(A)P(B_{i}|A) $$$$ P(B_{i}|A) = P(A \cap B_{i})/P(A) = P(B_{i})P(A|B_{i})/\sum\limits_{j=1}^{n} P(B_{j})P(A|B_{j}) $$**P(B)** is the probability of occurrence of **B**. If we know that the event **A** has already occurred, then on learning of **A**, **P(B)** is updated to **P(B|A)**. With the help of **Bayes' Theorem we can calculate P(B|A)**.**Naming Conventions :**P(A|B) : Posterior ProbabilityP(A) : Prior ProbabilityP(B|A) : LikelihoodP(B) : Evidence So, Bayes' Theorem can be restated as :$$ Posterior = Likelihood * Prior / Evidence $$ Now we will look at some example problems on Bayes' Theorem. **Example 1** : Suppose that the reliability of a Covid-19 test is specified as follows: Of the population having Covid-19, 90% of the tests detect the disease but 10% go undetected. Of the population free of Covid-19, 99% of the tests are judged Covid-19 negative but 1% are diagnosed as Covid-19 positive. From a large population of which only 0.1% have Covid-19, one person is selected at random, given the Covid-19 test, and the pathologist reports him/her as Covid-19 positive. What is the probability that the person actually has Covid-19? **Solution**Let, B$_{1}$ = The person selected actually has Covid-19. B$_{2}$ = The person selected does not have Covid-19. A = The person's Covid-19 test is diagnosed as positive. P(B$_{1}$) = 0.1% = 0.1/100 = 0.001 P(B$_{2}$) = 1-P(B$_{1}$) = 1-0.001 = 0.999 P(A|B$_{1}$) = Probability that the person tested Covid-19 positive given that he/she actually has Covid-19 = 90/100 = 0.9 P(A|B$_{2}$) = Probability that the person tested Covid-19 positive given that he/she does not have Covid-19 = 1/100 = 0.01 Required Probability = P(B$_{1}$|A) = P(B$_{1}$) * P(A|B$_{1}$) / ((P(B$_{1}$) * P(A|B$_{1}$)) + (P(B$_{2}$) * P(A|B$_{2}$))) = (0.001 * 0.9)/(0.001 * 0.9+0.999 * 0.01) = 90/1089 = 0.08264 We will now use Python to calculate the same.
| #calculate P(B1|A) given P(B1),P(A|B1),P(A|B2),P(B2)
def bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2):
p_b1_given_a=(p_b1*p_a_given_b1)/((p_b1*p_a_given_b1)+(p_b2*p_a_given_b2))
return p_b1_given_a
#P(B1)
p_b1=0.001
#P(B2)
p_b2=0.999
#P(A|B1)
p_a_given_b1=0.9
#P(A|B2)
p_a_given_b2=0.01
result=bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2)
print('P(B1|A)=% .3f %%'%(result*100))
| P(B1|A)= 8.264 %
| MIT | notebooks/Baye's Theorem Notebook.ipynb | Ritu7683/Statistics-and-Econometrics-for-Data-Science |
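Floating-point arithmetic introduces a little rounding; as a sanity check, the same computation can be done exactly with Python's `fractions` module, confirming the hand-derived value 90/1089:

```python
from fractions import Fraction

# Example 1 redone with exact rational arithmetic instead of floats.
p_b1 = Fraction(1, 1000)           # P(B1): person has Covid-19
p_b2 = 1 - p_b1                    # P(B2): person does not
p_a_given_b1 = Fraction(9, 10)     # P(A|B1): test positive if infected
p_a_given_b2 = Fraction(1, 100)    # P(A|B2): false positive rate

posterior = (p_b1 * p_a_given_b1) / (p_b1 * p_a_given_b1 + p_b2 * p_a_given_b2)
print(posterior)  # -> 10/121, i.e. 90/1089 in lowest terms
```

Converting `posterior` with `float()` recovers the 0.08264... value printed above.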
**Example 2 :** In a quiz, a contestant either guesses, cheats, or knows the answer to a multiple-choice question with four choices. The probability that he/she makes a guess is 1/3 and the probability that he/she cheats is 1/6. The probability that his answer is correct, given that he cheated, is 1/8. Find the probability that he knows the answer to the question, given that he/she answered it correctly. **Solution**Let, B$_{1}$ = Contestant guesses the answer. B$_{2}$ = Contestant cheats the answer. B$_{3}$ = Contestant knows the answer. A = Contestant answers correctly. Clearly, P(B$_{1}$) = 1/3 , P(B$_{2}$) = 1/6. Since B$_{1}$, B$_{2}$, B$_{3}$ are mutually exclusive and exhaustive events,P(B$_{1}$) + P(B$_{2}$) + P(B$_{3}$) = 1 => P(B$_{3}$) = 1 - (P(B$_{1}$) + P(B$_{2}$)) = 1-1/3-1/6 = 1/2. If B$_{1}$ has already occurred, i.e. the contestant guesses, then there are four choices out of which only one is correct.$\therefore$ the probability that he answers correctly given that he/she has made a guess is 1/4, i.e. **P(A|B$_{1}$) = 1/4**. If he knows the answer, he answers correctly with certainty, so P(A|B$_{3}$) = 1. By Bayes' Theorem,Required Probability = P(B$_{3}$|A)= P(B$_{3}$)P(A|B$_{3}$)/(P(B$_{1}$)P(A|B$_{1}$)+P(B$_{2}$)P(A|B$_{2}$)+P(B$_{3}$)P(A|B$_{3}$))= (1/2 * 1) / ((1/3 * 1/4) + (1/6 * 1/8) + (1/2 * 1)) = 24/29 | #calculate P(B3|A) given P(B1),P(A|B1),P(A|B2),P(B2),P(B3),P(A|B3)
def bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2,p_b3,p_a_given_b3):
p_b3_given_a=(p_b3*p_a_given_b3)/((p_b1*p_a_given_b1)+(p_b2*p_a_given_b2)+(p_b3*p_a_given_b3))
return p_b3_given_a
#P(B1)
p_b1=1/3
#P(B2)
p_b2=1/6
#P(B3)
p_b3=1/2
#P(A|B1)
p_a_given_b1=1/4
#P(A|B2)
p_a_given_b2=1/8
#P(A|B3)
p_a_given_b3=1
result=bayes_theorem(p_b1,p_a_given_b1,p_a_given_b2,p_b2,p_b3,p_a_given_b3)
print('P(B3|A)=% .3f %%'%(result*100))
| P(B3|A)= 82.759 %
| MIT | notebooks/Baye's Theorem Notebook.ipynb | Ritu7683/Statistics-and-Econometrics-for-Data-Science |
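The two hand-written functions above differ only in the number of hypotheses; a single general version (a small sketch, not part of the original notebook) handles any count of mutually exclusive, exhaustive hypotheses:

```python
# Bayes' theorem for n hypotheses: given priors P(B_j) and
# likelihoods P(A|B_j), return the posterior P(B_i | A).
def bayes(priors, likelihoods, i):
    evidence = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[i] * likelihoods[i] / evidence

# Example 1: two hypotheses.
print(bayes([0.001, 0.999], [0.9, 0.01], 0))      # -> 0.0826... (= 90/1089)
# Example 2: three hypotheses.
print(bayes([1/3, 1/6, 1/2], [1/4, 1/8, 1], 2))   # -> 0.8275... (= 24/29)
```

The denominator is the evidence P(A) computed by the law of total probability, exactly as in the proof above.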
[learning-python3.ipynb]: https://gist.githubusercontent.com/kenjyco/69eeb503125035f21a9d/raw/learning-python3.ipynb Right-click -> "save link as" [https://gist.githubusercontent.com/kenjyco/69eeb503125035f21a9d/raw/learning-python3.ipynb][learning-python3.ipynb] to get the most up-to-date version of this notebook file. Quick note about Jupyter cellsWhen you are editing a cell in a Jupyter notebook, you need to re-run the cell by pressing **`Shift + Enter`**. This will allow changes you made to be available to other cells.Use **`Enter`** to make new lines inside a cell you are editing. Code cellsRe-running will execute any statements you have written. To edit an existing code cell, click on it. Markdown cellsRe-running will render the markdown text. To edit an existing markdown cell, double-click on it. Common Jupyter operationsNear the top of the https://try.jupyter.org page, Jupyter provides a row of menu options (`File`, `Edit`, `View`, `Insert`, ...) and a row of tool bar icons (disk, plus sign, scissors, 2 files, clipboard and file, up arrow, ...). Inserting and removing cells- Use the "plus sign" icon to insert a cell below the currently selected cell- Use "Insert" -> "Insert Cell Above" from the menu to insert above Clear the output of all cells- Use "Kernel" -> "Restart" from the menu to restart the kernel - click on "clear all outputs & restart" to have all the output cleared Save your notebook file locally- Clear the output of all cells- Use "File" -> "Download as" -> "IPython Notebook (.ipynb)" to download a notebook file representing your https://try.jupyter.org session Load your notebook file in try.jupyter.org 1. Visit https://try.jupyter.org 2. Click the "Upload" button near the upper right corner 3. Navigate your filesystem to find your `*.ipynb` file and click "open" 4. Click the new "upload" button that appears next to your file name 5.
Click on your uploaded notebook file References- https://try.jupyter.org- https://docs.python.org/3/tutorial/index.html- https://docs.python.org/3/tutorial/introduction.html- https://daringfireball.net/projects/markdown/syntax Python objects, basic types, and variablesEverything in Python is an **object** and every object in Python has a **type**. Some of the basic types include:- **`int`** (integer; a whole number with no decimal place) - `10` - `-3`- **`float`** (float; a number that has a decimal place) - `7.41` - `-0.006`- **`str`** (string; a sequence of characters enclosed in single quotes, double quotes, or triple quotes) - `'this is a string using single quotes'` - `"this is a string using double quotes"` - `'''this is a triple quoted string using single quotes'''` - `"""this is a triple quoted string using double quotes"""`- **`bool`** (boolean; a binary value that is either true or false) - `True` - `False`- **`NoneType`** (a special type representing the absence of a value) - `None`In Python, a **variable** is a name you specify in your code that maps to a particular **object**, object **instance**, or value.By defining variables, we can refer to things by names that make sense to us. Names for variables can only contain letters, underscores (`_`), or numbers (no spaces, dashes, or other characters). Variable names must start with a letter or underscore. Basic operatorsIn Python, there are different types of **operators** (special symbols) that operate on different values. 
Some of the basic operators include:- arithmetic operators - **`+`** (addition) - **`-`** (subtraction) - **`*`** (multiplication) - **`/`** (division) - __`**`__ (exponent)- assignment operators - **`=`** (assign a value) - **`+=`** (add and re-assign; increment) - **`-=`** (subtract and re-assign; decrement) - **`*=`** (multiply and re-assign)- comparison operators (return either `True` or `False`) - **`==`** (equal to) - **`!=`** (not equal to) - **`<`** (less than) - **`<=`** (less than or equal to) - **`>`** (greater than) - **`>=`** (greater than or equal to)When multiple operators are used in a single expression, **operator precedence** determines which parts of the expression are evaluated in which order. Operators with higher precedence are evaluated first (like PEMDAS in math). Operators with the same precedence are evaluated from left to right.- `()` parentheses, for grouping- `**` exponent- `*`, `/` multiplication and division- `+`, `-` addition and subtraction- `==`, `!=`, `<`, `<=`, `>`, `>=` comparisons> See https://docs.python.org/3/reference/expressions.html#operator-precedence | # Assigning some numbers to different variables
num1 = 10
num2 = -3
num3 = 7.41
num4 = -.6
num5 = 7
num6 = 3
num7 = 11.11
# Addition
num1 + num2
# Subtraction
num2 - num3
# Multiplication
num3 * num4
# Division
num4 / num5
# Exponent
num5 ** num6
# Increment existing variable
num7 += 4
num7
# Decrement existing variable
num6 -= 2
num6
# Multiply & re-assign
num3 *= 5
num3
# Assign the value of an expression to a variable
num8 = num1 + num2 * num3
num8
# Are these two expressions equal to each other?
num1 + num2 == num5
# Are these two expressions not equal to each other?
num3 != num4
# Is the first expression less than the second expression?
num5 < num6
# Is this expression True?
5 > 3 > 1
# Is this expression True?
5 > 3 < 4 == 3 + 1
# Assign some strings to different variables
simple_string1 = 'an example'
simple_string2 = "oranges "
# Addition
simple_string1 + ' of using the + operator'
# Notice that the string was not modified
simple_string1
# Multiplication
simple_string2 * 4
# This string wasn't modified either
simple_string2
# Are these two expressions equal to each other?
simple_string1 == simple_string2
# Are these two expressions equal to each other?
simple_string1 == 'an example'
# Add and re-assign
simple_string1 += ' that re-assigned the original string'
simple_string1
# Multiply and re-assign
simple_string2 *= 3
simple_string2
# Note: Subtraction, division, and decrement operators do not apply to strings. | _____no_output_____ | MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
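The chained comparisons used above (like `5 > 3 < 4 == 3 + 1`) deserve a closer look: Python evaluates each adjacent pair of operands and joins the results with an implicit `and`:

```python
# A chained comparison...
chained = 5 > 3 < 4 == 3 + 1
# ...is equivalent to comparing each adjacent pair and and-ing them:
expanded = (5 > 3) and (3 < 4) and (4 == 3 + 1)
print(chained, expanded)  # -> True True
```

This is why `5 > 3 < 4 == 3 + 1` is `True` even though it would be nonsense read as a single left-to-right computation.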
Basic containers> Note: **mutable** objects can be modified after creation and **immutable** objects cannot.Containers are objects that can be used to group other objects together. The basic container types include:- **`str`** (string: immutable; indexed by integers; items are stored in the order they were added)- **`list`** (list: mutable; indexed by integers; items are stored in the order they were added) - `[3, 5, 6, 3, 'dog', 'cat', False]`- **`tuple`** (tuple: immutable; indexed by integers; items are stored in the order they were added) - `(3, 5, 6, 3, 'dog', 'cat', False)`- **`set`** (set: mutable; not indexed at all; items are NOT stored in the order they were added; can only contain immutable objects; does NOT contain duplicate objects) - `{3, 5, 6, 3, 'dog', 'cat', False}`- **`dict`** (dictionary: mutable; key-value pairs are indexed by immutable keys; items are NOT stored in the order they were added) - `{'name': 'Jane', 'age': 23, 'fav_foods': ['pizza', 'fruit', 'fish']}`When defining lists, tuples, or sets, use commas (,) to separate the individual items. When defining dicts, use a colon (:) to separate keys from values and commas (,) to separate the key-value pairs.Strings, lists, and tuples are all **sequence types** that can use the `+`, `*`, `+=`, and `*=` operators. | # Assign some containers to different variables
list1 = [3, 5, 6, 3, 'dog', 'cat', False]
tuple1 = (3, 5, 6, 3, 'dog', 'cat', False)
set1 = {3, 5, 6, 3, 'dog', 'cat', False}
dict1 = {'name': 'Jane', 'age': 23, 'fav_foods': ['pizza', 'fruit', 'fish']}
# Items in the list object are stored in the order they were added
list1
# Items in the tuple object are stored in the order they were added
tuple1
# Items in the set object are not stored in the order they were added
# Also, notice that the value 3 only appears once in this set object
set1
# Items in the dict object are not stored in the order they were added
dict1
# Add and re-assign
list1 += [5, 'grapes']
list1
# Add and re-assign
tuple1 += (5, 'grapes')
tuple1
# Multiply
[1, 2, 3, 4] * 2
# Multiply
(1, 2, 3, 4) * 3 | _____no_output_____ | MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
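One subtlety behind `+=` on containers: for a mutable list it modifies the object in place, while for an immutable tuple it quietly builds a brand-new object. We can check this with the `is` operator:

```python
lst = [1, 2]
tup = (1, 2)
# Keep references to the original objects so we can compare identity.
lst_before = lst
tup_before = tup

lst += [3]   # extends the existing list in place
tup += (3,)  # creates a new tuple and rebinds the name

print(lst is lst_before, tup is tup_before)  # -> True False
```

The same distinction applies to `*=`, and it explains why `+=` on a list is visible through every name bound to that list, while `+=` on a tuple is not.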
Accessing data in containers

For strings, lists, tuples, and dicts, we can use **subscript notation** (square brackets) to access data at an index.

- strings, lists, and tuples are indexed by integers, **starting at 0** for the first item
  - these sequence types also support accessing a range of items, known as **slicing**
  - use **negative indexing** to start at the back of the sequence
- dicts are indexed by their keys

> Note: sets are not indexed, so we cannot use subscript notation to access data elements. | # Access the first item in a sequence
list1[0]
# Access the last item in a sequence
tuple1[-1]
# Access a range of items in a sequence
simple_string1[3:8]
# Access a range of items in a sequence
tuple1[:-3]
# Access a range of items in a sequence
list1[4:]
# Access an item in a dictionary
dict1['name']
# Access an element of a sequence in a dictionary
dict1['fav_foods'][2] | _____no_output_____ | MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
Python built-in functions and callables

A **function** is a Python object that you can "call" to **perform an action** or compute and **return another object**. You call a function by placing parentheses to the right of the function name. Some functions allow you to pass **arguments** inside the parentheses (separating multiple arguments with a comma). Internal to the function, these arguments are treated like variables.

Python has several useful built-in functions to help you work with different objects and/or your environment. Here is a small sample of them:

- **`type(obj)`** to determine the type of an object
- **`len(container)`** to determine how many items are in a container
- **`callable(obj)`** to determine if an object is callable
- **`sorted(container)`** to return a new list from a container, with the items sorted
- **`sum(container)`** to compute the sum of a container of numbers
- **`min(container)`** to determine the smallest item in a container
- **`max(container)`** to determine the largest item in a container
- **`abs(number)`** to determine the absolute value of a number
- **`repr(obj)`** to return a string representation of an object

> Complete list of built-in functions: https://docs.python.org/3/library/functions.html

There are also different ways of defining your own functions and callable objects that we will explore later. | # Use the type() function to determine the type of an object
type(simple_string1)
# Use the len() function to determine how many items are in a container
len(dict1)
# Use the len() function to determine how many items are in a container
len(simple_string2)
# Use the callable() function to determine if an object is callable
callable(len)
# Use the callable() function to determine if an object is callable
callable(dict1)
# Use the sorted() function to return a new list from a container, with the items sorted
sorted([10, 1, 3.6, 7, 5, 2, -3])
# Use the sorted() function to return a new list from a container, with the items sorted
# - notice that capitalized strings come first
sorted(['dogs', 'cats', 'zebras', 'Chicago', 'California', 'ants', 'mice'])
# Use the sum() function to compute the sum of a container of numbers
sum([10, 1, 3.6, 7, 5, 2, -3])
# Use the min() function to determine the smallest item in a container
min([10, 1, 3.6, 7, 5, 2, -3])
# Use the min() function to determine the smallest item in a container
min(['g', 'z', 'a', 'y'])
# Use the max() function to determine the largest item in a container
max([10, 1, 3.6, 7, 5, 2, -3])
# Use the max() function to determine the largest item in a container
max('gibberish')
# Use the abs() function to determine the absolute value of a number
abs(10)
# Use the abs() function to determine the absolute value of a number
abs(-12)
# Use the repr() function to return a string representation of an object
repr(set1) | _____no_output_____ | MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
Python object attributes (methods and properties)

Different types of objects in Python have different **attributes** that can be referred to by name (similar to a variable). To access an attribute of an object, use a dot (`.`) after the object, then specify the attribute (i.e. `obj.attribute`).

When an attribute of an object is a callable, that attribute is called a **method**. It is the same as a function, only this function is bound to a particular object.

When an attribute of an object is not a callable, that attribute is called a **property**. It is just a piece of data about the object, that is itself another object.

The built-in `dir()` function can be used to return a list of an object's attributes.

Some methods on string objects

- **`.capitalize()`** to return a capitalized version of the string (only first char uppercase)
- **`.upper()`** to return an uppercase version of the string (all chars uppercase)
- **`.lower()`** to return a lowercase version of the string (all chars lowercase)
- **`.count(substring)`** to return the number of occurrences of the substring in the string
- **`.startswith(substring)`** to determine if the string starts with the substring
- **`.endswith(substring)`** to determine if the string ends with the substring
- **`.replace(old, new)`** to return a copy of the string with occurrences of the "old" replaced by "new" | # Assign a string to a variable
a_string = 'tHis is a sTriNg'
# Return a capitalized version of the string
a_string.capitalize()
# Return an uppercase version of the string
a_string.upper()
# Return a lowercase version of the string
a_string.lower()
# Notice that the methods called have not actually modified the string
a_string
# Count number of occurrences of a substring in the string
a_string.count('i')
# Count number of occurrences of a substring in the string after a certain position
a_string.count('i', 7)
# Count number of occurrences of a substring in the string
a_string.count('is')
# Does the string start with 'this'?
a_string.startswith('this')
# Does the lowercase string start with 'this'?
a_string.lower().startswith('this')
# Does the string end with 'Ng'?
a_string.endswith('Ng')
# Return a version of the string with a substring replaced with something else
a_string.replace('is', 'XYZ')
# Return a version of the string with a substring replaced with something else
a_string.replace('i', '!')
# Return a version of the string with the first 2 occurrences of a substring replaced with something else
a_string.replace('i', '!', 2) | _____no_output_____ | MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
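The `dir()` function mentioned above can be used to discover these methods for yourself; a quick sketch:

```python
a_string = 'tHis is a sTriNg'

# dir() returns a list of the object's attribute names (methods and properties)
attributes = dir(a_string)

# Filter out the double-underscore ("dunder") names to see the ordinary methods
string_methods = []
for name in attributes:
    if not name.startswith('_'):
        string_methods.append(name)

print(string_methods)
```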
Some methods on list objects

- **`.append(item)`** to add a single item to the list
- **`.extend([item1, item2, ...])`** to add multiple items to the list
- **`.remove(item)`** to remove a single item from the list
- **`.pop()`** to remove and return the item at the end of the list
- **`.pop(index)`** to remove and return an item at an index

Some methods on set objects

- **`.add(item)`** to add a single item to the set
- **`.update([item1, item2, ...])`** to add multiple items to the set
- **`.update(set2, set3, ...)`** to add items from all provided sets to the set
- **`.remove(item)`** to remove a single item from the set
- **`.pop()`** to remove and return a random item from the set
- **`.difference(set2)`** to return items in the set that are not in another set
- **`.intersection(set2)`** to return items in both sets
- **`.union(set2)`** to return items that are in either set
- **`.symmetric_difference(set2)`** to return items that are only in one set (not both)
- **`.issuperset(set2)`** does the set contain everything in the other set?
- **`.issubset(set2)`** is the set contained in the other set?
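A short sketch of a few of these list and set methods in action:

```python
# List methods: append, extend, remove, and pop
pets = ['dog', 'cat']
pets.append('fish')             # add a single item
pets.extend(['bird', 'mouse'])  # add multiple items
pets.remove('cat')              # remove a single item
last = pets.pop()               # remove and return the item at the end

# Set methods: add, union, and intersection
odds = {1, 3, 5}
odds.add(7)                            # add a single item
both = odds.union({2, 4, 6})           # items in either set
common = odds.intersection({1, 2, 3})  # items in both sets

print(pets, last, both, common)
```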
Some methods on dict objects

- **`.update([(key1, val1), (key2, val2), ...])`** to add multiple key-value pairs to the dict
- **`.update(dict2)`** to add all keys and values from another dict to the dict
- **`.pop(key)`** to remove key and return its value from the dict (error if key not found)
- **`.pop(key, default_val)`** to remove key and return its value from the dict (or return default_val if key not found)
- **`.get(key)`** to return the value at a specified key in the dict (or None if key not found)
- **`.get(key, default_val)`** to return the value at a specified key in the dict (or default_val if key not found)
- **`.keys()`** to return a view of the keys in the dict
- **`.values()`** to return a view of the values in the dict
- **`.items()`** to return a view of the key-value pairs (tuples) in the dict

Positional arguments and keyword arguments to callables

You can call a function/method in a number of different ways:

- `func()`: Call `func` with no arguments
- `func(arg)`: Call `func` with one positional argument
- `func(arg1, arg2)`: Call `func` with two positional arguments
- `func(arg1, arg2, ..., argn)`: Call `func` with many positional arguments
- `func(kwarg=value)`: Call `func` with one keyword argument
- `func(kwarg1=value1, kwarg2=value2)`: Call `func` with two keyword arguments
- `func(kwarg1=value1, kwarg2=value2, ..., kwargn=valuen)`: Call `func` with many keyword arguments
- `func(arg1, arg2, kwarg1=value1, kwarg2=value2)`: Call `func` with positional arguments and keyword arguments
- `obj.method()`: Methods are called in the same ways as the `func` examples above

When using **positional arguments**, you must provide them in the order that the function defined them (the function's **signature**).

When using **keyword arguments**, you can provide the arguments you want, in any order you want, as long as you specify each argument's name.

When using positional and keyword arguments, positional arguments must come first.
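A short sketch of these dict methods and calling conventions (the `describe` function below is just an illustrative example, not something defined elsewhere in this notebook):

```python
# Dict methods: update, pop, and get
person = {'name': 'Jane'}
person.update({'age': 23, 'fav_color': 'green'})  # add multiple key-value pairs
age = person.pop('age')                   # remove 'age' and return its value
height = person.get('height', 'unknown')  # default_val returned for a missing key

# Positional and keyword arguments
def describe(name, color='blue', punctuation='.'):
    return name + ' likes ' + color + punctuation

one_positional = describe('Jane')          # one positional argument
two_positional = describe('Jane', 'green') # two positional arguments
mixed = describe('Jane', punctuation='!')  # positional + keyword argument

print(one_positional, two_positional, mixed, person, age, height)
```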
Formatting strings and using placeholders

Python "for loops"

It is easy to **iterate** over a collection of items using a **for loop**. The strings, lists, tuples, sets, and dictionaries we defined are all **iterable** containers.

The for loop will go through the specified container, one item at a time, and provide a temporary variable for the current item. You can use this temporary variable like a normal variable.

Python "if statements" and "while loops"

Conditional expressions can be used with these two **conditional statements**.

The **if statement** allows you to test a condition and perform some actions if the condition evaluates to `True`. You can also provide `elif` and/or `else` clauses to an if statement to take alternative actions if the condition evaluates to `False`.

The **while loop** will keep looping until its conditional expression evaluates to `False`.

> Note: It is possible to "loop forever" when using a while loop with a conditional expression that never evaluates to `False`.
>
> Note: Since the **for loop** will iterate over a container of items until there are no more, there is no need to specify a "stop looping" condition.

List, set, and dict comprehensions

Creating objects from arguments or other objects

The basic types and containers we have used so far all provide **type constructors**:

- `int()`
- `float()`
- `str()`
- `list()`
- `tuple()`
- `set()`
- `dict()`

Up to this point, we have been defining objects of these built-in types using some syntactic shortcuts, since they are so common.

Sometimes, you will have an object of one type that you need to convert to another type. Use the **type constructor** for the type of object you want to have, and pass in the object you currently have.

Importing modules

Exceptions

Classes: Creating your own objects | # Define a new class called `Thing` that is derived from the base Python object
class Thing(object):
my_property = 'I am a "Thing"'
# Define a new class called `DictThing` that is derived from the `dict` type
class DictThing(dict):
my_property = 'I am a "DictThing"'
print(Thing)
print(type(Thing))
print(DictThing)
print(type(DictThing))
print(issubclass(DictThing, dict))
print(issubclass(DictThing, object))
# Create "instances" of our new classes
t = Thing()
d = DictThing()
print(t)
print(type(t))
print(d)
print(type(d))
# Interact with a DictThing instance just as you would a normal dictionary
d['name'] = 'Sally'
print(d)
d.update({
'age': 13,
'fav_foods': ['pizza', 'sushi', 'pad thai', 'waffles'],
'fav_color': 'green',
})
print(d)
print(d.my_property) | I am a "DictThing"
| MIT | learning-python3.ipynb | lsst-epo/jupyter-presentation |
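The string formatting, loops, conditional statements, comprehensions, type constructors, and exception handling described above can all be sketched in one short cell:

```python
# Formatting strings with placeholders
greeting = 'Hello, {}! You are {} years old.'.format('Jane', 23)

# A for loop iterates over any container, one item at a time
total = 0
for n in (1, 2, 3, 4):
    total += n

# An if statement with elif/else clauses
if total > 100:
    size = 'big'
elif total > 5:
    size = 'medium'
else:
    size = 'small'

# A while loop runs until its condition evaluates to False
countdown = 3
while countdown > 0:
    countdown -= 1

# List, set, and dict comprehensions
squares = [n * n for n in range(5)]
evens = {n for n in range(10) if n % 2 == 0}
lengths = {word: len(word) for word in ['dog', 'cat', 'zebra']}

# Type constructors convert between types
as_list = list('abc')
as_int = int('42')

# Exceptions are handled with try/except blocks
try:
    result = 1 / 0
except ZeroDivisionError:
    result = None

print(greeting, total, size, squares, evens, lengths, as_list, as_int, result)
```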
Lambda School Data Science - Making Data-backed Assertions

This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.

Lecture - generating a confounding variable

The prewatch material told a story about a hypothetical health condition where both the drug usage and overall health outcome were related to gender - thus making gender a confounding variable, obfuscating the possible relationship between the drug and the outcome.

Let's use Python to generate data that actually behaves in this fashion! | # y = "health outcome" - predicted variable - dependent variable
# x = "drug usage" - explanatory variable - independent variable
import random
dir(random) # Reminding ourselves what we can do here
random.seed(10) # Random Seed for reproducibility
# Let's think of another scenario:
# We work for a company that sells accessories for mobile phones.
# They have an ecommerce site, and we are supposed to analyze logs
# to determine what sort of usage is related to purchases, and thus guide
# website development to encourage higher conversion.
# The hypothesis - users who spend longer on the site tend
# to spend more. Seems reasonable, no?
# But there's a confounding variable! If they're on a phone, they:
# a) Spend less time on the site, but
# b) Are more likely to be interested in the actual products!
# Let's use namedtuple to represent our data
from collections import namedtuple
# purchased and mobile are bools, time_on_site in seconds
User = namedtuple('User', ['purchased','time_on_site', 'mobile'])
example_user = User(False, 12, False)
print(example_user)
# And now let's generate 1000 example users
# 750 mobile, 250 not (i.e. desktop)
# A desktop user has a base conversion likelihood of 10%
# And it goes up by 1% for each 15 seconds they spend on the site
# And they spend anywhere from 10 seconds to 10 minutes on the site (uniform)
# Mobile users spend on average half as much time on the site as desktop
# But have three times as much base likelihood of buying something
users = []
for _ in range(250):
# Desktop users
time_on_site = random.uniform(10, 600)
purchased = random.random() < 0.1 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, False))
for _ in range(750):
# Mobile users
time_on_site = random.uniform(5, 300)
purchased = random.random() < 0.3 + (time_on_site / 1500)
users.append(User(purchased, time_on_site, True))
random.shuffle(users)
print(users[:10])
# !pip freeze
!pip install pandas==0.23.4
# Let's put this in a dataframe so we can look at it more easily
import pandas as pd
user_data = pd.DataFrame(users)
user_data.head()
# Let's use crosstabulation to try to see what's going on
pd.crosstab(user_data['purchased'], user_data['time_on_site'])
# Let's use crosstabulation to try to see what's going on
# pd.crosstab(user_data['purchased'], user_data['time_on_site'], margins=True)
# Trying to show the margins on our Crosstab. Think this might be another
# versioning issue.
# pd.crosstab(user_data['purchased'], time_bins, margins=True)
# OK, that's not quite what we want
# Time is continuous! We need to put it in discrete buckets
# Pandas calls these bins, and pandas.cut helps make them
time_bins = pd.cut(user_data['time_on_site'], 5) # 5 equal-sized bins
pd.crosstab(user_data['purchased'], time_bins)
# We can make this a bit clearer by normalizing (getting %)
pd.crosstab(user_data['purchased'], time_bins, normalize='columns')
# That seems counter to our hypothesis
# More time on the site can actually have fewer purchases
# But we know why, since we generated the data!
# Let's look at mobile and purchased
pd.crosstab(user_data['purchased'], user_data['mobile'], normalize='columns')
# Yep, mobile users are more likely to buy things
# But we're still not seeing the *whole* story until we look at all 3 at once
# Live/stretch goal - how can we do that?
ct = pd.crosstab(user_data['mobile'], [user_data['purchased'], time_bins],
rownames=['device'],
colnames=["purchased", "time on site"],
normalize='index')
ct
# help(user_data.plot)
import seaborn as sns
sns.heatmap(pd.crosstab(user_data['mobile'], [user_data['purchased'], time_bins] ),
cmap="YlGnBu", annot=True, cbar=False)
# user_data.hist()
pd.pivot_table(user_data, values='purchased',
index=time_bins).plot.bar()
pd.pivot_table(
user_data, values='mobile', index=time_bins).plot.bar();
user_data['time_on_site'].plot.density();
ct = pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']],
normalize='columns')
ct
ct.plot();
ct.plot(kind='bar')
ct.plot(kind='bar', stacked=True)
time_bins = pd.cut(user_data['time_on_site'], 6) # 6 equal-sized bins
ct = pd.crosstab(time_bins, [user_data['purchased'], user_data['mobile']],
normalize='columns')
ct
ct.plot(kind='bar', stacked=True) | _____no_output_____ | MIT | NDoshi_DS4_114_Making_Data_backed_Assertions.ipynb | ndoshi83/DS-Unit-1-Sprint-1-Dealing-With-Data |
Assignment - what's going on here?

Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.

Try to figure out which variables are possibly related to each other, and which may be confounding relationships. | # TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
# Import pandas library
import pandas as pd
# Load data into pandas dataframe
df = pd.read_csv('https://raw.githubusercontent.com/ndoshi83/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module4-databackedassertions/persons.csv')
# Show example of df
df.head(10)
df.dtypes
# Start with a pairplot to compare all the variables
import seaborn as sns
sns.pairplot(df)
# Create distplots for all three variables
sns.distplot(df['age']);
sns.distplot(df['weight']);
sns.distplot(df['exercise_time']);
sns.jointplot('exercise_time', 'weight', df, kind = 'kde') | _____no_output_____ | MIT | NDoshi_DS4_114_Making_Data_backed_Assertions.ipynb | ndoshi83/DS-Unit-1-Sprint-1-Dealing-With-Data |
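One way to continue this analysis is the binning approach from the lecture. The sketch below uses a small, made-up stand-in DataFrame (`df_demo` and its values are hypothetical) with the same column names, so it runs without `persons.csv`:

```python
import pandas as pd

# Hypothetical stand-in with the same columns as persons.csv
df_demo = pd.DataFrame({
    'age': [20, 25, 40, 55, 60, 70],
    'weight': [150, 155, 180, 200, 210, 220],
    'exercise_time': [300, 280, 150, 60, 40, 20],
})

# Bin the continuous exercise_time column, then compare mean weight per bin
time_bins = pd.cut(df_demo['exercise_time'], 3)
mean_weight_by_bin = df_demo.groupby(time_bins)['weight'].mean()
print(mean_weight_by_bin)
```

On the real `df`, comparing mean weight (and mean age) across `exercise_time` bins helps show which pairwise relationships hold, and whether age might be confounding the weight/exercise relationship.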
Visual Designer (Data Prep)

In this exercise we will be building a pipeline in Azure Machine Learning using the [Visual Designer](https://docs.microsoft.com/azure/machine-learning/concept-designer). Traditionally the Visual Designer is used for training and deploying models. Here we will build a data prep pipeline that gets a dataset ready for downstream model scoring. Below you can see a final picture of the data prep pipeline that will be built as part of this exercise.

The pipeline will join together two datasets that make up the diabetes dataset. We will perform binning on the Age column. After joining the datasets together, we will use the [SQL Transformation](https://docs.microsoft.com/azure/machine-learning/component-reference/apply-sql-transformation) component to demonstrate the flexibility of the Visual Designer by creating an aggregate dataset. The resulting datasets will be landed in the /1-bronze folder of the data lake. Later we will build a scoring pipeline that will use the resulting dataset.

Step 1: Stage data

Let's first upload our source files to the /0-raw layer of the data lake. We will use this as the source for the pipeline. | import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
#TODO: Supply userid value for naming artifacts.
userid = ''
tabular_dataset_name = 'diabetes-data-bronze-' + userid
print(
tabular_dataset_name
)
from azureml.core import Datastore, Dataset
# Set datastore name where raw diabetes data is stored.
datastore_name = ''
datastore = Datastore.get(ws, datastore_name)
print("Found Datastore with name: %s" % datastore_name)
from azureml.data.datapath import DataPath
# Upload local csv files to ADLS using AML Datastore.
ds = Dataset.File.upload_directory(src_dir='../data/stage',
target=DataPath(datastore, '0-raw/diabetes/' + userid + '/stage/'),
show_progress=True)
type(ds) | _____no_output_____ | MIT | workshops/workshop1/code/Visual Designer Data Prep Pipeline.ipynb | samleung314/mslearn-dp100 |
Step 2: Create target datasets

Register datasets to use as targets for writing data from the pipeline. | diabetes_ds = Dataset.Tabular.from_delimited_files(path=(datastore,'1-bronze/diabetes/' + userid + '/diabetes.csv'),validate=False,infer_column_types=False)
diabetes_ds.register(ws,name=tabular_dataset_name,create_new_version=True)
diabetes_ds = Dataset.Tabular.from_delimited_files(path=(datastore,'1-bronze/diabetes/' + userid + '/diabetes_sql_example.csv'),validate=False,infer_column_types=False)
diabetes_ds.register(ws,name=tabular_dataset_name + '_sql_example',create_new_version=True) | _____no_output_____ | MIT | workshops/workshop1/code/Visual Designer Data Prep Pipeline.ipynb | samleung314/mslearn-dp100 |
Steps for Handling Missing Values

1. Import Libraries
2. Load data
3. Separate Input and Output attributes
4. Find the missing values and handle them in either of two ways:
   a. Removing data
   b. Imputation | # Step 1: Import Libraries
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
# Step 2: Load Data
datasets = pd.read_csv('./Datasets/Data_for_Missing_Values.csv')
print("\nData :\n",datasets)
print("\nData statistics\n",datasets.describe())
# Step 3: Separate Input and Output attributes
# All rows, all columns except last
X = datasets.iloc[:, :-1].values
# Only last column
Y = datasets.iloc[:, -1].values
print("\n\nInput : \n", X)
print("\n\nOutput: \n", Y)
# Step 4: Find the missing values and handle them in either way
# 4a. Removing rows where all values are null
datasets.dropna(how='all',inplace=True)
print("\nNew Data :",datasets)
# 4b. Imputation (Replacing null values with mean value of that attribute)
# All rows, all columns except last
new_X = datasets.iloc[:, :-1].values
# Only last column
new_Y = datasets.iloc[:, -1].values
# Using SimpleImputer to replace NaN values with the mean of that attribute
imputer = SimpleImputer(missing_values = np.nan,strategy = "mean")
# Fitting the data, function learns the stats
imputer = imputer.fit(new_X[:, 1:3])
# transform() applies those learned stats to the input ie. new_X[:, 1:3]
new_X[:, 1:3] = imputer.transform(new_X[:, 1:3])
# filling the missing value with mean
print("\n\nNew Input with Mean Value for NaN : \n\n", new_X)
|
New Input with Mean Value for NaN :
[['France' 44.0 72000.0]
['Spain' 27.0 48000.0]
['Germany' 30.0 54000.0]
['Spain' 38.0 61000.0]
['Germany' 40.0 62900.0]
['France' 35.0 58000.0]
['Spain' 39.4 52000.0]
['France' 48.0 79000.0]
['Germany' 50.0 83000.0]
['France' 37.0 67000.0]
['Spain' 45.0 55000.0]]
| MIT | Lab-2/3HandlingMissingValues.ipynb | yash-a-18/002_YashAmethiya |
Overview

`clean_us_data.ipynb`: Fix data inconsistencies in the raw time series data from [`etl_us_data.ipynb`](./etl_us_data.ipynb).

Inputs:
* `outputs/us_counties.csv`: Raw county-level time series data for the United States, produced by running [etl_us_data.ipynb](./etl_us_data.ipynb)
* `outputs/us_counties_meta.json`: Column type metadata for reading `data/us_counties.csv` with `pd.read_csv()`

Outputs:
* `outputs/us_counties_clean.csv`: The contents of `outputs/us_counties.csv` after data cleaning
* `outputs/us_counties_clean_meta.json`: Column type metadata for reading `data/us_counties_clean.csv` with `pd.read_csv()`
* `outputs/us_counties_clean.feather`: Binary version of `us_counties_clean.csv`, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.
* `outputs/dates.feather`: Dates associated with points in time series, in [Feather](https://arrow.apache.org/docs/python/feather.html) format.

**Note:** You can redirect these input and output files by setting the environment variables `COVID_INPUTS_DIR` and `COVID_OUTPUTS_DIR` to replacement values for the prefixes `inputs` and `outputs`, respectively, in the above paths.

Read and reformat the raw data | # Initialization boilerplate
import os
import json
import pandas as pd
import numpy as np
import scipy.optimize
import sklearn.metrics
import matplotlib.pyplot as plt
from typing import *
import text_extensions_for_pandas as tp
# Local file of utility functions
import util
# Allow environment variables to override data file locations.
_INPUTS_DIR = os.getenv("COVID_INPUTS_DIR", "inputs")
_OUTPUTS_DIR = os.getenv("COVID_OUTPUTS_DIR", "outputs")
util.ensure_dir_exists(_OUTPUTS_DIR) # create if necessary | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Read the CSV file from `etl_us_data.ipynb` and apply the saved type information | csv_file = os.path.join(_OUTPUTS_DIR, "us_counties.csv")
meta_file = os.path.join(_OUTPUTS_DIR, "us_counties_meta.json")
# Read column type metadata
with open(meta_file) as f:
cases_meta = json.load(f)
# Pandas does not currently support parsing datetime64 from CSV files.
# As a workaround, read the "Date" column as objects and manually
# convert after.
cases_meta["Date"] = "object"
cases_raw = pd.read_csv(csv_file, dtype=cases_meta, parse_dates=["Date"])
# Restore the Pandas index
cases_vertical = cases_raw.set_index(["FIPS", "Date"], verify_integrity=True)
cases_vertical | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Replace missing values in the secondary datasets with zeros | for colname in ("Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"):
cases_vertical[colname].fillna(0, inplace=True)
cases_vertical[colname] = cases_vertical[colname].astype("int64")
cases_vertical | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Collapse each time series down to a single cell

This kind of time series data is easier to manipulate at the macroscopic level if each time series occupies a single cell of the DataFrame. We use the [TensorArray](https://text-extensions-for-pandas.readthedocs.io/en/latest/text_extensions_for_pandas.TensorArray) Pandas extension type from [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas). | cases, dates = util.collapse_time_series(cases_vertical, ["Confirmed", "Deaths", "Recovered",
"Confirmed_NYT", "Deaths_NYT",
"Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Note that the previous cell also saved the values from the "Date"
# column of `cases_vertical` into the Python variable `dates`:
dates[:10], dates.shape
# Print out the time series for the Bronx as a sanity check
bronx_fips = 36005
cases.loc[bronx_fips]["Confirmed"] | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Correct for missing data for today in USAFacts data

The USAFacts database only receives the previous day's updates late in the day, so it's often missing the last value. Substitute the previous day's value if that is the case. | # Last 10 days of the time series for the Bronx before this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:]
# last element <-- max(last element, second to last)
new_confirmed = cases["Confirmed_USAFacts"].to_numpy().copy()
new_confirmed[:, -1] = np.maximum(new_confirmed[:, -1], new_confirmed[:, -2])
cases["Confirmed_USAFacts"] = tp.TensorArray(new_confirmed)
new_deaths = cases["Deaths_USAFacts"].to_numpy().copy()
new_deaths[:, -1] = np.maximum(new_deaths[:, -1], new_deaths[:, -2])
cases["Deaths_USAFacts"] = tp.TensorArray(new_deaths)
# Last 10 days of the time series for the Bronx after this change
cases.loc[bronx_fips]["Deaths_USAFacts"].to_numpy()[-10:] | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Validate the New York City confirmed cases data

Older versions of the Johns Hopkins data coded all of New York City as being in New York County. Each borough is actually in a different county with a different FIPS code.

Verify that this problem hasn't recurred. | max_bronx_confirmed = np.max(cases.loc[36005]["Confirmed"])
if max_bronx_confirmed == 0:
raise ValueError(f"Time series for the Bronx is all zeros again:\n{cases.loc[36005]['Confirmed']}")
max_bronx_confirmed | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Also plot the New York City confirmed cases time series to allow for manual validation. | new_york_county_fips = 36061
nyc_fips = [
36005, # Bronx County
36047, # Kings County
new_york_county_fips, # New York County
36081, # Queens County
36085, # Richmond County
]
util.graph_examples(cases.loc[nyc_fips], "Confirmed", {}, num_to_pick=5) | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Adjust New York City deaths dataPlot deaths for New York City in the Johns Hopkins data set. The jump in June is due to a change in reporting. | util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5) | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
New York Times version of the time series for deaths in New York city: | util.graph_examples(cases.loc[nyc_fips], "Deaths_NYT", {}, num_to_pick=5) | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
USAFacts version of the time series for deaths in New York city: | util.graph_examples(cases.loc[nyc_fips], "Deaths_USAFacts", {}, num_to_pick=5) | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Currently the USAFacts version is cleanest, so we use that one. | new_deaths = cases["Deaths"].copy(deep=True)
for fips in nyc_fips:
new_deaths.loc[fips] = cases["Deaths_USAFacts"].loc[fips]
cases["Deaths"] = new_deaths
print("After:")
util.graph_examples(cases.loc[nyc_fips], "Deaths", {}, num_to_pick=5) | After:
| Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Clean up the Rhode Island data

The Johns Hopkins data reports zero deaths in most of Rhode Island. Use the secondary data set from the New York Times for Rhode Island. | print("Before:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island"))
# Use our secondary data set for all Rhode Island data.
ri_fips = cases[cases["State"] == "Rhode Island"].index.values.tolist()
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ri_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Deaths", {}, num_to_pick=8,
mask=(cases["State"] == "Rhode Island")) | After:
| Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Clean up the Utah data

The Johns Hopkins data for Utah is missing quite a few data points. Use the New York Times data for Utah. | print("Before:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah"))
# The Utah time series from the New York Times' data set are more
# complete, so we use those numbers.
ut_fips = cases[cases["State"] == "Utah"].index.values
for colname in ["Confirmed", "Deaths"]:
new_series = cases[colname].copy(deep=True)
for fips in ut_fips:
new_series.loc[fips] = cases[colname + "_NYT"].loc[fips]
cases[colname] = new_series
# Note that the secondary data set has no "Recovered" time series, so
# we leave those numbers alone for now.
print("After:")
util.graph_examples(cases, "Confirmed", {}, num_to_pick=8,
mask=(cases["State"] == "Utah")) | After:
| Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Flag additional problematic and missing data points

Use heuristics to identify and flag problematic data points across all the time series. Generate Boolean masks that show the locations of these outliers. | # Now we're done with the secondary data set, so drop its columns.
cases = cases.drop(columns=["Confirmed_NYT", "Deaths_NYT", "Confirmed_USAFacts", "Deaths_USAFacts"])
cases
# Now we need to find and flag obvious data-entry errors.
# We'll start by creating columns of "is outlier" masks.
# We use integers instead of Boolean values as a workaround for
# https://github.com/pandas-dev/pandas/issues/33770
# Start out with everything initialized to "not an outlier"
cases["Confirmed_Outlier"] = tp.TensorArray(np.zeros_like(cases["Confirmed"].values))
cases["Deaths_Outlier"] = tp.TensorArray(np.zeros_like(cases["Deaths"].values))
cases["Recovered_Outlier"] = tp.TensorArray(np.zeros_like(cases["Recovered"].values))
cases | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
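The workaround boils down to storing flags as `int8` and round-tripping through `bool` only when masks need to be combined. A small sketch of the pattern with plain NumPy arrays:

```python
import numpy as np

series = np.array([0, 5, 0, 0, 7, 8])

# All-clear mask stored as small integers rather than Booleans.
outliers = np.zeros_like(series, dtype=np.int8)

# OR in an additional mask via a Boolean view, then store as int8 again.
addl = np.array([0, 1, 0, 0, 0, 0], dtype=np.int8)
outliers = (outliers.astype(bool) | addl.astype(bool)).astype(np.int8)
print(outliers.tolist())  # -> [0, 1, 0, 0, 0, 0]
```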
Flag time series that go from zero to nonzero and back again

One type of anomaly that occurs fairly often involves a time series jumping from zero to a nonzero value, then back to zero again. Locate all instances of that pattern and mark the nonzero values as outliers. | def nonzero_then_zero(series: np.array):
empty_mask = np.zeros_like(series, dtype=np.int8)
if series[0] > 0:
# Special case: first value is nonzero
return empty_mask
first_nonzero_offset = 0
while first_nonzero_offset < len(series):
if series[first_nonzero_offset] > 0:
# Found the first nonzero.
# Find the distance to the next zero value.
next_zero_offset = first_nonzero_offset + 1
while (next_zero_offset < len(series)
and series[next_zero_offset] > 0):
next_zero_offset += 1
# Check the length of the run of zeros after
# dropping back to zero.
second_nonzero_offset = next_zero_offset + 1
while (second_nonzero_offset < len(series)
and series[second_nonzero_offset] == 0):
second_nonzero_offset += 1
nonzero_run_len = next_zero_offset - first_nonzero_offset
second_zero_run_len = second_nonzero_offset - next_zero_offset
# print(f"{first_nonzero_offset} -> {next_zero_offset} -> {second_nonzero_offset}; series len {len(series)}")
if next_zero_offset >= len(series):
                # Every value from the first nonzero to the end was nonzero
return empty_mask
elif second_zero_run_len <= nonzero_run_len:
# Series dropped back to zero, but the second zero
# part was shorter than the nonzero section.
# In this case, it's more likely that the second run
# of zero values are actually missing values.
return empty_mask
else:
# Series went zero -> nonzero -> zero -> nonzero
# or zero -> nonzero -> zero -> [end]
nonzero_run_mask = empty_mask.copy()
nonzero_run_mask[first_nonzero_offset:next_zero_offset] = 1
return nonzero_run_mask
first_nonzero_offset += 1
# If we get here, the series was all zeros
return empty_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
addl_outliers = np.stack([nonzero_then_zero(s.to_numpy()) for s in cases[colname]])
outliers_colname = colname + "_Outlier"
    new_outliers = cases[outliers_colname].values.astype(bool) | addl_outliers  # np.bool alias removed in NumPy >= 1.24
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 13297
# print(cases.loc[fips]["Confirmed"])
# print(nonzero_then_zero(cases.loc[fips]["Confirmed"]))
# Let's have a look at which time series acquired the most outliers as
# a result of the code in the previous cell.
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
# Plot the counties in the table above, with outliers highlighted.
# The graph_examples() function is defined in util.py.
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index))) | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
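On a toy series the heuristic behaves as follows. This is a simplified re-statement of the zero -> nonzero -> zero rule, not the notebook's exact `nonzero_then_zero`: an early nonzero run is flagged only when the zero run that follows it is longer than the nonzero run itself.

```python
import numpy as np

def flag_bounce(series):
    """Flag a leading zero -> nonzero -> zero bounce as outliers (simplified)."""
    mask = np.zeros(len(series), dtype=np.int8)
    nz = np.flatnonzero(series)
    if len(nz) == 0 or series[0] != 0:
        return mask                    # all zeros, or starts nonzero
    start = nz[0]                      # first nonzero value
    end = start
    while end < len(series) and series[end] != 0:
        end += 1                       # end of the nonzero run
    if end == len(series):
        return mask                    # never dropped back to zero
    zero_end = end
    while zero_end < len(series) and series[zero_end] == 0:
        zero_end += 1                  # end of the following zero run
    if zero_end - end > end - start:   # zero run longer than nonzero run
        mask[start:end] = 1
    return mask

print(flag_bounce(np.array([0, 3, 4, 0, 0, 0, 9])).tolist())  # -> [0, 1, 1, 0, 0, 0, 0]
```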
Flag time series that drop to zero, then go back up

Another type of anomaly involves the time series dropping down to zero, then going up again. Since all three time series are supposed to be cumulative counts, this pattern most likely indicates missing data. To correct for this problem, we mark any zero values after the first nonzero, non-outlier values as outliers, across all time series. | def zeros_after_first_nonzero(series: np.array, outliers: np.array):
nonzero_mask = (series != 0)
nonzero_and_not_outlier = nonzero_mask & (~outliers)
first_nonzero = np.argmax(nonzero_and_not_outlier)
if 0 == first_nonzero and series[0] == 0:
        # np.argmax() returns 0 when the mask has no True values, so check series[0] to tell the two cases apart
return np.zeros_like(series)
after_nonzero_mask = np.zeros_like(series)
after_nonzero_mask[first_nonzero:] = True
return (~nonzero_mask) & after_nonzero_mask
for colname in ["Confirmed", "Deaths", "Recovered"]:
outliers_colname = colname + "_Outlier"
addl_outliers = np.stack([zeros_after_first_nonzero(s.to_numpy(), o.to_numpy())
for s, o in zip(cases[colname], cases[outliers_colname])])
    new_outliers = cases[outliers_colname].values.astype(bool) | addl_outliers
cases[outliers_colname] = tp.TensorArray(new_outliers.astype(np.int8))
# fips = 47039
# print(cases.loc[fips]["Confirmed"])
# print(cases.loc[fips]["Confirmed_Outlier"])
# print(zeros_after_first_nonzero(cases.loc[fips]["Confirmed"], cases.loc[fips]["Confirmed_Outlier"]))
# Redo our "top 10 by number of outliers" analysis with the additional outliers
df = cases[["State", "County"]].copy()
df["Confirmed_Num_Outliers"] = np.count_nonzero(cases["Confirmed_Outlier"], axis=1)
counties_with_outliers = df.sort_values("Confirmed_Num_Outliers", ascending=False).head(10)
counties_with_outliers
util.graph_examples(cases, "Confirmed", {}, num_to_pick=10, mask=(cases.index.isin(counties_with_outliers.index)))
# The steps we've just done have removed quite a few questionable
# data points, but you will definitely want to flag additional
# outliers by hand before trusting descriptive statistics about
# any county.
# TODO: Incorporate manual whitelists and blacklists of outliers
# into this notebook. | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
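A toy run of the second heuristic, using a simplified equivalent of `zeros_after_first_nonzero` (assumed behavior, not the notebook's exact code): in a cumulative series, any zero after the first genuine (non-outlier) nonzero value is treated as missing data.

```python
import numpy as np

def flag_dropouts(series, outliers):
    """Mark zeros occurring after the first non-outlier nonzero value (simplified)."""
    candidates = np.flatnonzero((series != 0) & (outliers == 0))
    mask = np.zeros(len(series), dtype=np.int8)
    if len(candidates) == 0:
        return mask                    # no trusted nonzero values at all
    first = candidates[0]
    mask[first:] = (series[first:] == 0)  # zeros from that point on are suspect
    return mask

series = np.array([0, 0, 4, 5, 0, 0, 9])
outliers = np.zeros_like(series)
print(flag_dropouts(series, outliers).tolist())  # -> [0, 0, 0, 0, 1, 1, 0]
```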
Precompute totals for the last 7 days

Several of the notebooks downstream of this one need the number of cases and deaths for the last 7 days, so we compute those values here for convenience. | def last_week_results(s: pd.Series):
arr = s.to_numpy()
today = arr[:,-1]
week_ago = arr[:,-8]
return today - week_ago
cases["Confirmed_7_Days"] = last_week_results(cases["Confirmed"])
cases["Deaths_7_Days"] = last_week_results(cases["Deaths"])
cases.head() | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
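Because the series are cumulative, the last-7-day total is simply today's value minus the value eight positions from the end. A quick check on a toy 2-D array (rows are hypothetical counties, columns are days):

```python
import numpy as np

# Two hypothetical counties, 10 days of cumulative counts.
arr = np.array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
                [0, 0, 0, 5, 5, 5, 5, 5, 6, 8]])

# Today minus one week ago gives the count of new events in the last 7 days.
last_week = arr[:, -1] - arr[:, -8]
print(last_week.tolist())  # -> [7, 8]
```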
Write out cleaned time series data

By default, output files go to the `outputs` directory. You can use the `COVID_OUTPUTS_DIR` environment variable to override that location.

CSV output

Comma-separated value (CSV) files are a portable text-based format supported by a wide variety of different tools. The CSV format does not include type information, so we write a second file of schema data in JSON format. | # Break out our time series into multiple rows again for writing to disk.
cleaned_cases_vertical = util.explode_time_series(cases, dates)
cleaned_cases_vertical
# The outlier masks are stored as integers as a workaround for a Pandas
# bug. Convert them to Boolean values for writing to disk.
cleaned_cases_vertical["Confirmed_Outlier"] = cleaned_cases_vertical["Confirmed_Outlier"].astype(bool)
cleaned_cases_vertical["Deaths_Outlier"] = cleaned_cases_vertical["Deaths_Outlier"].astype(bool)
cleaned_cases_vertical["Recovered_Outlier"] = cleaned_cases_vertical["Recovered_Outlier"].astype(bool)
cleaned_cases_vertical
# Write out the results to a CSV file plus a JSON file of type metadata.
cleaned_cases_vertical_csv_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean.csv")
print(f"Writing cleaned data to {cleaned_cases_vertical_csv_data_file}")
cleaned_cases_vertical.to_csv(cleaned_cases_vertical_csv_data_file, index=True)
col_type_mapping = {
    key: str(value) for key, value in cleaned_cases_vertical.dtypes.items()  # iteritems() was removed in pandas 2.0
}
cleaned_cases_vertical_json_data_file = os.path.join(_OUTPUTS_DIR,"us_counties_clean_meta.json")
print(f"Writing metadata to {cleaned_cases_vertical_json_data_file}")
with open(cleaned_cases_vertical_json_data_file, "w") as f:
json.dump(col_type_mapping, f) | Writing cleaned data to outputs/us_counties_clean.csv
| Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
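The JSON sidecar lets a reader restore column types when loading the CSV back in. A hedged sketch of the round trip (the column names here are illustrative; the real mapping comes from the notebook's `dtypes`, and the restored dict could be passed to `pandas.read_csv(dtype=...)`):

```python
import json

# Suppose the metadata file mapped column names to dtype strings like this.
col_type_mapping = {"FIPS": "int64", "Confirmed": "int64", "State": "object"}
blob = json.dumps(col_type_mapping)

# A downstream consumer reloads the mapping from the JSON text.
restored = json.loads(blob)
print(restored == col_type_mapping)  # -> True
```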
Feather output

The [Feather](https://arrow.apache.org/docs/python/feather.html) file format supports fast binary I/O over any data that can be represented using [Apache Arrow](https://arrow.apache.org/). Feather files also include schema and type information. | # Also write out the nested data in Feather format so that downstream
# notebooks don't have to re-nest it.
# No Feather serialization support for Pandas indices currently, so convert
# the index on FIPS code to a normal column
cases_for_feather = cases.reset_index()
cases_for_feather.head()
# Write to Feather and make sure that reading back works too.
# Also write dates that go with the time series
dates_file = os.path.join(_OUTPUTS_DIR, "dates.feather")
cases_file = os.path.join(_OUTPUTS_DIR, "us_counties_clean.feather")
pd.DataFrame({"date": dates}).to_feather(dates_file)
cases_for_feather.to_feather(cases_file)
pd.read_feather(cases_file).head()
# Also make sure the dates can be read back in from a binary file
pd.read_feather(dates_file).head() | _____no_output_____ | Apache-2.0 | notebooks/clean_us_data.ipynb | itsrawlinz-jeff/COVID19_VISUALIZATION-R_JEFF |
Comparing Gravitational waveforms to each other | # Install the software we need
import sys
!{sys.executable} -m pip install pycbc lalsuite ligo-common --no-cache-dir
%matplotlib inline
# We learn about the potential parameters of a source by comparing it to many different waveforms
# each of which represents a possible source with different properties.
import pylab
from pycbc.waveform import get_td_waveform
# We can directly compare how similar waveforms are to each other using an inner product between them, called
# a 'match'. This maximizes over the possible time of arrival and phase. We'll generate a reference waveform
# which we'll compare to.
m1 = m2 = 20
f_lower = 20
approximant = "SEOBNRv4"
delta_t = 1.0 / 2048
hp, _ = get_td_waveform(approximant=approximant,
mass1=m1, mass2=m2,
delta_t=delta_t, f_lower=f_lower)
pylab.plot(hp.sample_times, hp)
pylab.xlabel('Time (s)')
pylab.ylabel('Strain')
# How similar waveforms are to each other depends on how important we consider different frequencies, we
# can account for this by weighting with an estimated power spectral density. We'll use here
# the predicted final Advanced LIGO final design sensitivity
from pycbc.psd import aLIGOZeroDetHighPower
psd = aLIGOZeroDetHighPower(len(hp) // 2 + 1, 1.0 / hp.duration, f_lower)
pylab.loglog(psd.sample_frequencies, psd)
pylab.xlabel('Frequency (Hz)')
pylab.ylabel('Strain**2 / Hz')
pylab.xlim(20, 1000)
# We can now compare how similar our waveform is to others with different masses
from pycbc.filter import match
import numpy
masses = numpy.arange(19, 21, .2)
matches = []
for m2 in masses:
hp2, _ = get_td_waveform(approximant=approximant,
mass1=m1, mass2=m2,
delta_t=delta_t, f_lower=f_lower)
hp2 = hp2[:len(hp)] if len(hp) < len(hp2) else hp2
hp2.resize(len(hp))
m, idx = match(hp, hp2, psd=psd, low_frequency_cutoff=f_lower)
matches.append(m)
pylab.plot(hp2.sample_times, hp2)
pylab.xlim(-.05, .02)
pylab.plot(masses, matches)
pylab.ylabel('Match')
pylab.xlabel('Mass of second object (Solar Masses)')
# You can think of the match also as the fraction of signal-to-noise that you could recover with a template that
# doesn't *exactly* look like your source | _____no_output_____ | MIT | PyCBC-Tutorials-master/examples/waveform_similarity.ipynb | basuparth/ICERM_Workshop |
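Stripped of the PSD weighting and the time/phase maximization that PyCBC's `match` performs, the core quantity is a normalized inner product between two signals. A rough NumPy illustration (not the PyCBC implementation): identical signals overlap perfectly, while dissimilar ones score near zero.

```python
import numpy as np

t = np.linspace(0, 1, 2048, endpoint=False)
a = np.sin(2 * np.pi * 50 * t)
b = np.sin(2 * np.pi * 50 * t)   # identical signal
c = np.sin(2 * np.pi * 80 * t)   # different frequency

def overlap(x, y):
    """Normalized inner product: 1.0 for identical signals, ~0 for orthogonal ones."""
    return abs(np.dot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))

print(round(overlap(a, b), 3))   # -> 1.0
```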
___ ___ Scikit-learn Primer

**Scikit-learn** (http://scikit-learn.org/) is an open-source machine learning library for Python that offers a variety of regression, classification and clustering algorithms. In this section we'll perform a fairly simple classification exercise with scikit-learn. In the next section we'll leverage the machine learning strength of scikit-learn to perform natural language classifications.

Installation and Setup

From the command line or terminal:

> `conda install scikit-learn`
> *or*
> `pip install -U scikit-learn`

Scikit-learn additionally requires that NumPy and SciPy be installed. For more info visit http://scikit-learn.org/stable/install.html

Perform Imports and Load Data

For this exercise we'll be using the **SMSSpamCollection** dataset from [UCI datasets](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) that contains more than 5 thousand SMS phone messages. You can check out the [**sms_readme**](../TextFiles/sms_readme.txt) file for more info. The file is a [tab-separated-values](https://en.wikipedia.org/wiki/Tab-separated_values) (tsv) file with four columns:

> **label** - every message is labeled as either ***ham*** or ***spam***
> **message** - the message itself
> **length** - the number of characters in each message
> **punct** - the number of punctuation characters in each message | import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()
len(df) | _____no_output_____ | Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
Check for missing values:

Machine learning models usually require complete data. | df.isnull().sum()
Take a quick look at the *ham* and *spam* `label` column: | df['label'].unique()
df['label'].value_counts() | _____no_output_____ | Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
We see that 4825 out of 5572 messages, or 86.6%, are ham. This means that any machine learning model we create has to perform **better than 86.6%** to beat random chance.

Visualize the data:

Since we're not ready to do anything with the message text, let's see if we can predict ham/spam labels based on message length and punctuation counts. We'll look at message `length` first: | df['length'].describe()
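The 86.6% figure is just the majority-class baseline, i.e. the accuracy of always predicting "ham". A quick check of the arithmetic:

```python
ham, spam = 4825, 747          # counts from value_counts() above (5572 total)
baseline = ham / (ham + spam)  # accuracy of always predicting "ham"
print(round(baseline * 100, 1))  # -> 86.6
```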
This dataset is extremely skewed. The mean value is 80.5 and yet the max length is 910. Let's plot this on a logarithmic x-axis. | import matplotlib.pyplot as plt
%matplotlib inline
plt.xscale('log')
bins = 1.15**(np.arange(0,50))
plt.hist(df[df['label']=='ham']['length'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['length'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show() | _____no_output_____ | Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
It looks like there's a small range of values where a message is more likely to be spam than ham.

Now let's look at the `punct` column: | df['punct'].describe()
plt.xscale('log')
bins = 1.5**(np.arange(0,15))
plt.hist(df[df['label']=='ham']['punct'],bins=bins,alpha=0.8)
plt.hist(df[df['label']=='spam']['punct'],bins=bins,alpha=0.8)
plt.legend(('ham','spam'))
plt.show() | _____no_output_____ | Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
This looks even worse - there seem to be no values where one would pick spam over ham. We'll still try to build a machine learning classification model, but we should expect poor results.

___ Split the data into train & test sets:

If we wanted to divide the DataFrame into two smaller sets, we could use

> `train, test = train_test_split(df)`

For our purposes let's also set up our Features (X) and Labels (y). The Label is simple - we're trying to predict the `label` column in our data. For Features we'll use the `length` and `punct` columns. *By convention, **X** is capitalized and **y** is lowercase.*

Selecting features

There are two ways to build a feature set from the columns we want. If the number of features is small, then we can pass those in directly:

> `X = df[['length','punct']]`

If the number of features is large, then it may be easier to drop the Label and any other unwanted columns:

> `X = df.drop(['label','message'], axis=1)`

These operations make copies of **df**, but do not change the original DataFrame in place. All the original data is preserved. | # Create Feature and Label sets
X = df[['length','punct']] # note the double set of brackets
y = df['label'] | _____no_output_____ | Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
Additional train/test/split arguments:

The default test size for `train_test_split` is 25%. Here we'll assign 33% of the data for testing. Also, we can set a `random_state` seed value to ensure that everyone uses the same "random" training & testing sets. | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
print('Training Data Shape:', X_train.shape)
print('Testing Data Shape: ', X_test.shape) | Training Data Shape: (3733, 2)
Testing Data Shape: (1839, 2)
| Apache-2.0 | nlp/UPDATED_NLP_COURSE/03-Text-Classification/00-SciKit-Learn-Primer.ipynb | rishuatgithub/MLPy |
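The printed shapes can be verified by hand: scikit-learn rounds the test fraction up to a whole number of rows (assumed ceil behavior), and the training set gets the remainder.

```python
import math

n = 5572                       # total number of messages
n_test = math.ceil(n * 0.33)   # -> 1839, matching X_test above
n_train = n - n_test           # -> 3733, matching X_train above
print(n_test, n_train)         # -> 1839 3733
```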