Use a classification metric: accuracy

[Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html)

- Don't use _regression_ metrics to evaluate _classification_ tasks.
- Don't use _classification_ metrics to evaluate _regression_ tasks.

[Accuracy](https://s...
from sklearn.metrics import accuracy_score

accuracy_score(y_train, y_pred)
train[target].value_counts(normalize=True)

y_val = val[target]
y_val
y_pred = [0] * len(y_val)
accuracy_score(y_val, y_pred)
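The majority-class baseline can also be sketched end to end in plain Python, with hypothetical labels standing in for the Titanic data: predicting the most frequent class for every row yields an accuracy equal to that class's share of the labels.

```python
# Hypothetical validation labels (0 = the majority class here)
y_val = [0, 0, 0, 1, 1, 0, 0, 1, 0, 0]

# Predict the most frequent class for every observation
majority_class = max(set(y_val), key=y_val.count)
y_pred = [majority_class] * len(y_val)

# Baseline accuracy equals the majority-class share (7 of 10 here)
accuracy = sum(p == t for p, t in zip(y_pred, y_val)) / len(y_val)
```

Any model worth keeping should beat this number on the validation set.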
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Challenge

In your assignment, your Sprint Challenge, and your upcoming Kaggle challenge, you'll begin with the majority class baseline. How quickly can you beat this baseline?

Express and explain the intuition and interpretation of Logistic Regression

Overview

To help us get an intuition for *Logistic* Regression, le...
train.describe()

# 1. Import estimator class
from sklearn.linear_model import LinearRegression

# 2. Instantiate this class
linear_reg = LinearRegression()

# 3. Arrange X feature matrices (already did y target vectors)
features = ['Pclass', 'Age', 'Fare']
X_train = train[features]
X_val = val[features]

# Impute missi...
Logistic Regression!
from sklearn.linear_model import LogisticRegression

log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))

# The predictions look like this
log_reg.predict(X_val_imputed)
log_reg.predict(test_case)
log_reg.predict_proba(test_...
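The intuition behind `predict_proba` versus `predict` can be sketched with the logistic (sigmoid) function; the coefficient and intercept below are made up for illustration, not taken from the fitted model above.

```python
import math

def sigmoid(z):
    # Squashes any real number into (0, 1), read as P(y = 1)
    return 1 / (1 + math.exp(-z))

# Hypothetical fitted coefficient and intercept
coef, intercept = 0.8, -0.5
x = 1.0  # a single feature value

proba = sigmoid(coef * x + intercept)   # roughly what predict_proba gives for class 1
prediction = int(proba >= 0.5)           # roughly what predict gives (0.5 threshold)
```

So logistic regression is linear regression pushed through a squashing function, then thresholded into a class label.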
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study).

Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models

Overview

Now tha...
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
target = 'Survived'
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_train.shape, y_train.shape, X_val.shape, y_val.shape

import category_encoders as ce
from sklearn.impute import SimpleImputer
from ...
Validation Accuracy: 0.7802690582959642
Plot coefficients:
%matplotlib inline
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
coefficients.sort_values().plot.barh();
!pip install -qqq torchtext -qqq pytorch-transformers dgl
!pip install -qqqU git+https://github.com/harvardnlp/pytorch-struct

import torchtext
import torch
from torch_struct import SentCFG
from torch_struct.networks import NeuralCFG
import torch_struct.data

# Download and load the default data.
WORD = torchtext.data.Fi...
BSD-3-Clause
torchbenchmark/models/pytorch_struct/notebooks/Unsupervised_CFG.ipynb
ramiro050/benchmark
Sklearn

sklearn.linear_model
from matplotlib.colors import ListedColormap
from sklearn import cross_validation, datasets, linear_model, metrics
import numpy as np

%pylab inline
Populating the interactive namespace from numpy and matplotlib
MIT
LearningOnMarkedData/week2/sklearn.linear_model_part2.ipynb
ishatserka/MachineLearningAndDataAnalysisCoursera
Linear Regression

Data generation
data, target, coef = datasets.make_regression(n_features=2, n_informative=1, n_targets=1,
                                              noise=5., coef=True, random_state=2)
pylab.scatter(list(map(lambda x: x[0], data)), target, color='r')
pylab.scatter(list(map(lambda x: x[1], data)), target, color='b')
trai...
LinearRegression
linear_regressor = linear_model.LinearRegression()
linear_regressor.fit(train_data, train_labels)
predictions = linear_regressor.predict(test_data)
print(test_labels)
print(predictions)
metrics.mean_absolute_error(test_labels, predictions)
linear_scoring = cross_validation.cross_val_score(linear_regressor, data, target...
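Note that the `sklearn.cross_validation` module used here was removed in scikit-learn 0.20. A sketch of the same scoring with the modern `sklearn.model_selection` API, on a small synthetic problem (the coefficients below just echo the fitted values printed later and are otherwise arbitrary):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Small synthetic regression problem standing in for make_regression output
rng = np.random.RandomState(2)
X = rng.randn(50, 2)
y = 38.15 * X[:, 0] - 0.16 * X[:, 1] + rng.normal(scale=5.0, size=50)

# 'neg_mean_absolute_error' replaces a custom MAE scorer; higher (closer to 0) is better
scores = cross_val_score(LinearRegression(), X, y,
                         scoring='neg_mean_absolute_error', cv=10)
print('mean: {}, std: {}'.format(scores.mean(), scores.std()))
```

The only behavioral difference is the import path and the sign convention of the built-in scorer.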
y = 38.15*x1 + -0.16*x2 + -0.88
Lasso
lasso_regressor = linear_model.Lasso(random_state=3)
lasso_regressor.fit(train_data, train_labels)
lasso_predictions = lasso_regressor.predict(test_data)
lasso_scoring = cross_validation.cross_val_score(lasso_regressor, data, target, scoring=scorer, cv=10)
print('mean: {}, std: {}'.format(lasso_scoring.mean(), la...
y = 37.31*x1 + -0.00*x2
Question 1

Solve the linear system $Ax = b$ where

$$A =\begin{bmatrix}
9. & -4. & 1. & 0. & 0. & 0. & 0. \\
-4. & 6. & -4. & 1. & 0. & 0. & 0. \\
1. & -4. & 6. & -4. & 1. & 0. & 0. \\
0. & 1. & -4. & 6. & -4. & 1. & 0. \\
\vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0. & 0. & 1. & -4. & 6. & -4. & 1. \\
0. ...
def timer(f):
    @functools.wraps(f)
    def wrapper_timer(*args, **kwargs):
        tempo_inicio = time.perf_counter()
        retorno = f(*args, **kwargs)
        tempo_fim = time.perf_counter()
        tempo_exec = tempo_fim - tempo_inicio
        print(f"Tempo de Execução: {tempo_exec:0.4f} segundos")
        retu...
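The decorator above is cut off mid-line; presumably it returns the wrapped result and the inner function. A complete, self-contained version under that assumption (the `soma` example is invented for demonstration):

```python
import functools
import time

def timer(f):
    @functools.wraps(f)  # preserve the wrapped function's name and docstring
    def wrapper_timer(*args, **kwargs):
        tempo_inicio = time.perf_counter()
        retorno = f(*args, **kwargs)
        tempo_fim = time.perf_counter()
        tempo_exec = tempo_fim - tempo_inicio
        print(f"Tempo de Execução: {tempo_exec:0.4f} segundos")
        return retorno      # assumed tail of the truncated cell
    return wrapper_timer

@timer
def soma(a, b):
    return a + b
```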
MIT
Atividade 04.ipynb
Lucas-Otavio/MS211K-2s21
Solution by Gaussian Elimination with Partial Pivoting
P, x = EliminacaoGaussLUPivot(A, b)
x
print(np.max(np.abs(P @ A @ x - b)))
2.384185791015625e-07
Gauss-Jacobi Method
@timer
def GaussJacobi(A, b):
    n = A.shape[0]
    x_history = list()
    x_old = np.zeros(n)
    x_new = np.zeros(n)
    k_limite = 200
    k = 0
    tau = 1E-4
    Dr = 1
    while (k < k_limite and Dr > tau):
        for i in range(n):
            soma = 0
            for j in range(n):
                if (i == j...
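Since the cell above is truncated, here is a minimal, vectorised Gauss-Jacobi sketch (not the notebook's `GaussJacobi`; the test matrix and tolerance are illustrative). The update divides the off-diagonal residual by the diagonal, and convergence is guaranteed for diagonally dominant systems.

```python
import numpy as np

def gauss_jacobi(A, b, tau=1e-10, k_limite=500):
    # Jacobi update: x_new[i] = (b[i] - sum_{j != i} A[i,j] * x_old[j]) / A[i,i]
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(k_limite):
        x_new = (b - (A @ x - d * x)) / d
        if np.max(np.abs(x_new - x)) < tau:
            return x_new
        x = x_new
    return x

# Diagonally dominant system, so the iteration converges
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_jacobi(A, b)
```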
Gauss-Seidel Method
@timer
def GaussSeidel(A, b, k_limite=200):
    n = A.shape[0]
    x_history = list()
    x_old = np.zeros(n)
    x_new = np.zeros(n)
    k = 0
    tau = 1E-4
    Dr = 1
    while (k < k_limite and Dr > tau):
        for i in range(n):
            soma = 0
            for j in range(n):
                if (i == j):
                    ...
hello

> API details.
#hide
from nbdev.showdoc import *
from nbdev_tutorial.core import *

%load_ext autoreload
%autoreload 2

#export
class HelloSayer:
    "Say hello to `to` using `say_hello`"
    def __init__(self, to):
        self.to = to

    def say(self):
        "Do the saying"
        return say_hello(self.to)

show_doc(HelloSayer)
Card(suit...
Apache-2.0
01_hello.ipynb
hannesloots/nbdev-tutorial
Assignment 9: Implement Dynamic Programming

In this exercise, we will begin to explore the concept of dynamic programming and how it relates to various object containers with respect to computational complexity.

Deliverables:

1) Choose and implement a Dynamic Programming algorithm in Python, make sure you are us...
import numpy as np
import pandas as pd
import seaborn as sns
import time
#import itertools
import random
import matplotlib.pyplot as plt
#import networkx as nx
#import pydot
#from networkx.drawing.nx_pydot import graphviz_layout
#from collections import deque

# Dynamic Programming Approach of Finding LIS by reducin...
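The DP idea behind the truncated cell can be sketched compactly: `dp[i]` holds the length of the longest increasing subsequence ending at index `i`, giving an O(n²) algorithm. The array below is illustrative, not the notebook's data.

```python
def lis_length(arr):
    # dp[i] = length of the longest increasing subsequence ending at arr[i]
    if not arr:
        return 0
    dp = [1] * len(arr)
    for i in range(1, len(arr)):
        for j in range(i):
            if arr[j] < arr[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

# e.g. 10, 22, 33, 50, 60 is one longest increasing subsequence here
print("Longest increasing sequence has a length of:", lis_length([10, 22, 9, 33, 21, 50, 41, 60]))
```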
Longest increaseing sequence has a length of: 5
MIT
Assignment9- Dynamic.ipynb
bblank70/MSDS432
B. Test Array Generation
RANDOM_SEED = 300

np.random.seed(RANDOM_SEED)
arr100 = list(np.random.randint(low=1, high=5000, size=100))

np.random.seed(RANDOM_SEED)
arr200 = list(np.random.randint(low=1, high=5000, size=200))

np.random.seed(RANDOM_SEED)
arr400 = list(np.random.randint(low=1, high=5000, size=400))

np.random.seed...
Table 1. Performance Summary
summary = {
    'ArraySize': [len(arr100), len(arr200), len(arr400), len(arr600), len(arr800)],
    'SequenceLength': [metrics[0][0], metrics[0][1], metrics[0][2], metrics[0][3], metrics[0][4]],
    'Time(ms)': [metrics[1][0], metrics[1][1], metrics[1][2], metrics[1][3], metrics[1][4]]
}
df = pd.DataFrame(summary)...
Figure 1. Performance
sns.scatterplot(data=df, x='Time(ms)', y='ArraySize')
Chapter 3 Questions

3.1 Form dollar bars for E-mini S&P 500 futures:

1. Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1).
2. Use Snippet 3.4 on a pandas series t1, where numDays=1.
3. On those sampled features, apply the triple-barri...
import numpy as np
import pandas as pd
import timeit

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, classification_report, confusion_matrix

from mlfinlab.corefns.core_functions import CoreFunctions
from mlfinlab.fracdiff....
MIT
Chapter3/2019-03-03_AS_JJ_Chapter3-Part1.ipynb
alexanu/research
**Apply a symmetric CUSUM filter (Chapter 2, Section 2.5.2.1) where the threshold is the standard deviation of daily returns (Snippet 3.1).**
# Compute daily volatility
vol = CoreFunctions.get_daily_vol(close=data['close'], lookback=50)
vol.plot(figsize=(14, 7), title='Volatility as calculated by de Prado')
plt.show()

# Apply Symmetric CUSUM Filter and get timestamps for events
# Note: Only the CUSUM filter needs a point estimate for volatility
cusum_events ...
1%|▏ | 592/39998 [00:00<00:06, 5912.41it/s]
**Use Snippet 3.4 on a pandas series t1, where numDays=1.**
# Compute vertical barrier
vertical_barriers = CoreFunctions.add_vertical_barrier(cusum_events, data['close'])
vertical_barriers.head()
**On those sampled features, apply the triple-barrier method, where ptSl=[1,1] and t1 is the series you created in point 1.b.**
triple_barrier_events = CoreFunctions.get_events(close=data['close'],
                                                 t_events=cusum_events,
                                                 pt_sl=[1, 1],
                                                 target=vol,
                                                 min_ret=0.01,
                                                 num_threads=1,
                                                 ...
---

3.2 From exercise 1, use Snippet 3.8 to drop rare labels.
clean_labels = CoreFunctions.drop_labels(labels)
print(labels.shape)
print(clean_labels.shape)
(660, 3) (660, 3)
---

3.3 Adjust the getBins function (Snippet 3.5) to return a 0 whenever the vertical barrier is the one touched first.

This change was made inside the module CoreFunctions.

---

3.4 Develop a trend-following strategy based on a popular technical analysis statistic (e.g., crossing moving averages). For each observation, ...
# This question is answered in the notebook: 2019-03-06_JJ_Trend-Following-Question
----

3.5 Develop a mean-reverting strategy based on Bollinger bands. For each observation, the model suggests a side, but not a size of the bet.

* (a) Derive meta-labels for ptSl = [0, 2] and t1 where numDays = 1. Use as trgt the daily standard deviation as computed by Snippet 3.1.
* (b) Train a random forest to decide w...
# This question is answered in the notebook: 2019-03-07_BBand-Question
Import Libraries
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
MIT
Prasant_Kumar/Assignments/F9.ipynb
ks1320/Traffic-Surveillance-System
Data Transformations

We first start with defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images which it might not see otherwise.
# Train Phase transformations
train_transforms = transforms.Compose([
    # transforms.Resize((28, 28)),
    # transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
    transforms.RandomRo...
Dataset and Creating Train/Test Split
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
Dataloader Arguments & Test/Train Dataloaders
SEED = 1

# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)

# For reproducibility
torch.manual_seed(SEED)
if cuda:
    torch.cuda.manual_seed(SEED)

# dataloader arguments - you'll fetch these from the command prompt
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=T...
CUDA Available? True
The model

Let's start with the model we first saw
import torch.nn.functional as F

dropout_value = 0.1

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        # Input Block
        self.convblock1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=16, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU(),
            ...
Model Params

We can't emphasize enough how important viewing the model summary is. Unfortunately, there is no built-in model visualizer, so we have to take external help.
!pip install torchsummary
from torchsummary import summary

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)

model = Net().to(device)
summary(model, input_size=(1, 28, 28))
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1) cuda ---------------------------------------------------------------- Layer (type) Output Shape Param # ================================================================ Conv2d-1 ...
Training and Testing

All right, so we have 24M params, and that's too many; we know that. But the purpose of this notebook is to set things right for our future experiments.

Looking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cooler logs.

Let's write the train and test functions.
from tqdm import tqdm

train_losses = []
test_losses = []
train_acc = []
test_acc = []

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    pbar = tqdm(train_loader)
    correct = 0
    processed = 0
    for batch_idx, (data, target) in enumerate(pbar):
        # get samples
        data, target = data.to(dev...
0%| | 0/469 [00:00<?, ?it/s]
Document AI Specialized Parser with HITL

This notebook shows you how to use Document AI's specialized parsers (e.g. Invoice, Receipt, W2, W9) and also shows Human in the Loop (HITL) output for supported parsers.
# Install necessary Python libraries and restart your kernel after.
!python -m pip install -r ../requirements.txt

from google.cloud import documentai_v1beta3 as documentai
from PIL import Image, ImageDraw
import os
import pandas as pd
Apache-2.0
specialized/specialized_form_parser.ipynb
jlehga1/documentai-notebooks
Set your processor variables
# TODO(developer): Fill these variables with your values before running the sample
PROJECT_ID = "YOUR_PROJECT_ID_HERE"
LOCATION = "us"  # Format is 'us' or 'eu'
PROCESSOR_ID = "PROCESSOR_ID"  # Create processor in Cloud Console
PDF_PATH = "../resources/procurement/invoices/invoice.pdf"  # Update to path of target docume...
The following code calls the synchronous API and parses the form fields and values.
def process_document_sample():
    # Instantiates a client
    client_options = {"api_endpoint": "{}-documentai.googleapis.com".format(LOCATION)}
    client = documentai.DocumentProcessorServiceClient(client_options=client_options)

    # The full resource name of the processor, e.g.:
    # projects/project-id/location...
Draw the bounding boxes

We will now use the spatial data returned by the processor to mark our values on the invoice PDF file that we first converted into a JPG.
JPG_PATH = "../resources/procurement/invoices/invoice.jpg"  # Update to path of a jpg of your sample document.

document_image = Image.open(JPG_PATH)
draw = ImageDraw.Draw(document_image)
for entity in doc.entities:
    # Draw the bounding box around the entities
    vertices = []
    for vertex in entity.page_anchor.pag...
Human in the loop (HITL) Operation **Only complete this section if a HITL Operation is triggered.**
lro = "LONG_RUNNING_OPERATION"  # LRO printed in the previous cell, e.g. projects/660199673046/locations/us/operations/174674963333130330

client = documentai.DocumentProcessorServiceClient()
operation = client._transport.operations_client.get_operation(lro)
if operation.done:
    print("HITL location: {} ".format(str(oper...
This notebook focuses on randomly generated data sets and the performance comparison of algorithms on them.
from IPython.core.display import display, HTML
display(HTML('<style>.container {width:100% !important;}</style>'))

%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np
import torch
from itertools import product, chain

import nmf.mult
import nmf.pgrad
import nmf.nesterov
import nmf_torch.mult
impor...
MIT
performance_on_random.ipynb
ninextycode/finalYearProjectNMF
02: Fitting Power Spectrum Models
=================================

Introduction to the module, beginning with the FOOOF object.
# Import the FOOOF object
from fooof import FOOOF

# Import utility to download and load example data
from fooof.utils.download import load_fooof_data

# Download example data files needed for this example
freqs = load_fooof_data('freqs.npy', folder='data')
spectrum = load_fooof_data('spectrum.npy', folder='data')
Apache-2.0
doc/auto_tutorials/plot_02-FOOOF.ipynb
varman-m/eeg_notebooks_doc
FOOOF Object
------------

At the core of the module, which is object oriented, is the :class:`~fooof.FOOOF` object, which holds relevant data and settings as attributes, and contains methods to run the algorithm to parameterize neural power spectra.

The organization is similar to sklearn:

- A model object is initialized, wit...
# Initialize a FOOOF object
fm = FOOOF()

# Set the frequency range to fit the model
freq_range = [2, 40]

# Report: fit the model, print the resulting parameters, and plot the reconstruction
fm.report(freqs, spectrum, freq_range)
Fitting Models with 'Report'
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The above method 'report' is a convenience method that calls a series of methods:

- :meth:`~fooof.FOOOF.fit`: fits the power spectrum model
- :meth:`~fooof.FOOOF.print_results`: prints out the results
- :meth:`~fooof.FOOOF.plot`: plots the data and model fit

Each of t...
# Alternatively, just fit the model with FOOOF.fit() (without printing anything)
fm.fit(freqs, spectrum, freq_range)

# After fitting, plotting and parameter fitting can be called independently:
# fm.print_results()
# fm.plot()
Model Parameters
~~~~~~~~~~~~~~~~

Once the power spectrum model has been calculated, the model fit parameters are stored as object attributes that can be accessed after fitting.

Following the sklearn convention, attributes that are fit as a result of the model have a trailing underscore, for example:

- ``aperiodic_params_``
- ...
# Aperiodic parameters
print('Aperiodic parameters: \n', fm.aperiodic_params_, '\n')

# Peak parameters
print('Peak parameters: \n', fm.peak_params_, '\n')

# Goodness of fit measures
print('Goodness of fit:')
print(' Error - ', fm.error_)
print(' R^2  - ', fm.r_squared_, '\n')

# Check how many peaks were fit
print('...
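As a sketch of what these attributes describe (parameter values below are invented, not a real fit): in fixed mode the model of log10 power is an aperiodic component, offset minus exponent times log10(f), plus one gaussian per detected peak.

```python
import numpy as np

# Hypothetical fit results, mimicking fm.aperiodic_params_ and fm.gaussian_params_
offset, exponent = 1.0, 1.5   # aperiodic component (fixed mode, no knee)
cf, pw, bw = 10.0, 0.5, 2.0   # one peak: center frequency, power, bandwidth

freqs = np.linspace(2, 40, 200)
aperiodic = offset - exponent * np.log10(freqs)
peak = pw * np.exp(-(freqs - cf) ** 2 / (2 * bw ** 2))
model = aperiodic + peak      # modeled spectrum, in log10(power)
```

This is why the aperiodic part appears as a straight line in log-log space, with peaks as gaussian bumps above it.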
Selecting Parameters
~~~~~~~~~~~~~~~~~~~~

You can also select parameters using the :meth:`~fooof.FOOOF.get_params` method, which can be used to specify which parameters you want to extract.
# Extract a model parameter with `get_params`
err = fm.get_params('error')

# Extract parameters, indicating sub-selections of parameters
exp = fm.get_params('aperiodic_params', 'exponent')
cfs = fm.get_params('peak_params', 'CF')

# Print out a custom parameter report
template = ("With an error level of {error:1.2f}, F...
For a full description of how you can access data with :meth:`~fooof.FOOOF.get_params`, check the method's documentation.

As a reminder, you can access the documentation for a function using '?' in a Jupyter notebook (ex: `fm.get_params?`), or more generally with the `help` function in general Python (ex: `help(get_params)...
# Compare the 'peak_params_' to the underlying gaussian parameters
print('  Peak Parameters \t Gaussian Parameters')
for peak, gauss in zip(fm.peak_params_, fm.gaussian_params_):
    print('{:5.2f} {:5.2f} {:5.2f} \t {:5.2f} {:5.2f} {:5.2f}'.format(*peak, *gauss))
FOOOFResults
~~~~~~~~~~~~

There is also a convenience method to return all model fit results: :func:`~fooof.FOOOF.get_results`. This method returns all the model fit parameters, including the underlying Gaussian parameters, collected together into a FOOOFResults object.

The FOOOFResults object, which in Python terms is a nam...
# Grab each model fit result with `get_results` to gather all results together
# Note that this returns a FOOOFResult object
fres = fm.get_results()

# You can also unpack all fit parameters when using `get_results`
ap_params, peak_params, r_squared, fit_error, gauss_params = fm.get_results()

# Print out the FOOOFRes...
Discretisation

Discretisation is the process of transforming continuous variables into discrete variables by creating a set of contiguous intervals that span the range of the variable's values. Discretisation is also called **binning**, where bin is an alternative name for interval.

Discretisation helps handle outliers...
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer

from feature_engine.discretisers import EqualWidthDiscretiser

# load the numerical variables of the Titanic Dataset
data = pd.read_csv('../t...
BSD-3-Clause
Section-08-Discretisation/08.01-Equal-width-discretisation.ipynb
cym3509/FeatureEngineering
The variables Age and Fare contain missing data, which I will fill by extracting a random sample of the variable.
def impute_na(data, variable):
    df = data.copy()

    # random sampling
    df[variable + '_random'] = df[variable]

    # extract the random sample to fill the na
    random_sample = X_train[variable].dropna().sample(
        df[variable].isnull().sum(), random_state=0)

    # pandas needs to have the same index i...
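The mechanics of random-sample imputation can be seen in a standalone sketch with toy data (not the Titanic variables): draw as many observed values as there are NaNs, then re-index the sample so the assignment lands on the missing positions.

```python
import numpy as np
import pandas as pd

# Toy series with missing values
s = pd.Series([25.0, np.nan, 40.0, 19.0, np.nan, 33.0])

observed = s.dropna()
# Draw as many observed values as there are NaNs
fill = observed.sample(s.isnull().sum(), random_state=0)
# Align the sampled values with the NaN positions before assigning
fill.index = s[s.isnull()].index

imputed = s.copy()
imputed[imputed.isnull()] = fill
```

Because the fills are drawn from the observed distribution, the variable's distribution is roughly preserved, unlike mean imputation.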
Equal width discretisation with pandas and NumPy

First we need to determine the intervals' edges or limits.
# let's capture the range of the variable age
age_range = X_train['age'].max() - X_train['age'].min()
age_range

# let's divide the range into 10 equal width bins
age_range / 10
The range or width of our intervals will be 7 years.
# now let's capture the lower and upper boundaries
min_value = int(np.floor(X_train['age'].min()))
max_value = int(np.ceil(X_train['age'].max()))

# let's round the bin width
inter_value = int(np.round(age_range / 10))

min_value, max_value, inter_value

# let's capture the interval limits, so we can pass them to the...
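The same boundary arithmetic works in a self-contained form with toy ages (illustrative values, not the Titanic data); with a minimum of 2 and maximum of 71 the width rounds to roughly the 7 years quoted above.

```python
import numpy as np

# Toy ages standing in for X_train['age']
ages = np.array([2, 13, 25, 30, 47, 58, 64, 71])

n_bins = 10
edges = np.linspace(ages.min(), ages.max(), n_bins + 1)  # 11 edges -> 10 equal-width bins
width = edges[1] - edges[0]

# Interval index for each observation (0 .. n_bins - 1)
bin_idx = np.digitize(ages, edges[1:-1])
```

`pd.cut` with these `edges` as `bins` produces the same assignment with labelled intervals.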
We can see in the above output how, by discretising using equal width, we placed each Age observation within one interval / bin. For example, age=13 was placed in the 7-14 interval, whereas age 30 was placed into the 28-35 interval.

When performing equal width discretisation, we guarantee that the intervals are all of th...
X_train.groupby('Age_disc')['age'].count()

X_train.groupby('Age_disc')['age'].count().plot.bar()
plt.xticks(rotation=45)
plt.ylabel('Number of observations per bin')
The majority of people on the Titanic were between 14-42 years of age.

Now, we can discretise Age in the test set, using the same interval boundaries that we calculated for the train set:
X_test['Age_disc_labels'] = pd.cut(x=X_test['age'], bins=intervals,
                                   labels=labels, include_lowest=True)
X_test['Age_disc'] = pd.cut(x=X_test['age'], bins=intervals,
                            ...
Equal width discretisation with Feature-Engine
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
    data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0)
X_train.shape, X_test.shape

# replace NA in both train and test sets
X_train['age'] = impute_na(data, 'age')
X_test['age'] = impute_na...
We can see quite clearly that equal width discretisation does not improve the value spread. The original variable Fare was skewed, and the discrete variable is also skewed.

Equal width discretisation with Scikit-learn
# Let's separate into train and test set
X_train, X_test, y_train, y_test = train_test_split(
    data[['age', 'fare']], data['survived'], test_size=0.3, random_state=0)
X_train.shape, X_test.shape

# replace NA in both train and test sets
X_train['age'] = impute_na(data, 'age')
X_test['age'] = impute_na...
Obligatory imports
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import matplotlib

%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (12, 8)
matplotlib.rcParams['font.size'] = 20
matplotlib.rcParams['lines.linewidth'] = 4
matplotlib.rcParams['xtick.major.size'] = 10
matplotlib.rcParams['ytick...
CC-BY-4.0
day5/02-NN.ipynb
JanaLasser/data-science-course
We use the MNIST Dataset again
import IPython

url = 'http://yann.lecun.com/exdb/mnist/'
iframe = '<iframe src=' + url + ' width=80% height=400px></iframe>'
IPython.display.HTML(iframe)
Fetch the data
from sklearn.datasets import fetch_mldata

mnist = fetch_mldata('MNIST original', data_home='../day4/data/')
allimages = mnist.data
allimages.shape
all_image_labels = mnist.target
set(all_image_labels)
check out the data
digit1 = mnist.data[0, :].reshape(28, -1)  # arr.reshape(4, -1) is equivalent to arr.reshape(4, 7), if arr has size 28
fig, ax = plt.subplots(figsize=(1.5, 1.5))
ax.imshow(digit1, vmin=0, vmax=1)
Theoretical background

**Warning: math ahead**

Taking logistic regression a step further: neural networks

How do (artificial) neural networks predict a label from features?

* The *input layer* has **dimension = number of features.**
* For each training example, each feature value is "fed" into the input layer.
* Each "neu...
len(allimages)
Sample the data; 70000 is too many images to handle on a single PC.
len(allimages)

size_desired_dataset = 2000
sample_idx = np.random.choice(len(allimages), size_desired_dataset)
images = allimages[sample_idx, :]
image_labels = all_image_labels[sample_idx]

set(image_labels)
image_labels.shape
Partition into training and test set *randomly*

**As a rule of thumb, an 80/20 split between training/test dataset is often recommended.**

See below for cross validation and how that changes this rule of thumb.
from scipy.stats import itemfreq
from sklearn.model_selection import train_test_split

training_data, test_data, training_labels, test_labels = train_test_split(images, image_labels, train_size=0.8)
/home/dmanik/venvs/teaching/lib/python3.5/site-packages/sklearn/model_selection/_split.py:2026: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified. FutureWarning)
**Importance of normalization**

If Feature A is in the range [0,1] and Feature B is in [10000,50000], SVM (in fact, most classifiers) will suffer inaccuracy. The solution is to *normalize* (AKA "feature scaling") each feature to the same interval, e.g. [0,1] or [-1, 1].

**scipy provides a standard function for th...
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()

# Fit only to the training data: IMPORTANT
scaler.fit(training_data)

from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=5000)
clf.fit(scaler.transform(training_data), training_labels)
clf.sc...
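The leakage-avoiding pattern (fit the scaler statistics on the training split only, then reuse them on the test split) can be shown in a minimal NumPy form, with made-up features on very different scales:

```python
import numpy as np

# Feature A in [0, 1], Feature B in [10000, 50000] (illustrative values)
train = np.array([[0.2, 10000.0],
                  [0.8, 30000.0],
                  [0.5, 50000.0]])
test = np.array([[0.4, 20000.0]])

# Fit statistics on the training data only
mu = train.mean(axis=0)
sigma = train.std(axis=0)

train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma   # reuse the train statistics: no test-set leakage
```

This is exactly what `scaler.fit(training_data)` followed by `scaler.transform(...)` on both splits does.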
/home/dmanik/venvs/teaching/lib/python3.5/site-packages/sklearn/utils/validation.py:475: DataConversionWarning: Data with input dtype uint8 was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning)
Visualize the hidden layer:
# source:
# http://scikit-learn.org/stable/auto_examples/neural_networks/plot_mnist_filters.html
fig, axes = plt.subplots(4, 4, figsize=(15, 15))

# use global min / max to ensure all weights are shown on the same scale
vmin, vmax = clf.coefs_[0].min(), clf.coefs_[0].max()
for coef, ax in zip(clf.coefs_[0].T, axes.rav...
Not bad, but is it better than logistic regression? Check with learning curves:
from sklearn.model_selection import learning_curve
import pandas as pd

curve = learning_curve(clf, scaler.transform(images), image_labels)
train_sizes, train_scores, test_scores = curve
train_scores = pd.DataFrame(train_scores)
train_scores.loc[:, 'train_size'] = train_sizes
test_scores = pd.DataFrame(test_scores)
test...
/home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release. warnings.warn(msg, UserWarning)
CC-BY-4.0
day5/02-NN.ipynb
JanaLasser/data-science-course
Not really, so we can try to improve it with a parameter-space search. Parameter space search with `GridSearchCV`
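The core idea of `GridSearchCV` — exhaustively trying every parameter combination and keeping the best cross-validated score — can be sketched in plain Python. The toy scoring function below is a made-up stand-in for "fit the model, return validation accuracy":

```python
from itertools import product

def grid_search(train_and_score, param_grid):
    """Exhaustively try every parameter combination, keep the best score."""
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective standing in for "fit the model, return validation accuracy"
score_fn = lambda alpha, hidden: 1.0 - (alpha - 0.01) ** 2 - (hidden - 50) ** 2 * 1e-6
best, _ = grid_search(score_fn, {"alpha": [0.001, 0.01, 0.1], "hidden": [25, 50, 100]})
# best -> {'alpha': 0.01, 'hidden': 50}
```

`GridSearchCV` adds cross-validation and refitting on top of this loop, but the search itself is exactly this exhaustive product over the grid.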
from sklearn.model_selection import GridSearchCV clr = MLPClassifier() clf = GridSearchCV(clr, {'alpha':np.logspace(-8, -1, 2)}) clf.fit(scaler.transform(images), image_labels) clf.best_params_ clf.best_score_ nn_tuned = clf.best_estimator_ nn_tuned.fit(scaler.transform(training_data), training_labels) curve = learning...
/home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release. warnings.warn(msg, UserWarning) No handles with labels found to put in legend.
CC-BY-4.0
day5/02-NN.ipynb
JanaLasser/data-science-course
The increase in accuracy is minuscule. Multi-layer NNs
from sklearn.preprocessing import StandardScaler scaler = StandardScaler() scaler.fit(images) images_normed = scaler.transform(images) clr = MLPClassifier(hidden_layer_sizes=(25,25)) clf = GridSearchCV(clr, {'alpha':np.logspace(-80, -1, 3)}) clf.fit(images_normed, image_labels) clf.best_score_ clf.best_params_ nn_...
/home/dmanik/venvs/teaching/lib/python3.5/site-packages/seaborn/timeseries.py:183: UserWarning: The tsplot function is deprecated and will be removed or replaced (in a substantially altered version) in a future release. warnings.warn(msg, UserWarning) No handles with labels found to put in legend.
CC-BY-4.0
day5/02-NN.ipynb
JanaLasser/data-science-course
Numerical Methods -- Assignment 5 Problem 1 -- Energy density The matter and radiation densities of the universe at redshift $z$ are $$\Omega_m(z) = \Omega_{m,0}(1+z)^3$$ $$\Omega_r(z) = \Omega_{r,0}(1+z)^4$$ where $\Omega_{m,0}=0.315$ and $\Omega_{r,0} = 9.28656 \times 10^{-5}$. (a) Plot
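Setting the two densities equal already pins down the non-trivial equality redshift in closed form, which the root finders in the following parts should reproduce:

```python
# Equality of the two densities: O_m0*(1+z)^3 = O_r0*(1+z)^4  =>  1+z = O_m0/O_r0
O_m0 = 0.315
O_r0 = 9.28656e-5

z_eq = O_m0 / O_r0 - 1  # matter-radiation equality redshift, roughly 3391
```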
%config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt import numpy as np z = np.linspace(-1000,4000,10000) O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,3) O_r = O_r0*np.power(z+1,4) #define where the roots are x1 = -1; x2 = O_m0/O_r0 y1 = O_m0*np.power(x1+1,3) y2 = O_m0*np.power(x2+...
_____no_output_____
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
(b) Analytical solution An analytical solution can be found by equating the two expressions. Since $z$ denotes the redshift, a physical quantity, it must take a real value to be meaningful. Thus\begin{align*}\Omega_m(z) &= \Omega_r(z)\\\Omega_{m,0}(1+z)^3 &= \Omega_{r,0}(1+z)^4\\(1+z)^3(0.315-9.2...
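The `scipy.optimize.bisect` call below can also be written out by hand; a minimal sketch (assuming a sign change on the bracket) looks like:

```python
def bisect_root(f, a, b, tol=1e-10):
    """Bisection: assumes f(a) and f(b) have opposite signs on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:        # root is in the left half
            b = m
        else:                     # root is in the right half
            a, fa = m, f(m)
    return 0.5 * (a + b)

O_m0, O_r0 = 0.315, 9.28656e-5
f = lambda z: O_m0 * (1 + z)**3 - O_r0 * (1 + z)**4
root = bisect_root(f, 1000.0, 4000.0)  # about 3390.9988
```

Each step halves the bracket, so the error shrinks by a guaranteed factor of two per iteration regardless of the function's shape.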
from scipy.optimize import bisect def f(z): O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,3) O_r = O_r0*np.power(z+1,4) return O_m - O_r z1 = bisect(f,-1000,0,xtol=1e-10) z2 = bisect(f,0,4000,xtol=1e-10) print("The roots are found to be:", z1, z2)
The roots are found to be: -1.00000000003 3390.9987595
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
(d) Secant method The $\textit{secant method}$ uses secant lines to find the root. A secant line is a straight line that intersects a curve at two points. In the secant method, a line is drawn through two points on the continuous function and extended until it intersects the $x$ axis. A secant line $y$ is drawn fr...
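A self-contained sketch of the secant iteration described above (assuming the two starting points give distinct function values) is:

```python
def secant(f, x0, x1, eps=1e-3, max_iter=100):
    """Secant iteration: Newton's method with f' replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) <= eps:
            break
        slope = (f1 - f0) / (x1 - x0)   # assumes f0 != f1
        x0, f0 = x1, f1
        x1 = x1 - f1 / slope
        f1 = f(x1)
    return x1

O_m0, O_r0 = 0.315, 9.28656e-5
f = lambda z: O_m0 * (1 + z)**3 - O_r0 * (1 + z)**4
root = secant(f, 3000.0, 4000.0)
```

Unlike Newton's method it needs no analytic derivative, at the cost of slightly slower (superlinear rather than quadratic) convergence.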
def secant(f, x0, x1, eps): f_x0 = f(x0) f_x1 = f(x1) iteration_counter = 0 while abs(f_x1) > eps and iteration_counter < 100: try: denominator = float(f_x1 - f_x0)/(x1 - x0) x = x1 - float(f_x1)/denominator except ZeroDivisionError: print "Error! - d...
The roots are found to be: -0.999466618551 3390.9987595
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
(e) Newton-Raphson method In numerical methods, $\textit{Newton-Raphson method}$ is a method for finding successively better approximations to the roots of a real-valued function. The algorithm is as follows:* Starting with a function $f$ defined over the real number $x$, the function's derivative $f'$, and an initia...
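The algorithm above can be sketched directly; applied to the density function with its analytic derivative, it reproduces the same root:

```python
def newton(f, fprime, x, eps=1e-3, max_iter=100):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= eps:
            break
        x = x - fx / fprime(x)
    return x

O_m0, O_r0 = 0.315, 9.28656e-5
f = lambda z: O_m0 * (1 + z)**3 - O_r0 * (1 + z)**4
fprime = lambda z: 3 * O_m0 * (1 + z)**2 - 4 * O_r0 * (1 + z)**3
root = newton(f, fprime, 3000.0)
```

Near a simple root the error is roughly squared at each step, so far fewer iterations are needed than with bisection.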
def fprime(z): O_m0 = 0.315 O_r0 = 9.28656e-5 O_m = O_m0*np.power(z+1,2) O_r = O_r0*np.power(z+1,3) return 3*O_m -4*O_r def Newton(f, dfdx, x, eps): f_value = f(x) iteration_counter = 0 while abs(f_value) > eps and iteration_counter < 100: try: x = x - float(f_value)...
The roots are found to be: -0.9993234602 3390.9987595
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
Now change the initial guesses to values far from those obtained in (b), and test how each of the three algorithms performs.
#test how the bisection method perform import time start1 = time.time() z1 = bisect(f,-1000,1000,xtol=1e-10) end1 = time.time() start2 = time.time() z2 = bisect(f,3000,10000,xtol=1e-10) end2 = time.time() err1 = abs((z1-(-1))/(-1)) err2 = abs((z2-(O_m0/O_r0-1))/(O_m0/O_r0-1)) print "The roots are found to be:",z1,z2 pr...
The roots are found to be: -1.00051824126 3390.9987595 With a deviation of: 0.000518241260632 0.0 Time used are: 0.000991821289062 0.000278949737549 Roots found after 18 and 7 loops
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
Tested with the given function, the bisection method is the fastest and most reliable at finding the first root; however, in determining the second root, both the secant method and Newton's method showed better performance, with zero deviation from the actual value, and a muc...
import matplotlib.pyplot as plt import numpy as np from scipy.optimize import newton from scipy.integrate import quad from math import * r = np.array([7.80500, 15.6100,31.2200,78.0500,156.100]) #r in kpc vt = np.array([139.234,125.304,94.6439,84.5818,62.8640]) # vt in km/s vr = np.array([-15.4704,53.7018,-283.932,-44...
The pericentre is found to be: 52.2586723359 kpc for the NFW profile The pericentre is found to be: 55.9497763757 kpc for the Hernquist profile
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
The table below lists all the parameters of the five stars. Problem 3 -- System of equations $$f(x,y) = x^2+y^2-50=0$$$$g(x,y) = x \times y -25 = 0$$ (a) Analytical solution First, from $f(x,y)-2g(x,y)$ we find:\begin{align*}x^2+y^2-2xy &=0\\(x-y)^2 &= 0\\x&=y\end{align*}Then, from $f(x,y)+2g(x,y)$, we find:\begin{align*}x^2+y^2...
from scipy.optimize import fsolve import numpy as np f1 = lambda x: [x[0]**2+x[1]**2-50,x[0]*x[1]-25] #the Jacobian needed to implement Newton's method fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2) #define the domain where we want to find the solution (x,y) a = np.linspace(-10,10,100) b = a #for ...
The sets of solutions are found to be: [[-5. -5.] [ 5. 5.]]
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
From the above we see that the solutions are indeed $(x,y) = (5,5)$ or $(x,y) = (-5,-5)$. (c) Convergence
%config InlineBackend.figure_format = 'retina' import numpy as np from scipy.optimize import fsolve import matplotlib.pyplot as plt def f(x, y): return x**2+y**2-50; def g(x, y): return x*y-25 x = np.linspace(-6, 6, 500) @np.vectorize def fy(x): x0 = 0.0 def tmp(y): return f(x, y) y1, ...
_____no_output_____
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
(d) Maximum iterations Now also apply the Jacobian. The Jacobian of the system of equations is simply$$\mathbf{J} = \begin{bmatrix}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\\frac{\partial g}{\partial x} & \frac{\partial g}{\partial y}\end{bmatrix}$$$$=\begin{bmatrix}2x & 2y \\y & x\en...
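With a 2x2 Jacobian the Newton update can be written out by hand, inverting $\mathbf{J}$ explicitly. Note that $\det\mathbf{J} = 2(x^2-y^2)$ vanishes on the line $x=y$, i.e. exactly at the solutions, so convergence near the root is only linear rather than quadratic:

```python
def newton_system(F, J, x, y, tol=1e-12, max_iter=60):
    """Newton's method for a 2x2 system, inverting the Jacobian by hand."""
    for _ in range(max_iter):
        f, g = F(x, y)
        a, b, c, d = J(x, y)           # Jacobian entries [[a, b], [c, d]]
        det = a * d - b * c
        if det == 0:                   # singular Jacobian: give up
            break
        dx = (-f * d + g * b) / det    # delta = -J^{-1} F, via the 2x2 inverse
        dy = (c * f - a * g) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

F = lambda x, y: (x**2 + y**2 - 50, x * y - 25)
J = lambda x, y: (2 * x, 2 * y, y, x)
sol = newton_system(F, J, 1.0, 9.0)    # converges to (5, 5)
```

Starting from (1, 9) the error is halved at every step, the characteristic linear rate of Newton's method at a root with singular Jacobian.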
fd = lambda x: np.array([[2*x[0],2*x[1]],[x[1],x[0]]]).reshape(2,2) i =1 I = np.array([]) F = np.array([]) G = np.array([]) X_std = np.array([]) Y_std = np.array([]) while i<50: x_result = fsolve(f1,[-100,-100],fprime=fd,maxfev=i) f_result = f(x_result[0],x_result[1]) g_result = g(x_result[0],x_result[1])...
_____no_output_____
MIT
numerical5.ipynb
fatginger1024/NumericalMethods
Exploratory Data Analysis
from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession from pyspark.sql.types import * from pyspark.sql import functions as F spark = SparkSession.builder.master('local[1]').appName("Jupyter").getOrCreate() sc = spark.sparkContext #test if this works import pandas as pd import numpy as np im...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Load Data
#collisions = spark.read.csv('data/accidents.csv', header='true', inferSchema = True) #collisions.show(2) df_new = spark.read.csv('data/accidents_new.csv', header='true', inferSchema = True)
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Data Perspective
_____
* One variable
  * Numeric variables:
    * continuous
    * discrete
  * Categorical variables:
    * ordinal
    * nominal
* Multiple variables:
  * Numeric x Numeric
  * Categorical x Numeric
  * Categorical x Categorical
____________________
Overview
print('The total number of rows : ', df_new.count(), '\nThe total number of columns :', len(df_new.columns))
The total number of rows : 128647 The total number of columns : 40
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Data SchemaPrint the data schema for our dataset - SAAQ Accident Information
df_new.printSchema() # Create temporary table query with SQL df_new.createOrReplaceTempView('AccidentData') accidents_limit_10 = spark.sql( ''' SELECT * FROM AccidentData LIMIT 10 ''' ).toPandas() accidents_limit_10
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
One Variable __________ a. Numeric - Data Totals. Totals for various accident records.
from pyspark.sql import functions as func #df_new.agg(func.sum("NB_BLESSES_VELO").alias('Velo'),func.sum("NB_VICTIMES_MOTO"),func.sum("NB_VEH_IMPLIQUES_ACCDN")).show() df_new.agg(func.sum("NB_VEH_IMPLIQUES_ACCDN").alias('Ttl Cars In Accidents')).show() df_new.agg(func.sum("NB_VICTIMES_TOTAL").alias('Ttl Victims')).sh...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
b. Categorical GRAVITE - severity of the accident
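The Spark SQL queries in this section all follow the same `GROUP BY ... ORDER BY COUNT(*) DESC` shape; the stdlib `sqlite3` sketch below illustrates it on a made-up miniature of the table:

```python
import sqlite3

# A made-up miniature of the AccidentData table, just to show the query shape
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE AccidentData (GRAVITE TEXT)")
con.executemany("INSERT INTO AccidentData VALUES (?)",
                [("Light",), ("Light",), ("Severe",), ("Light",), ("Severe",), ("Fatal",)])
rows = con.execute(
    "SELECT GRAVITE, COUNT(*) AS Total FROM AccidentData "
    "GROUP BY GRAVITE ORDER BY Total DESC"
).fetchall()
# rows -> [('Light', 3), ('Severe', 2), ('Fatal', 1)]
```

Spark evaluates the same SQL lazily and in a distributed fashion, but the grouping and ordering semantics are identical.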
gravite_levels = spark.sql( ''' SELECT GRAVITE, COUNT(*) as Total FROM AccidentData GROUP BY GRAVITE ORDER BY Total DESC ''' ).toPandas() gravite_levels # Pie Chart fig,ax = plt.subplots(1,1,figsize=(12,6)) wedges, texts, autotexts = ax.pie(gravite_levels['Total'], radius=2, #labeldistance=2, pctdistance=1.1, ...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
METEO - Weather Conditions
meteo_conditions = spark.sql( ''' SELECT METEO, COUNT(*) as Total FROM AccidentData GROUP BY METEO ORDER BY Total DESC ''' ).toPandas() meteo_conditions['METEO'] = meteo_conditions['METEO'].replace( {11:'Clear',12:'Overcast: cloudy/dark',13:'Fog/mist', 14...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Multiple Variables____________ Numeric X Categorical 1. Accident Victims by Municipality
victims_by_municipality = spark.sql( ''' SELECT MUNCP, SUM(NB_VICTIMES_TOTAL) as Total FROM AccidentData GROUP BY MUNCP ORDER BY Total DESC ''' ).toPandas() victims_by_municipality fig,ax = plt.subplots(1,1,figsize=(10,6)) victims_by_municipality.plot(x = 'MUNCP', y = 'Total', kind = 'barh', color = 'C0', ax = ax, l...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
2. Total Collisions by Day of Week
collisions_by_day = spark.sql( ''' SELECT WEEK_DAY, COUNT(WEEK_DAY) as Number_of_Collisions FROM AccidentData GROUP BY WEEK_DAY ORDER BY Number_of_Collisions DESC ''' ).toPandas() collisions_by_day fig,ax = plt.subplots(1,1,figsize=(10,6)) collisions_by_day.plot(x = 'WEEK_DAY', y = 'Number_of_Collisions', kind = 'ba...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
"VE", Friday has the highest number of collisions. 3. Top 10 Accidents by street
accidents_by_street = spark.sql( ''' SELECT STREET, COUNT(STREET) as Number_of_Accidents FROM AccidentData GROUP BY STREET ORDER BY Number_of_Accidents DESC LIMIT 10 ''' ).toPandas() fig,ax = plt.subplots(1,1,figsize=(10,6)) #accidents_by_street.plot(x = 'STREET', y = 'Number_of_Accidents', kind = 'barh', color = 'C0'...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Numeric X Numeric Correlation Heatmap Illustrates the correlation between the numeric variables of the dataset.
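Each cell of the heatmap is a Pearson correlation coefficient; for reference, it can be computed by hand (toy numbers below, not the accident data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson([1, 2, 3, 4], [2, 4, 6, 8])  # exactly 1 for a perfect linear relation
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate none.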
plot_df = spark.sql( ''' SELECT METEO, SURFACE, LIGHT, TYPE_ACCDN, NB_MORTS, NB_BLESSES_GRAVES, NB_VEH_IMPLIQUES_ACCDN, NB_VICTIMES_TOTAL FROM AccidentData ''' ).toPandas() corrmat = plot_df.corr() f, ax = plt.subplots(figsize=(10, 7)) sns.heatmap(corrmat, vmax=.8, square=True) plt.savefig('figures/heatmap.png') pl...
_____no_output_____
MIT
code/data_EDA.ipynb
ArwaSheraky/Montreal-Collisions
Histogram plot for all values (even though 0 is actually useless)
fig, ax = plt.subplots(figsize=(10,10)) ax.hist(expressions, alpha=0.5, label=expressions.columns) ax.legend()
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Filter out all values that are equal to 0
expressions.value_counts(sort=True)
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
===> columns contempt, unknown and NF can be dropped
expressions_drop = expressions.drop(columns=["unknown", "contempt", "NF"]) exp_nan = expressions_drop.replace(0, np.NaN) exp_stacked = exp_nan.stack(dropna=True) exp_unstacked = exp_stacked.reset_index(level=1) expressions_single = exp_unstacked.rename(columns={"level_1": "expression"}).drop(columns=[0]) expressions_si...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Append expressions to expw
expw["expression"] = expressions_single["expression"] expw.head()
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Remove unnecessary columns
expw_minimal = expw.drop(expw.columns[1:-1], axis=1) expw_minimal.loc[:, "Image name"] = data_dir + "/" + expw_minimal["Image name"].astype(str) expw_minimal.shape
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Histogram of expression distribution
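The `value_counts` call used below is essentially a sorted frequency count; a stdlib sketch with toy labels:

```python
from collections import Counter

# Toy stand-in for the per-image expression labels
labels = ["happy", "sad", "happy", "angry", "happy", "sad"]
counts = Counter(labels).most_common()  # frequency pairs, sorted descending
```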
x_ticks = [f"{idx} = {expr}, count: {count}" for idx, (expr, count) in enumerate(zip(list(expressions_single.value_counts().index.get_level_values(0)), expressions_single.value_counts().values))] x_ticks ax = expressions_single.value_counts().plot(kind='barh') ax.set_yticklabels(x_ticks)
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Create a csv file with all absolute image paths for annotating with FairFace
col_name = "img_path" image_names = expw[["Image name"]] image_names.head() image_names.rename(columns={"Image name": "img_path"}, inplace=True) image_names.loc[:, "img_path"] = data_dir + "/" + image_names["img_path"].astype(str) save_path = "/home/steffi/dev/independent_study/FairFace/expw_image_paths.csv" image_name...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Filter only img_paths which contain "black", "African", "chinese", "asian"
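The filtering below relies on regex substring matches; a self-contained sketch with hypothetical file names (note that, like the default `str.contains`, the match is case-sensitive, so "African" and "asian" are distinct patterns):

```python
import re

# Hypothetical image paths -- the real ones come from the ExpW "Image name" column
paths = [
    "data/black_angry_001.jpg",
    "data/asian_happy_002.jpg",
    "data/white_sad_003.jpg",
    "data/African_surprise_004.jpg",
]

pattern = re.compile(r"black|African|asian|chinese")
filtered = [p for p in paths if pattern.search(p)]  # keeps paths 1, 2 and 4
```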
black = image_names.loc[image_names.img_path.str.contains('(black)'), :] african = image_names.loc[image_names.img_path.str.contains('(African)'), :] asian = image_names.loc[image_names.img_path.str.contains('(asian)'), :] chinese = image_names.loc[image_names.img_path.str.contains('(chinese)'), :] filtered = pd.concat...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Filter and save subgroups Anger
black_angry_annoyed = black.loc[image_names.img_path.str.contains('(angry)|(annoyed)'), :] black_angry_annoyed.to_csv("/home/steffi/dev/independent_study/FairFace/black_angry_annoyed.csv", index=False) black_angry_annoyed.head() african_angry_annoyed = african.loc[image_names.img_path.str.contains('(angry)|(annoyed)'),...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Surprise
black_awe_astound_amazed = black.loc[image_names.img_path.str.contains('(awe)|(astound)|(amazed)'), :] black_awe_astound_amazed black_awe_astound_amazed.to_csv("/home/steffi/dev/independent_study/FairFace/black_awe_astound_amazed.csv", index=False) african_awe = african.loc[image_names.img_path.str.contains('(awe)'), :...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Fear
black_fear = black.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :] black_fear.shape african_fear = african.loc[image_names.img_path.str.contains('(fear)|(frightened)|(anxious)|(shocked)'), :] black_african_fear = pd.concat([african_fear, black_fear]) black_african_fear.shape black_a...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Disgust
black_disgust = black.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :] african_digsust = african.loc[image_names.img_path.str.contains('(distaste)|(disgust)'), :] african_digsust.shape black_african_disgust = pd.concat([black_disgust, african_digsust]) pd.set_option('display.max_colwidth', -1) black_af...
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
Saving all filtered to csv
filtered_save_path = "/home/steffi/dev/independent_study/FairFace/filtered_expw_image_paths.csv" filtered.to_csv(filtered_save_path, index=False)
_____no_output_____
MIT
notebooks/ExpW EDA.ipynb
StefanieStoppel/InferFace
TMVA_Tutorial_Classification_Tmva_App TMVA example, for classification with the following objectives: * Apply a BDT with TMVA **Author:** Lailin XU This notebook tutorial was automatically generated with ROOTBOOK-izer from the macro found in the ROOT repository on Tuesday, April 27, 2021 at 01:21 AM.
from ROOT import TMVA, TFile, TTree, TCut, TH1F, TCanvas, gROOT, TLegend from subprocess import call from os.path import isfile from array import array gROOT.SetStyle("ATLAS")
Welcome to JupyROOT 6.22/07
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial