Columns: markdown (stringlengths 0–1.02M), code (stringlengths 0–832k), output (stringlengths 0–1.02M), license (stringlengths 3–36), path (stringlengths 6–265), repo_name (stringlengths 6–127)
Merge the validation PDFs created so far
# ! pip install PyPDF2 # ! ls -lh /home/jovyan/*.pdf pdf_list = ['/home/jovyan/global_mean_tasmax_370.pdf', '/home/jovyan/tasmax_max_bias_corrected.pdf', '/home/jovyan/tasmax_max_cmip6.pdf', '/home/jovyan/tasmax_max_downscaled.pdf', '/home/jovyan/tasmax_mean_bias_corr...
_____no_output_____
MIT
notebooks/downscaling_pipeline/global_validation.ipynb
brews/downscaleCMIP6
CSX46 - Class 19 - MCODE. In this notebook, we will analyze a simple graph (`test.dot`) and then the Krogan network using the MCODE community detection algorithm.
import pygraphviz import igraph import numpy import pandas import sys from collections import defaultdict test_graph = FILL IN HERE nodes = test_graph.nodes() edges = FILL IN HERE test_igraph = FILL IN HERE test_igraph.summary() igraph.drawing.plot(FILL IN HERE)
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
Function `mcode` takes a graph adjacency list `adj_list` and a float parameter `vwp` (vertex weight probability), and returns a list of cluster assignments (of length equal to the number of clusters). Original code from True Price at UNC Chapel Hill [link to original code](https://github.com/trueprice/python-graph-clu...
def mcode(adj_list, vwp): # Stage 1: Vertex Weighting N = len(adj_list) edges = [[]]*N weights = dict((v, 1.) for v in range(0,N)) edges=defaultdict(set) for i in range(0,N): edges[i] = # MAKE A SET FROM adj_list[i] res_clusters = [] for i,v in enumerate(edges): neighborhood = # union ...
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
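The two blanks in stage 1 of the `mcode` template above can be sketched as follows. The adjacency list here is a hypothetical toy example, and `adj_list[i]` is assumed to be a list of neighbor indices:

```python
from collections import defaultdict

# Hypothetical toy adjacency list: vertex index -> list of neighbor indices
adj_list = [[1, 2], [0, 2], [0, 1, 3], [2]]

# "MAKE A SET FROM adj_list[i]": store each vertex's neighbors as a set
edges = defaultdict(set)
for i in range(len(adj_list)):
    edges[i] = set(adj_list[i])

# "union ...": the closed neighborhood of v is v itself plus its neighbors
v = 2
neighborhood = {v} | edges[v]
print(neighborhood)  # {0, 1, 2, 3}
```

Sets make the later neighborhood-density computations in MCODE cheap, since membership tests and unions are O(1) and O(n) respectively.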
Run mcode on the adjacency list for your toy graph, with vwp=0.8. How many clusters did it find? Do the cluster memberships make sense? Load the Krogan et al. network edge-list data as a Pandas data frame
edge_list = pandas.read_csv("shared/krogan.sif", sep="\t", names=["protein1","protein2"])
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
Make an igraph graph and print its summary
krogan_graph = FILL IN HERE krogan_graph.summary()
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
Run mcode on your graph with vwp=0.1
res = FILL IN HERE
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
Get the cluster sizes
FILL IN HERE
_____no_output_____
Apache-2.0
class19_MCODE_python3_template.ipynb
curiositymap/Networks-in-Computational-Biology
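Assuming `res` (as returned by `mcode`) is a list of clusters, one way to fill in the cell above is a comprehension over cluster lengths. The example `res` below is hypothetical:

```python
# Hypothetical mcode result: a list of clusters, each a set of vertex ids
res = [{0, 1, 2}, {3, 4}, {5}]

# The size of each cluster
cluster_sizes = [len(cluster) for cluster in res]
print(cluster_sizes)  # [3, 2, 1]
```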
Test Hypothesis by Simulating Statistics. Mini-Lab 1: Hypothesis Testing. Welcome to your next mini-lab! Go ahead and run the following cell to get started. You can do that by clicking on the cell and then clicking `Run` on the top bar. You can also just press `Shift` + `Enter` to run the cell.
from datascience import * import numpy as np import otter import matplotlib %matplotlib inline import matplotlib.pyplot as plots plots.style.use('fivethirtyeight') grader = otter.Notebook("m7_l1_tests")
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
In the previous two labs we've analyzed some data regarding COVID-19 test cases. Let's continue to analyze this data, specifically _claims_ about this data. Once again, we'll be using fictitious statistics from Blockeley University. Let's say that Blockeley data science faculty are looking at the spread of COVID-19 ac...
test_results = Table.read_table("../datasets/covid19_village_tests.csv") test_results.show(5) ...
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
From here we can formulate our **Null Hypothesis** and **Alternate Hypothesis**. Our *null hypothesis* is that this village truly has a 26% infection rate among the population. Our *alternate hypothesis* is that this village does not in actuality have a 26% infection rate - it's way too low. Now we need our test stat...
def proportion_positive(test_results): numerator = ... denominator = ... return numerator / denominator grader.check("q1")
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
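Sketched without the `datascience` library, the test statistic above is simply the count of positive results over the total number of tests. The `results` values below are invented for illustration:

```python
# Invented test outcomes standing in for a column of the test_results table
results = ["Positive", "Negative", "Negative", "Positive", "Negative"]

# proportion_positive: (# positive results) / (# tests)
numerator = sum(1 for r in results if r == "Positive")
denominator = len(results)
proportion = numerator / denominator
print(proportion)  # 0.4
```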
If you grouped by `Village Number` before, you would realize that there are roughly 3000 tests per village. Let's now create functions that will randomly take 3000 tests from the `test_results` table and apply our test statistic. Complete the `sample_population` and `apply_statistic` functions below! The `sample_popu...
def sample_population(population_table): sampled_population = ... return sampled_population def apply_statistic(sample_table, column_name, statistic_function): return statistic_function(...) grader.check("q2")
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
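A plain-NumPy sketch of what `sample_population` and `apply_statistic` are meant to do, on an invented population (the 26% rate and the sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of test outcomes (True = positive)
population = rng.random(30000) < 0.26

# sample_population: draw 3000 tests without replacement
sample = rng.choice(population, size=3000, replace=False)

# apply_statistic: apply the test statistic (proportion positive) to the sample
statistic = sample.mean()
print(round(statistic, 3))
```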
Now for the simulation portion! Complete the for loop below and fill in a reasonable number for the `iterations` variable. The `iterations` variable will determine just how many random samples that we will take in order to test our hypotheses. There is also code that will visualize your simulation and give you data reg...
# Simulation code below. Fill out this portion! iterations = ... simulations = make_array() for iteration in np.arange(iterations): sample_table = ... test_statistic = ... simulations = np.append(simulations, test_statistic) # This code is to tell you what percentage of our simulations are at or bel...
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
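The simulation loop above can be sketched in plain NumPy. All the numbers here are illustrative stand-ins: a 26% null infection rate, 3000 tests per sample, and an observed proportion of 0.22:

```python
import numpy as np

rng = np.random.default_rng(42)

null_rate, sample_size, observed = 0.26, 3000, 0.22  # assumed values

iterations = 1000  # a "reasonable number" of simulated samples
simulations = np.empty(iterations)
for i in range(iterations):
    # draw one simulated sample under the null hypothesis
    sample = rng.random(sample_size) < null_rate
    simulations[i] = sample.mean()

# Empirical p-value: fraction of simulations at or below the observed value
p_value = np.mean(simulations <= observed)
print(p_value)
```

With these numbers the observed proportion sits several standard deviations below the null rate, so essentially no simulated sample falls at or below it.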
Given our hypothesis test, what can you conclude about the village that reports having a 26% COVID-19 infection rate? Has your hypothesis changed from before? Do you now trust or distrust these numbers? And if you distrust these numbers, what do you think went wrong in the reporting? Congratulations on finishing! Run the...
grader.check_all()
_____no_output_____
MIT
minilabs/test-hypothesis-by-simulating-statistics/m7_l1.ipynb
garath/inferentialthinking
Given running cost $g(x_t,u_t)$ and terminal cost $h(x_T)$ the finite horizon $(t=0 \ldots T)$ optimal control problem seeks to find the optimal control, $$u^*_{1:T} = \text{argmin}_{u_{1:T}} L(x_{1:T},u_{1:T})$$ $$u^*_{1:T} = \text{argmin}_{u_{1:T}} h(x_T) + \sum_{t=0}^T g(x_t,u_t)$$subject to the dynamics constraint:...
# NN parameters Nsamples = 10000 epochs = 500 latent_dim = 1024 batch_size = 8 lr = 3e-4 # Torch environment wrapping gym pendulum torch_env = Pendulum() # Test parameters Nsteps = 100 # Set up model (fully connected neural network) model = FCN(latent_dim=latent_dim,d=torch_env.d,ud=torch_env.ud) optimizer = torch....
_____no_output_____
MIT
Model-based-OC-shooting.ipynb
mgb45/OC-notebooks
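The objective above, $h(x_T) + \sum_t g(x_t, u_t)$ subject to $x_{t+1} = f(x_t, u_t)$, can be evaluated by rolling the dynamics forward from $x_0$ under a control sequence — the heart of a shooting method. A toy sketch with linear dynamics and quadratic costs (illustrative only; not the pendulum/FCN setup used above):

```python
import numpy as np

def rollout_cost(x0, u_seq, A, B, Q, R, Qf):
    """Evaluate the finite-horizon cost by forward rollout (shooting)."""
    x = x0
    cost = 0.0
    for u in u_seq:
        cost += x @ Q @ x + u @ R @ u    # running cost g(x_t, u_t)
        x = A @ x + B @ u                # dynamics x_{t+1} = f(x_t, u_t)
    cost += x @ Qf @ x                   # terminal cost h(x_T)
    return cost

# Toy double-integrator-like system (all values assumptions)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = 0.01 * np.eye(1); Qf = 10 * np.eye(2)
x0 = np.array([1.0, 0.0])
u_seq = np.zeros((100, 1))               # 100 steps of zero control
print(rollout_cost(x0, u_seq, A, B, Q, R, Qf))
```

An optimizer (gradient descent on `u_seq`, as the neural-network setup above does with Adam) would then minimize this rollout cost over the control sequence.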
Generating titles for scientific papers: a weak baseline. Source: https://github.com/bentrevett/pytorch-seq2seq
# If you are running this notebook on Colab, # run the following lines to pull in the dlnlputils library: # !git clone https://github.com/Samsung-IT-Academy/stepik-dl-nlp.git # import sys; sys.path.append('/content/stepik-dl-nlp') import torch import torch.nn as nn import torch.optim as optim import torch.nn.funct...
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Training the model
import matplotlib matplotlib.rcParams.update({'figure.figsize': (16, 12), 'font.size': 14}) import matplotlib.pyplot as plt %matplotlib inline from IPython.display import clear_output def train(model, iterator, optimizer, criterion, clip, train_history=None, valid_history=None): model.train() epoch_...
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Finally, we load the parameters from our best validation loss and get our results on the test set.
# for cpu usage model.load_state_dict(torch.load(MODEL_NAME, map_location=torch.device('cpu'))) # for gpu usage # model.load_state_dict(torch.load(MODEL_NAME)) test_loss = evaluate(model, test_iterator, criterion) print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss...
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Generating titles
def translate_sentence(model, tokenized_sentence): model.eval() tokenized_sentence = ['<sos>'] + [t.lower() for t in tokenized_sentence] + ['<eos>'] numericalized = [TEXT.vocab.stoi[t] for t in tokenized_sentence] sentence_length = torch.LongTensor([len(numericalized)]).to(device) tensor = torch.L...
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Computing BLEU on train.csv
import nltk n_gram_weights = [0.3334, 0.3333, 0.3333] test_len = len(test_data) original_texts = [] generated_texts = [] macro_bleu = 0 for example_idx in range(test_len): src = vars(test_data.examples[example_idx])['src'] trg = vars(test_data.examples[example_idx])['trg'] translation, _ = translate_sente...
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
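The BLEU score computed above averages modified n-gram precisions under `n_gram_weights`. The clipping idea at its core can be sketched in plain Python; the sentences below are made up for illustration:

```python
from collections import Counter

def modified_ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision, the building block of BLEU."""
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    ref_ngrams = Counter(tuple(reference[i:i + n])
                         for i in range(len(reference) - n + 1))
    # Clip each candidate n-gram count by its count in the reference
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return clipped / max(sum(cand_ngrams.values()), 1)

cand = "deep learning for title generation".split()
ref = "deep learning models for title generation".split()
print(modified_ngram_precision(cand, ref, n=1))  # 1.0
print(modified_ngram_precision(cand, ref, n=2))  # 0.75
```

BLEU then takes a weighted geometric mean of these precisions for n = 1..N (hence the three near-equal weights above) and multiplies by a brevity penalty.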
Submitting to Kaggle
import pandas as pd submission_data = pd.read_csv('datasets/test.csv') abstracts = submission_data['abstract'].values
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Generating titles for the test data:
titles = [] for abstract in abstracts: title, _ = translate_sentence(model, abstract.split()) titles.append(' '.join(title).replace('<unk>', ''))
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Writing the generated titles to a comma-separated file:
submission_df = pd.DataFrame({'abstract': abstracts, 'title': titles}) submission_df.to_csv('datasets/predicted_titles.csv', index=False)
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Using the `generate_csv` script, we convert `submission_prediction.csv` into the format required for submission to the Kaggle competition:
from create_submission import generate_csv generate_csv('datasets/predicted_titles.csv', 'datasets/kaggle_pred.csv', 'datasets/vocs.pkl') !wc -l datasets/kaggle_pred.csv !head datasets/kaggle_pred.csv
_____no_output_____
MIT
task11_kaggle/lstm_baseline.ipynb
yupopov/stepik-dl-nlp
Basic Apache Spark Analysis - Ref: https://timw.info/ply - Notebook tutorial: https://timw.info/ekt
# Load NYC Taxi data df = spark.read.load('abfss://defaultfs@twsynapsedls.dfs.core.windows.net/NYCTripSmall.parquet', format='parquet') display(df.limit(10)) # View the dataframe schema df.printSchema() # Load the NYC Taxi data into the Spark nyctaxi database spark.sql("CREATE DATABASE IF NOT EXISTS nyctaxi") df....
_____no_output_____
MIT
demo-resources/Spark-Pool-Notebook.ipynb
vijrqrr9/dp203
Germany: LK Kleve (Nordrhein-Westfalen)* Homepage of project: https://oscovida.github.io* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview(country="Germany", subregion="LK Kleve"); # load the data cases, deaths, region...
_____no_output_____
CC-BY-4.0
ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb
RobertRosca/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyte...
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Germany-Nordrhein-Westfalen-LK-Kleve.ipynb
RobertRosca/oscovida.github.io
MNIST learns your handwriting. This is a small project on using a GAN to generate digits that look like someone else's handwriting when not trained on all digits written by that person. For example, say we had someone write the number 273 and we now want to write 481 in their handwriting. The main inspiration for thi...
import tensorflow as tf from tensorflow_addons.layers import InstanceNormalization import numpy as np import tensorflow.keras.layers as layers import time from tensorflow.keras.datasets.mnist import load_data import sys import os import datetime
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Layers. There are a few custom-made layers. In particular, it is useful to write custom layers for the ones that try to incorporate style, since their inputs are themselves custom: you are inputting an image and a vector representing the style. ResBlk is short for Residual Block, where it is predict...
class ResBlk(tf.keras.Model): def __init__(self, dim_in, dim_out, actv=layers.LeakyReLU(), normalize=False, downsample=False): super(ResBlk, self).__init__() self.actv = actv self.normalize = normalize self.downsample = downsample self.learned_sc = dim_in != dim_out self._buil...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
AdaIN stands for Adaptive Instance Normalization. It is a type of normalization that lets us 'mix' two inputs. In this case we use the style vector to mix with our input x, which is the image or part of the process of constructing this image.
class AdaIn(tf.keras.Model): def __init__(self, style_dim, num_features): super(AdaIn,self).__init__() self.norm = InstanceNormalization() self.lin = layers.Dense(num_features*2) def call(self, x, s): h=self.lin(s) h=tf.reshape(h, [1, tf.shape(h)[0], 1, tf.shape(h)[1]]) gamma,beta=tf.split(...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
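A plain-NumPy sketch of what the AdaIn layer computes, using channel-first toy shapes rather than the Keras tensors above (the gamma/beta values are assumptions for illustration):

```python
import numpy as np

def adain(x, gamma, beta, eps=1e-5):
    """Adaptive instance norm sketch: normalize each channel of x over its
    spatial dims, then scale and shift with style-derived gamma and beta."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    return (1 + gamma) * x_norm + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8, 8))                      # (channels, H, W), toy sizes
gamma = np.array([0.5, 0.0, -0.5]).reshape(3, 1, 1)  # style-derived scale
beta = np.array([1.0, 0.0, -1.0]).reshape(3, 1, 1)   # style-derived shift

out = adain(x, gamma, beta)
print(out[0].mean().round(6), out[0].std().round(6))
```

After the layer, each channel's statistics are set by the style (mean ≈ beta, std ≈ 1 + gamma), which is exactly how the style vector steers the generated image.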
Generator Class. In the generator we have two steps: one to encode the image into lower-level information and one to decode back to the image. In this particular architecture the decoding uses the style to build the image back, as it is an important part of the process. The encoding does not do this, as we have the styl...
class Generator(tf.keras.Model): def __init__(self, img_size=28, style_dim=24, dim_in=8, max_conv_dim=128, repeat_num=2): super(Generator, self).__init__() self.img_size=img_size self.from_bw=layers.Conv2D(dim_in, 3, padding='same', input_shape=(1,img_size,img_size,1)) self.encode=[] self.decode=[...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Mapping Network. The Mapping Network and the Style Encoder are the parts of this architecture that allow style to be analyzed and injected into our images. The mapping network takes as input a latent code (which represents images as vectors in a high-dimensional space) and the domain, in this case th...
class MappingNetwork(tf.keras.Model): def __init__(self, latent_dim=16, style_dim=24, num_domains=10): super(MappingNetwork,self).__init__() map_layers = [layers.Dense(128)] map_layers += [layers.ReLU()] for _ in range(2): map_layers += [layers.Dense(128)] map_layers += [layers.ReLU()] ...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Style Encoder. An important thing to notice about the style encoder is that it takes an image as input and outputs a style vector. Looking at the dimensions of these, we notice we need to flatten the image through the layers. This can usually be done in two ways. By flattening a 2-dimensional input to a 1-dimension...
class StyleEncoder(tf.keras.Model): def __init__(self, img_size=28, style_dim=24, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5): super(StyleEncoder,self).__init__() blocks = [layers.Conv2D(dim_in, 3, padding='same')] for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 ...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Discriminator Class. Similarly to the style encoder, the input of the discriminator is an image, and we need to downsample it until it is one-dimensional.
class Discriminator(tf.keras.Model): def __init__(self, img_size=28, dim_in=16, num_domains=10, max_conv_dim=128, repeat_num=5): super(Discriminator, self).__init__() blocks = [layers.Conv2D(dim_in, 3, padding='same')] for _ in range(repeat_num): #repetition 1 sends to (b,14,14,d) 2 to (b,7,7,d) 3 ...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Loss Functions. The loss functions are an important part of this model, as they describe our goal when training and how to perform gradient descent. The discriminator loss is the regular adversarial loss L_adv used in a GAN architecture, but we add three further loss functions. For this loss functi...
def moving_average(model, model_test, beta=0.999): for i in range(len(model.weights)): model_test.weights[i] = (1-beta)*model.weights[i] + beta*model_test.weights[i] def adv_loss(logits, target): assert target in [1, 0] targets = tf.fill(tf.shape(logits), target) loss = tf.keras.losses.BinaryCross...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
The Model. Here we introduce the class Solver, the most important class, as it represents our whole model. It initializes all of our neural networks and also trains the network.
class Solver(tf.keras.Model): def __init__(self, args): super(Solver, self).__init__() self.args = args self.step=0 self.nets, self.nets_ema = self.build_model(self.args) # below setattrs are to make networks be children of Solver, e.g., for self.to(self.device) for name in self.nets.keys(): ...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Data Loading and Preprocessing
(trainX, trainy), (valX, valy) = load_data() trainX=tf.reshape(trainX, (60000,1,28,28,1)) valX=tf.reshape(valX, (10000,1,28,28,1)) inputs=[] latent_dim=8 for i in range(6000): i=i+36000 if i % 2000==1999: print(i+1) input={} input['x_src']=tf.cast(trainX[i],tf.float32) input['y_src']=int(trainy[i]) n=...
38000 40000 42000
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
ParametersThis dictionary contains the different parameters we use to run the model.
args={'img_size':28, 'style_dim':24, 'latent_dim':16, 'num_domains':10, 'lambda_reg':1, 'lambda_ds':1, 'lambda_sty':10, 'lambda_cyc':10, 'hidden_dim':128, 'resume_iter':0, 'ds_iter':6000, 'total_iters':6000, 'batch_size':8, 'val_batch_size...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Load Model
solv=Solver(args) solv.build_model(args) solv.load(96000)
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Training
with tf.device('/device:GPU:0'): solv.train(inputs, inputs)
Start training... WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the co...
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Results. In this first cell we show an image where the rows represent a source image and the columns the style they are trying to mimic. We can see that the image still highly resembles the source image but has picked up some characteristics of the style of our reference. In most cases this st...
import matplotlib.pyplot as pyplot for i in range(4): pyplot.subplot(5,5,2+i) pyplot.axis('off') pyplot.imshow(np.reshape(inputs[i]['x_ref'],[28,28]), cmap='gray_r') for i in range(4): pyplot.subplot(5, 5, 5*(i+1) + 1) pyplot.axis('off') pyplot.imshow(np.reshape(inputs[i]['x_src'], [28,28]), cmap='gray_r') for j...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Below we generate random styles and look at the outputs. The images are quite likely to be distorted in this case; when using the style of an already existing image, the output usually has noticeably better quality.
for i in range(5): pyplot.subplot(5,5,1+i) pyplot.axis('off') pyplot.imshow(np.reshape(solv.nets['generator'](inputs[0]['x_src'],tf.random.normal((1,24))).numpy(), [28,28]), cmap='gray_r')
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
Here we can see how the image gradually transforms into the target. In these small images not much is changing, but we can still appreciate the process.
s1=solv.nets['style_encoder'](inputs[3]['x_src'],inputs[3]['y_src']) s2=solv.nets['style_encoder'](inputs[3]['x_ref'],inputs[3]['y_ref']) for i in range(5): pyplot.subplot(5,5,1+i) pyplot.axis('off') s=(1-i/5)*s1+i/5*s2 pyplot.imshow(np.reshape(solv.nets['generator'](inputs[3]['x_src'],s).numpy(), [28,28]), cma...
_____no_output_____
MIT
AI-data-Projects/MNIST_GAN/MNIST_Style_GAN_v2.ipynb
nk555/AI-Projects
git-bakup
USER='tonybutzer' API_TOKEN='ATOKEN' GIT_API_URL='https://api.github.com' def get_api(url): try: request = urllib2.Request(GIT_API_URL + url) base64string = base64.encodestring('%s/token:%s' % (USER, API_TOKEN)).replace('\n', '') request.add_header("Authorization", "Basic %s" % base64string...
active-fire
MIT
Attic/repo/git-bakup.ipynb
tonybutzer/etscrum
Euler Problem 94. It is easily proved that no equilateral triangle exists with integral length sides and integral area. However, the almost equilateral triangle 5-5-6 has an area of 12 square units. We shall define an almost equilateral triangle to be a triangle for which two sides are equal and the third ...
a, b, p, s = 1, 0, 0, 0 while p <= 10**9: s += p a, b = 2*a + 3*b, a + 2*b p = 4*a*a a, b, p = 1, 1, 0 while p <= 10**9: s += p a, b = 2*a + 3*b, a + 2*b p = 2*a*a print(s)
518408346
MIT
Euler 094 - Almost equilateral triangles.ipynb
Radcliffe/project-euler
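As a sanity check on the recurrence-based solution above, a brute-force search with Heron's formula (not the method used in the solution, and limited to small perimeters) recovers the 5-5-6 example and the next few almost equilateral triangles with integral area:

```python
from math import isqrt

def integer_area(a, b, c):
    """Return the triangle's area if it is an integer (Heron's formula), else None."""
    s2 = a + b + c                  # twice the semi-perimeter
    if s2 % 2:
        return None                 # odd perimeter: area cannot be an integer here
    s = s2 // 2
    val = s * (s - a) * (s - b) * (s - c)
    r = isqrt(val)
    return r if r * r == val else None

# The almost equilateral triangle 5-5-6 has integral area 12
print(integer_area(5, 5, 6))  # 12

# Small search over triangles a-a-(a±1) with integral area
found = [(a, a, c) for a in range(2, 200)
         for c in (a - 1, a + 1)
         if integer_area(a, a, c)]
print(found)
```

The solutions thin out quickly (the sides follow a Pell-like recurrence), which is why the actual solution iterates the recurrence instead of searching.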
Now You Code 4: Temperature Conversion. Write a Python program which will convert temperatures from Celsius to Fahrenheit. The program should take a temperature in degrees Celsius as input and output a temperature in degrees Fahrenheit. Example:```Enter the temperature in Celsius: 100 100 Celsius is 212 Fahrenheit```...
celsius = float(input("Enter the temperature in Celsius: ")) fahrenheit = (celsius * 9 / 5) + 32 print("Fahrenheit equals %.2f" % fahrenheit)
Enter the temperature in Celsius: 100 Fahrenheit equals 212.00
MIT
content/lessons/03/Now-You-Code/NYC4-Temperature-Conversion.ipynb
MahopacHS/spring2019-Christian64Aguilar
Cross Validation
from keras.models import Sequential from keras.layers import Dense from sklearn.model_selection import StratifiedKFold import numpy as np seed = 7 np.random.seed(seed) dataset = np.loadtxt('pima-indians-diabetes.data', delimiter=',') X = dataset[:, 0:8] Y = dataset[:, 8] X.shape kfold = StratifiedKFold(n_splits=5, shuf...
_____no_output_____
MIT
keras/170605-cross-validation.ipynb
aidiary/notebooks
Convolutional Neural Networks with Keras. In this lab, we will learn how to use the Keras library to build convolutional neural networks. We will also use the popular MNIST dataset and compare our results to a conventional neural network. Objective for this Notebook...
import tensorflow.keras from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.utils import to_categorical
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
When working with convolutional neural networks in particular, we will need additional packages.
from tensorflow.keras.layers import Conv2D # to add convolutional layers from tensorflow.keras.layers import MaxPooling2D # to add pooling layers from tensorflow.keras.layers import Flatten # to flatten data for fully connected layers
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Convolutional Layer with One set of convolutional and pooling layers
# import data from tensorflow.keras.datasets import mnist # load data (X_train, y_train), (X_test, y_test) = mnist.load_data() # reshape to be [samples][pixels][width][height] X_train = X_train.reshape(X_train.shape[0], 28, 28, 1).astype('float32') X_test = X_test.reshape(X_test.shape[0], 28, 28, 1).astype('float32')
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Let's normalize the pixel values to be between 0 and 1
X_train = X_train / 255 # normalize training data X_test = X_test / 255 # normalize test data
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Next, let's convert the target variable into binary categories
y_train = to_categorical(y_train) y_test = to_categorical(y_test) num_classes = y_test.shape[1] # number of categories
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
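`to_categorical` one-hot encodes the integer labels; a minimal NumPy equivalent, on made-up labels, shows what the cell above produces:

```python
import numpy as np

# Toy integer labels standing in for y_train / y_test
y = np.array([0, 2, 1, 2])
num_classes = y.max() + 1

# One-hot encoding: row i of the identity matrix is the encoding of class i
one_hot = np.eye(num_classes)[y]
print(one_hot)
```

Each row has a single 1 in the column of its class, which is the form expected by a softmax output layer with `categorical_crossentropy` loss.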
Next, let's define a function that creates our model. Let's start with one set of convolutional and pooling layers.
def convolutional_model(): # create model model = Sequential() model.add(Conv2D(16, (5, 5), strides=(1, 1), activation='relu', input_shape=(28, 28, 1))) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) model.add(Flatten()) model.add(Dense(100, activation='relu')) model.add...
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Finally, let's call the function to create the model, and then let's train it and evaluate it.
# build the model model = convolutional_model() # fit the model model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2) # evaluate the model scores = model.evaluate(X_test, y_test, verbose=0) print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100))
WARNING:tensorflow:From /home/jupyterlab/conda/envs/python/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance wit...
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
* * * Convolutional Layer with two sets of convolutional and pooling layers. Let's redefine our convolutional model so that it has two sets of convolutional and pooling layers instead of just one of each.
def convolutional_model(): # create model model = Sequential() model.add(Conv2D(16, (5, 5), activation='relu', input_shape=(28, 28, 1))) model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2))) model.add(Conv2D(8, (2, 2), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2), st...
_____no_output_____
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Now, let's call the function to create our new convolutional neural network, and then let's train it and evaluate it.
# build the model model = convolutional_model() # fit the model model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=200, verbose=2) # evaluate the model scores = model.evaluate(X_test, y_test, verbose=0) print("Accuracy: {} \n Error: {}".format(scores[1], 100-scores[1]*100))
Train on 60000 samples, validate on 10000 samples Epoch 1/10 60000/60000 - 47s - loss: 0.4901 - acc: 0.8633 - val_loss: 0.1385 - val_acc: 0.9570 Epoch 2/10 60000/60000 - 47s - loss: 0.1185 - acc: 0.9642 - val_loss: 0.0848 - val_acc: 0.9728 Epoch 3/10 60000/60000 - 47s - loss: 0.0831 - acc: 0.9740 - val_loss: 0.0633 - v...
MIT
2. Intro to Deep Learning & Neural Networks with Keras/4. Convolutional Neural Learning/Convolutional-Neural-Networks-with-Keras-py-v1.0.ipynb
aqafridi/AI-Engineering-Specialization
Multivariable Differential Calculus and its Applications. A function with only one independent variable is called a univariate function. Many practical problems involve several factors at once; mathematically, this means one variable depends on multiple variables. This motivates multivariable functions and the questions of their differentiation and integration. Building on single-variable differential calculus, this chapter discusses the differentiation of multivariable functions and its applications. We focus on functions of two variables, because new issues arise in going from one variable to two, while the extension from two variables to more than two is analogous. This section covers: 1. Basic concepts of multivariable functions 2. Partial derivatives 3. The total differential 4. The chain rule for composite multivariable functions 5. Differentiation formulas for implicit functions 6. Geometric applications of multivariable differential calculus 7. Directional derivatives and the gradient 8. Extrema of multivariable functions and how to find them 9. Taylor's formula for functions of two variables 10. The method of least squares. 1. Basic concepts of multivariable functions 1.1 ...
import numpy as np import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D %matplotlib inline @np.vectorize def f(x, y): return x * y / (x ** 2 + y ** 2) step = 0.05 x_min, x_max = -1, 1 y_min, y_max = -1, 1 x_range, y_range = np.arange(x_min, x_max + step, step), np.ara...
_____no_output_____
MIT
Multivariable Differential Calculus and its Application.ipynb
reata/Calculus
Focus on the point $(0, 0)$: **Limit.** Clearly, when the point $P(x, y)$ approaches $(0,0)$ along the $x$-axis, $$ \lim_{\begin{split}(x,y)\rightarrow (0,0) \\ y=0 \end{split}}f(x,y) = \lim_{x \rightarrow 0}f(x,0) =\lim_{x \rightarrow 0}0 = 0$$ and when the point $P(x, y)$ approaches $(0,0)$ along the $y$-axis, $$ \lim_{\begin{split}(x,y)\rightarrow (0,0) \\ x=0 \end{split}}f(x,y) = \lim_{y \rightarrow 0}...
@np.vectorize def f(x, y): return x * y / np.sqrt(x ** 2 + y ** 2) step = 0.05 x_min, x_max = -1, 1 y_min, y_max = -1, 1 x_range, y_range = np.arange(x_min, x_max + step, step), np.arange(y_min, y_max + step, step) x_mat, y_mat = np.meshgrid(x_range, y_range) z = f(x_mat.reshape(-1), y_mat.reshape(-1)).reshape(x_m...
_____no_output_____
MIT
Multivariable Differential Calculus and its Application.ipynb
reata/Calculus
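A quick numeric check of the claim above: $f(x, y) = xy / (x^2 + y^2)$ has no limit at the origin because its value is constant along each line through the origin, but the constant depends on the line (along $y = kx$ it is $k / (1 + k^2)$):

```python
import numpy as np

def f(x, y):
    return x * y / (x ** 2 + y ** 2)

t = 10.0 ** -np.arange(1, 8)   # points approaching 0

print(f(t, 0 * t))   # along y = 0: all values are 0
print(f(t, t))       # along y = x: all values are 0.5
```

Since the two paths give different limiting values (0 vs 0.5), the two-variable limit at $(0, 0)$ does not exist.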
Data Import and Check
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns; sns.set() from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve from sklearn.linear_model import LogisticRegression from sklearn.metrics import classification_report from sklearn.model_selection impo...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
* I import the data and drop duplicates. * I had tried to set the user id as index. Expectedly, it did not work, as a user can have multiple trips. However, the user-trip combination did not work either, which revealed entire rows were duplicated. * Once the duplicates are removed, the count of user-trip combinations reveals they...
hoppi = pd.read_csv('C:/Users/gurkaali/Documents/Info/Ben/Hop/WatchesTable.csv', sep=",") hoppi.drop_duplicates(inplace = True) hoppi.groupby(['user_id', 'trip_id'])['user_id']\ .count() \ .reset_index(name='count')\ .sort_values(['count'], ascending = False)\ .head(5)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Now that I am sure, I can set the index:
hoppi.set_index(['user_id', 'trip_id'], inplace = True)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Pandas has great features for date calculations. I set the related field types as datetime in case I need those features
hoppi['departure_date'] = pd.to_datetime(hoppi['departure_date'], format = '%m/%d/%y') hoppi['return_date'] = pd.to_datetime(hoppi['return_date'], format = '%m/%d/%y') hoppi['first_search_dt'] = pd.to_datetime(hoppi['first_search_dt'], format = '%m/%d/%y %H:%M') hoppi['watch_added_dt'] = pd.to_datetime(hoppi['watch_add...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
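A self-contained sketch of the same parsing pattern on toy strings (the format codes are the ones used in the notebook: `%m/%d/%y` for dates and `%m/%d/%y %H:%M` for timestamps):

```python
import pandas as pd

# Parse toy date and timestamp strings with explicit format codes.
dates = pd.to_datetime(pd.Series(["4/10/18", "12/31/17"]), format="%m/%d/%y")
stamps = pd.to_datetime(pd.Series(["4/10/18 09:30", "12/31/17 23:59"]),
                        format="%m/%d/%y %H:%M")
```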
The explanations in the assignment do not cover all fields, but the field names and content enable further data verification.
* Stay should be the difference between departure and return dates. Based on that assumption, the query below should return no records, i.e. the 1st item in the tuple returned by shape should be 0:
hoppi['stay2'] = pd.to_timedelta(hoppi['stay'], unit = 'D') hoppi['stay_check'] = hoppi['return_date'] - hoppi['departure_date'] hoppi.loc[(hoppi['stay_check'] != hoppi['stay2']) & (hoppi['return_date'].isnull() == False), \ ['stay2', 'stay_check', 'return_date', 'departure_date']].shape
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
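The same consistency check can be run on toy data. This sketch recomputes the stay from the two dates and confirms there are no mismatches (i.e. `shape[0] == 0`):

```python
import pandas as pd

# Toy version of the stay-consistency check above.
toy = pd.DataFrame({
    "departure_date": pd.to_datetime(["2018-01-01", "2018-02-01"]),
    "return_date":    pd.to_datetime(["2018-01-08", "2018-02-04"]),
    "stay":           [7, 3],   # stay in days, as stored in the file
})
toy["stay2"] = pd.to_timedelta(toy["stay"], unit="D")
toy["stay_check"] = toy["return_date"] - toy["departure_date"]
mismatches = toy.loc[toy["stay_check"] != toy["stay2"]]
```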
The following date fields must not be before the first search date. Therefore the queries below should reveal no records:
* watch_added_dt
* latest_status_change_dt
* first_buy_dt
* last_notif_dt
* forecast_last_warning_date
* forecast_last_danger_date
hoppi.loc[(hoppi['watch_added_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'watch_added_dt']].shape hoppi.loc[(hoppi['latest_status_change_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'latest_status_change_dt']].shape
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
33 records have a first buy suggestion datetime earlier than the user's first search.
hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].shape
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
While the difference is just minutes in most cases, I don't have an explanation to justify it. Given the limited number of cases, I prefer removing them
hoppi.loc[(hoppi['first_buy_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'first_buy_dt']].head() hoppi = hoppi.loc[~(hoppi['first_buy_dt'] < hoppi['first_search_dt'])]
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
There are also 2 records where the last notification is done before the user's first search. I remove those as well
hoppi.loc[(hoppi['last_notif_dt'] < hoppi['first_search_dt']), ['first_search_dt', 'last_notif_dt']] hoppi = hoppi.loc[~(hoppi['last_notif_dt'] < hoppi['first_search_dt'])]
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Same checks on the last warning and last danger dates show 362K+ and 98K+ suspicious records. As the quantity is large and the descriptions sent with the assignment do not contain details on these 2 fields, I prefer to keep them while taking a note here in case something provides an additional argument to delete them duri...
hoppi.loc[(hoppi['forecast_last_warning_date'] < hoppi['first_search_dt']), \ ['first_search_dt', 'forecast_last_warning_date']].shape hoppi.loc[(hoppi['forecast_last_danger_date'] < hoppi['first_search_dt']), \ ['first_search_dt', 'forecast_last_danger_date']].shape
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Check outliers
I reshape the columns in a way that will make working with seaborn easier:
hoppi_box_components = [hoppi[['first_advance']].assign(measurement_type = 'first_advance').reset_index(). \ rename(columns = {'first_advance': 'measurement'}), hoppi[['watch_advance']].assign(measurement_type = 'watch_advance').reset_index(). \ re...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
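An equivalent, shorter way to get the same long format is `DataFrame.melt`. This is a sketch on toy numbers: one label column plus one value column, which seaborn's long-form plotting functions can consume directly.

```python
import pandas as pd

# Melt-based sketch of the reshape above: wide advance columns -> long format
# with a "measurement_type" label and a "measurement" value.
wide = pd.DataFrame({
    "first_advance":   [10, 40, 90],
    "watch_advance":   [5, 35, 80],
    "current_advance": [2, 20, 60],
})
long_form = wide.melt(var_name="measurement_type", value_name="measurement")
```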
While several observations look like outliers on the boxplots, the histograms below show that the data is highly skewed. Therefore I do not consider them as outliers
f, axes = plt.subplots(1, 3, figsize=(15, 5), sharex=True) sns.distplot(hoppi['first_advance'], kde=False, color="#FA6866", ax=axes[0]) sns.distplot(hoppi.loc[hoppi['watch_advance'].isnull() == False, 'watch_advance'], kde=False, color="#01AAE4", ax=axes[1]) sns.distplot(hoppi.loc[hoppi['current_advance'].isnull() == F...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Question 1
Given the business model of Hopper, we should understand who is more likely to buy a ticket eventually. Logistic Regression constitutes a convenient way of conducting such an analysis. It runs faster than an SVM and is easier to interpret, making it ideal for a task like this one: I prepare categorical variables ...
one_hot_trip_type = pd.get_dummies(hoppi['trip_type']) hoppi2 = hoppi.join(one_hot_trip_type)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
I believe the city / airport distinction in the origin and destination fields refers to the fact that some airports are more central, such as the difference between Toronto's Billy Bishop and Pearson airports. I also checked some airport codes; they do correspond to cities where there are multiple airports with one or more be...
origin_cols = hoppi2['origin'].str.split("/", n = 1, expand = True) hoppi2['origin_code'] = origin_cols[1] hoppi2['origin_type'] = origin_cols[0] destination_cols = hoppi2['destination'].str.split("/", n = 1, expand = True) hoppi2['destination_code'] = destination_cols[1] hoppi2['destination_type'] = destination_col...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
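A self-contained sketch of the split above, assuming the origin values look like `"city/YTO"` or `"airport/YYZ"` (type before the slash, code after it — the exact value format is an assumption, the split logic is the notebook's):

```python
import pandas as pd

# Split "type/code" strings into a type column and a code column.
origin = pd.Series(["city/YTO", "airport/YYZ"])   # hypothetical example values
parts = origin.str.split("/", n=1, expand=True)
origin_type = parts[0]
origin_code = parts[1]
```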
I prepare categorical variables for whether a watch is placed or not:
hoppi4.loc[hoppi3['watch_added_dt'].isnull() == True, 'watch_bin'] = 0 hoppi4.loc[hoppi3['watch_added_dt'].isnull() == False, 'watch_bin'] = 1
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Given the user-trip combination being unique across the data file, we do not have information on the changes for a user who has updated his trip status. As the data looks like it covers the last status of a trip, I prefer to focus the analyses on concluded queries, i.e. trips either expired or booked. I exclude:
* actives: ...
hoppi4.loc[hoppi3['status_latest'] == 'expired', 'result'] = 0 hoppi4.loc[hoppi3['status_latest'] == 'booked', 'result'] = 1
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
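The outcome encoding above can be sketched with a `map` on toy statuses: expired becomes 0, booked becomes 1, and everything else (e.g. active trips) is left as NaN and excluded.

```python
import pandas as pd

# Sketch of the result encoding above on toy data.
status = pd.Series(["expired", "booked", "active", "booked"])
result = status.map({"expired": 0, "booked": 1})  # unmapped statuses -> NaN
concluded = result.dropna()                       # keep only concluded trips
```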
A person might be prompted to buy once the price falls because it makes sense or maybe he buys as soon as it starts increasing to avoid further increase. Whatever the case, it makes sense to compare the price at different time points with respect to the original price at first search. For that, I create columns to meas...
hoppi4['dif_last_first'] = hoppi4['last_total'] - hoppi4['first_total'] hoppi4['dif_buy_first'] = hoppi4['first_buy_total'] - hoppi4['first_total'] hoppi4['dif_lowest_first'] = hoppi4['lowest_total'] - hoppi4['first_total']
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
I create a categorical variable for the last recommendation as well, to check whether a buy recommendation makes the user book:
one_hot_last_rec = pd.get_dummies(hoppi4['last_rec'])  # this creates 2 columns: buy and wait
hoppi5 = hoppi4.join(one_hot_last_rec)
hoppi5.loc[hoppi5['last_rec'].isnull(), 'buy'] = np.nan  # originally-null values are given 0; I undo that manipulation here
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
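A toy sketch of the same encoding and NaN-restoration step: `get_dummies` fills missing rows with 0 in every column, so the missingness has to be put back afterwards, as in the notebook.

```python
import numpy as np
import pandas as pd

# Dummy-encode a recommendation column while preserving missingness.
rec = pd.Series(["buy", "wait", np.nan, "buy"], name="last_rec")
dummies = pd.get_dummies(rec)            # NaN rows get 0 in both columns
dummies["buy"] = dummies["buy"].astype(float)
dummies.loc[rec.isnull(), "buy"] = np.nan  # restore missingness
```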
I make a table with rows containing certain results that I want to focus on i.e. expired and booked
hoppi6 = hoppi5.loc[hoppi5['result'].isnull() == False, ['round_trip', 'destination_city', 'origin_city', 'weekend', 'filter_no_lcc', 'filter_non_stop', 'filter_short_layover', 'status_updates', 'watch_bin', 'total_notifs', 'total_buy_notifs', 'buy', 'dif_last_first', ...
<class 'pandas.core.frame.DataFrame'> MultiIndex: 45237 entries, (e42e7c15cde08c19905ee12200fad7cb5af36d1fe3a3310b5f94f95c47ae51cd, 05d59806e67fa9a5b2747bc1b24842189bba0c45e49d3714549fc5df9838ed20) to (d414b1c72a16512dbd7b3859c9c9f574633578acef74d120490625d9010103c7, 3a363a2456b6b7605347e06d2879162b3008004370f73a68f525...
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Some rows have null values, such as the price difference between the buy moment and the first price, as some users may not have got the buy recommendation yet. To cover these features, I keep only non-null rows:
df = hoppi6.dropna() df.info() X = df[['round_trip', 'destination_city', 'origin_city', 'weekend', 'filter_non_stop', 'filter_short_layover', 'status_updates', 'filter_no_lcc', 'watch_bin', 'total_notifs', 'buy', 'total_buy_notifs', 'dif_lowest_first', 'dif_last_first', ...
precision recall f1-score support 0.0 0.99 1.00 0.99 32564 1.0 0.97 0.87 0.92 2743 micro avg 0.99 0.99 0.99 35307 macro avg 0.98 0.93 0.96 35307 weighted avg 0.99 0.99 0.99 ...
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
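A self-contained sketch (synthetic data, not the notebook's dataset) of the point the classification report above illustrates: when the positive class is rare, overall accuracy can look high while minority-class recall is the number worth checking.

```python
# Fit a logistic regression on imbalanced synthetic data and compare
# overall accuracy with minority-class recall.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.randn(2000, 3)
# make class 1 rare (~10%) and dependent on the first two features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.randn(2000) > 1.6).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
rec = recall_score(y_te, model.predict(X_te))   # recall of the rare class
```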
Data-driven insights: The model shows a good level of accuracy. However, given the imbalance of the data (only 8% of it corresponds to an actual booking), it is crucial to check recall, which also shows a high value, i.e. false negatives are limited. Now that we know the model looks robust, we can make the following data-driv...
hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0)].info() pareto_watch_0 = hoppi5.loc[(hoppi5['watch_bin'] == 1.0) & (hoppi5['result'] == 0.0), ['origin_code', 'destination_code']] pareto_watch_0.loc[pareto_watch_0['origin_code'] < pareto_watch_0['destination_code'], \ 'itinerary'] = ...
All observations where the user watched the price but did not book, cover 11697 itineraries. Out of these, 4236 constitute 80% of the whole observation set. That is around 36.2 % of the whole set.
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
The list gives the biggest airports. This result reassures us that it is additionally critical to make reliable estimations for these itineraries. The top 10 itineraries consist only of US destinations, showing the importance of the US market. As we have seen in the previous question, a user setting the watch on is a good ...
dfw = hoppi5.loc[hoppi5['result'].isnull() == False, ['first_advance', 'first_total', 'watch_bin', 'result']] dfw = dfw.dropna() dfw.groupby('watch_bin').agg({'first_advance': np.mean, 'first_total': np.mean})
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
* As the data is skewed, using non-parametric tests makes more sense. I use the Mann-Whitney test for that purpose.
* The test reveals a significant difference between watched and non-watched itineraries at the 0.1 level in terms of the number of days between the departure and the first search. Those who place a watch have a w...
from scipy.stats import mannwhitneyu  # import needed for the test below

stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_advance'], dfw.loc[dfw['watch_bin'] == 0, 'first_advance'])
print('Statistics=%.3f, p=%.3f' % (stat, p))
Statistics=41274130.000, p=0.074
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
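A self-contained sketch of the same Mann-Whitney U comparison on skewed toy samples (the lognormal parameters and group labels are illustrative, not the notebook's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Compare two skewed (lognormal) samples with the Mann-Whitney U test.
rng = np.random.RandomState(42)
watched = rng.lognormal(mean=3.0, sigma=0.5, size=200)      # e.g. watchers
not_watched = rng.lognormal(mean=3.2, sigma=0.5, size=200)  # e.g. non-watchers
stat, p = mannwhitneyu(watched, not_watched)
```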
* The test on the same user groups (those watching vs. those who don't) shows that they differ in terms of the price they get at their first search. The difference is highly significant given the p-value.
* Those who watch have a trip cost of USD 125 more on average.
* There might be a growth opportunity in budget passengers. ...
stat, p = mannwhitneyu(dfw.loc[dfw['watch_bin'] == 1, 'first_total'], dfw.loc[dfw['watch_bin'] == 0, 'first_total']) print('Statistics=%.3f, p=%.3f' % (stat, p))
Statistics=29810391.000, p=0.000
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Question 3
Chart 1: What is the situation as of now compared to PY?
* Note that from the current advance field in the data, I see that we are on April 10th, 2018
* Expired: Watch is on + Current Date > Departure Date
* Inactive: Watch is off + Current Date can be before or after Departure Date
* Active: Watch is on + Current...
date_range = pd.date_range(start='1/1/2018', end='04/10/2018', freq='D') df_date = pd.DataFrame(date_range, columns = ['date_range']) df_date.set_index('date_range', inplace = True)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
incoming traffic counts the number of first-time searches each day:
hoppi5['first_search_dt_dateonly'] = hoppi5['first_search_dt'].dt.date incoming_traffic = hoppi5.groupby(['first_search_dt_dateonly']) \ .size().reset_index() \ .rename(columns = {0: 'count'}) incoming_traffic.set_index('first_search_dt_dateonly', inplace = True)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
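The daily-count pattern above can be sketched with `value_counts` on toy timestamps (equivalent to the groupby-size used in the notebook): truncate timestamps to dates, then count rows per day.

```python
import pandas as pd

# Count first searches per calendar day on toy timestamps.
searches = pd.Series(pd.to_datetime([
    "2018-01-01 09:00", "2018-01-01 18:30", "2018-01-03 07:15",
]))
daily = searches.dt.date.value_counts().sort_index()
```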
outgoing traffic counts, for each day, the number of trips departing that day. Until a trip is considered 'outgoing', there is a chance that it can be converted to a booking:
outgoing_traffic = hoppi5.groupby(['departure_date']) \ .size().reset_index() \ .rename(columns = {0: 'count'}) outgoing_traffic.set_index('departure_date', inplace = True)
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
converted traffic is the number of bookings that took place each day, i.e. conversions:
hoppi5['latest_status_change_dt_dateonly'] = hoppi5['latest_status_change_dt'].dt.date converted_traffic = hoppi5.loc[hoppi5['status_latest'] == 'booked'].groupby(['latest_status_change_dt_dateonly']) \ .size().reset_index() \ .rename(columns = {0: 'count'}) converted_traffic.set_index('latest_...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
I join counts on the date range index created above:
df_chart1 = pd.merge(df_date, incoming_traffic, left_index = True, right_index = True, how='left') df_chart1.rename(columns = {'count': 'incoming_count'}, inplace = True) df_chart2 = pd.merge(df_chart1, outgoing_traffic, left_index = True, right_index = True, how='left') df_chart2.rename(columns = {'count': 'outgoing_c...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
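A toy sketch of the join above: left-joining daily counts onto a complete date range makes days without any activity appear explicitly (as NaN, fillable with 0) instead of being dropped.

```python
import pandas as pd

# Left-join sparse daily counts onto a full date-range index.
dates = pd.date_range("2018-01-01", "2018-01-05", freq="D")
frame = pd.DataFrame(index=dates)
counts = pd.Series({pd.Timestamp("2018-01-02"): 4,
                    pd.Timestamp("2018-01-04"): 1}, name="incoming_count")
joined = frame.join(counts).fillna(0)   # missing days become 0
```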
I plot the chart below. Note that the data collection seems to have started at the beginning of 2018; therefore the outgoing count does not reflect reality in the early periods of the chart. Also, the number of trips whose departure is in the future at a given time could be shown as well. That would show the pool of trip...
sns.set_style('dark') fig, ax1 = plt.subplots(figsize=(15,10)) ax2 = ax1.twinx() sns.lineplot(x=df_chart3['day'], y=df_chart3['incoming_count'], color='#6FC28B', marker = "X", ax=ax1) sns.lineplot(x=df_chart3['day'], y=df_chart3['outgoing_count'], ...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Chart 2: KPIs Affecting Conversion - Categorical KPIs
Categorical variables that turned out to have an impact on conversion are worth following daily. As I suggested for the 1st chart, it makes more sense to compare these with prior-year same-period figures. In this chart we follow the % of people who
* look for a round...
df_chart_perc1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe()
# describe() gives the mean per category.
# As the variables are binary, the mean equals the %.
df_chart_perc2 = df_chart_perc1.loc[['mean'], ['round_trip', 'weekend', 'filter_short_layover', 'watch_bin', 'buy']]
df_chart_perc2 = df_chart_perc2.transpos...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Chart 3: KPIs Affecting Conversion - Ordinal KPIs
In a similar vein to Chart 2, I look at KPIs having an impact on conversion here as well. This time I check ordinal variables. Again, it would make more sense to compare with prior-year same-period figures. In this chart we follow
* average difference between the low...
df_chart_abs1 = hoppi5.loc[hoppi5['departure_date'] >= '04-10-2018'].describe() df_chart_abs2 = df_chart_abs1.loc[['mean'], ['dif_lowest_first', 'dif_last_first', 'dif_buy_first', 'first_advance']] df_chart_abs3 = df_chart_abs2.tra...
_____no_output_____
MIT
Watch Bookings/Watches Table Analytics Exercise.ipynb
nediyonbe/Data-Challenge
Background
This project deals with an artificial advertising data set, indicating whether or not a particular internet user clicked on an advertisement. This dataset can be explored to train a model that can predict whether or not new users will click on an ad based on their various low-level features. This data set c...
import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set_style('white') df_ad = pd.read_csv('Data/advertising.csv') df_ad.head(3) df_ad.info() df_ad.isnull().any() df_ad.describe()
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
EDA
Age distribution of the dataset
sns.set_context('notebook',font_scale=1.5) sns.distplot(df_ad.Age,bins=30,kde=False,color='red') plt.show()
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
pairplot of dataset defined by `Clicked on Ad`
import warnings
warnings.filterwarnings('ignore')
#### since the target variable is numeric, the joint plot by the target variable generates a warning
sns.pairplot(df_ad, hue='Clicked on Ad')
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
Model training: Basic Logistic Regression
from sklearn.model_selection import train_test_split X = df_ad[['Daily Time Spent on Site', 'Age', 'Area Income', 'Daily Internet Usage', 'Male']] y = df_ad['Clicked on Ad'] X_train, X_test, y_train, y_test = train_test_split(X,y,random_state=100)
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
training
from sklearn.linear_model import LogisticRegression lr = LogisticRegression().fit(X_train,y_train)
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
Predictions and Evaluations
from sklearn.metrics import classification_report,confusion_matrix y_predict = lr.predict(X_test) pd.DataFrame(confusion_matrix(y_test,y_predict),index=['True 0','True 1'], columns=['Predicted 0','Predicted 1']) print(classification_report(y_test,y_predict))
precision recall f1-score support 0 0.86 0.92 0.89 119 1 0.93 0.86 0.89 131 micro avg 0.89 0.89 0.89 250 macro avg 0.89 0.89 0.89 250 weighted avg 0.89 0.89 0.89 ...
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
Model training: Optimized Logistic Regression
from sklearn.preprocessing import StandardScaler from sklearn.model_selection import GridSearchCV scaler = StandardScaler().fit(X_train) X_train_scaled = scaler.transform(X_train) x_test_scaled = scaler.transform(X_test)
_____no_output_____
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
3-fold CV grid search
grid_param = {'C':[0.01,0.03,0.1,0.3,1,3,10]} grid_lr = GridSearchCV(LogisticRegression(),grid_param,cv=3).fit(X_train_scaled,y_train) print('best regularization parameter: {}'.format(grid_lr.best_params_)) print('best CV score: {}'.format(grid_lr.best_score_.round(3)))
best regularization parameter: {'C': 0.3} best CV score: 0.971
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
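An alternative sketch of the scale-then-search flow above (synthetic data, not the ad dataset): wrapping the scaler and classifier in a `Pipeline` refits the scaler inside each CV fold, which avoids leaking validation-fold statistics into the scaling step.

```python
# Grid-search C with scaling done inside each CV fold via a Pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
pipe = Pipeline([("scaler", StandardScaler()),
                 ("lr", LogisticRegression())])
grid = GridSearchCV(pipe, {"lr__C": [0.01, 0.1, 1, 10]}, cv=3).fit(X, y)
```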
Predictions and Evaluations
y_predict_2 = grid_lr.predict(x_test_scaled) pd.DataFrame(confusion_matrix(y_test,y_predict_2),index=['True 0','True 1'], columns=['Predicted 0','Predicted 1']) print(classification_report(y_test,y_predict_2))
precision recall f1-score support 0 0.94 1.00 0.97 119 1 1.00 0.94 0.97 131 micro avg 0.97 0.97 0.97 250 macro avg 0.97 0.97 0.97 250 weighted avg 0.97 0.97 0.97 ...
MIT
Mini capstone projects/Ad click prediction_Logistic Regression.ipynb
sungsujaing/DataScience_MachineLearning_Portfolio
Demo Notebook: The Continuous-Function Estimator
Tophat and Spline bases on a periodic box
Hello! In this notebook we'll show you how to use the continuous-function estimator to estimate the 2-point correlation function (2pcf) with a method that produces, well, continuous correlation functions.
Load in data
We'll demo...
x, y, z = read_lognormal_catalog(n='3e-4') boxsize = 750.0 nd = len(x) print("Number of data points:",nd)
Number of data points: 125342
MIT
example_theory.ipynb
abbyw24/Corrfunc
We'll also want a random catalog that's a bit bigger than our data:
nr = 3*nd x_rand = np.random.uniform(0, boxsize, nr) y_rand = np.random.uniform(0, boxsize, nr) z_rand = np.random.uniform(0, boxsize, nr) print("Number of random points:",nr) print(x) print(x_rand)
[1.13136184e+00 4.30035293e-01 2.08324015e-01 ... 7.49666077e+02 7.49922791e+02 7.49938477e+02] [567.62600303 166.85340522 461.79238824 ... 577.65066275 9.85155819 581.1525008 ]
MIT
example_theory.ipynb
abbyw24/Corrfunc
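The random-catalog construction above can be sketched self-containedly (the `boxsize` and `nd` values are taken from the notebook; the RNG seed is an assumption added for reproducibility):

```python
import numpy as np

# Uniform random catalog, 3x the data size, on the periodic box.
boxsize = 750.0
nd = 125342
nr = 3 * nd
rng = np.random.RandomState(0)   # seeded for reproducibility (assumption)
x_rand = rng.uniform(0, boxsize, nr)
y_rand = rng.uniform(0, boxsize, nr)
z_rand = rng.uniform(0, boxsize, nr)
```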