# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from utils import *
SPIDS = ['511285','800096','511706','636669','511754','636713','511793','636805','253495','253168',
'42005','62437','75334','96749','122860','133160','160688','188490','215206','243199','258525','290941'
,'312624','345508','383952','564376','860435','1008992','1076034','1148170','1277741','1390334',
'1571941','1835449','2009916','2348283','2484028','2600428','2766462','2911451','3077136']
# +
# This script scrapes the ACM DL and downloads all the proceedings,
# and then from each proceedings it downloads all the published articles
# appearing in the proceedings (keynotes, full papers, short papers, demos, tutorials).
# Note that the 1990 proceedings is labelled as being published in 1989.
# I manually changed the HTML for this one to have the correct year in it.
spfiles = []
# Only download papers that have not already been downloaded
for spid in SPIDS:
    sigir_url = "http://dl.acm.org/citation.cfm?id={0}&preflayout=flat".format(spid)
    sigir_head_file = make_sp_name(spid)
    spfiles.append(sigir_head_file)
    if not os.path.isfile(sigir_head_file):
        print("Downloading " + sigir_head_file)
        download(sigir_url, sigir_head_file)
        time.sleep(1)
spc = []
count = 0
for spid in SPIDS:
    sigir_head_file = make_sp_name(spid)
    papers = parse_out_papers(sigir_head_file, spid)
    year = parse_out_year(sigir_head_file)
    # for each paper in the proceedings, download the paper
    for p in papers:
        pid = get_paper_id(p)
        paper_file = 'data/{0}.html'.format(pid)
        if not os.path.isfile(paper_file):
            download(p, paper_file)
            time.sleep(2)
    print("Downloaded: {0} papers from proceedings: {1} from year: {2} no.{3}".format(len(papers), spid, year, count))
    count += 1
# +
# Process each proceedings and extract the year, refs and cites for each paper.
counts = []
spc = []
for spid in SPIDS:
    sigir_head_file = make_sp_name(spid)
    papers = parse_out_papers(sigir_head_file, spid)
    # for each paper in the proceedings, count its references and citations
    for p in papers:
        pid = get_paper_id(p)
        paper_file = 'data/{0}.html'.format(pid)
        [year, refs, cites] = count_references(paper_file)
        counts.append([pid, year, refs, cites])
    spc.append(len(papers))
# Save the counts data to file
with open("data/counts.txt", "w") as f:
    for c in counts:
        f.write("{0} {1} {2} {3}\n".format(c[0], c[1], c[2], c[3]))
# -
| Scrape ACM DL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "Revisiting Brown patch dataset and benchmark"
# > "How to create a useful development set"
# - toc: false
# - image: images/brown_phototour_revisited.jpg
# - branch: master
# - badges: true
# - comments: true
# - hide: false
# - search_exclude: false
# ### In this post
#
# 1. Why does one need a good development set? What is wrong with the existing sets for local patch descriptor learning?
# 2. One should validate the same way the method is used in production.
# 3. Brown patch dataset revisited -- implementation details.
# 4. Local patch descriptor evaluation results.
# ## Really quick intro into local patch descriptors
#
#
# A local patch descriptor helps you automatically decide whether two patches in a pair of images correspond to the same point in the real world. It should be robust to illumination, viewpoint and other changes.
#
# 
#
# There are many ways to implement a local patch descriptor, both engineered and learned.
# The local patch descriptor is a crucial component of the [wide baseline stereo pipeline](https://ducha-aiki.github.io/wide-baseline-stereo-blog/2020/03/27/intro.html) and a popular computer vision research topic.
# ## Why do you need a development set?
#
# Good data is crucial for any machine learning problem -- everyone knows that by now.
# One needs a high-quality training set to train a good model. One also needs a good test set to know the _real_ performance. However, there is one more, often forgotten, crucial component -- the **validation** or **development** set. We use it to pick hyperparameters and validate the design choices we make. It should be different from both the training and test sets, yet be a good predictor of test-set performance.
# Moreover, it should allow fast iterations, so it cannot be too large.
#
# While such a set is commonly called the [validation set](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets), I prefer Andrew Ng's term "[development](https://cs230.stanford.edu/files/C2M1.pdf)" set, because it helps you *develop* your model.
#
#
# # Existing datasets for local patch descriptors
# So, what are the development set options for local patch descriptors?
#
# ### Brown PhotoTourism.
#
# 
#
# The most commonly and successfully used dataset for local descriptor learning is PhotoTourism, created in 2008. Here is its [description by authors](http://matthewalunbrown.com/patchdata/patchdata.html):
#
#
# >The dataset consists of corresponding patches sampled from 3D reconstructions of the Statue of Liberty (New York), Notre Dame (Paris) and Half Dome (Yosemite).
#
# It also comes with an evaluation protocol: patch pairs are labeled as "same" or "different", and the false positive rate at 95% recall (FPR95) is reported. The variable used to build the ROC curve is the descriptor distance between the two patches.
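# To make the metric concrete, here is a minimal sketch (a hypothetical helper, not the official evaluation code) of computing FPR95 from descriptor distances and same/different labels:

```python
import numpy as np

def fpr_at_95_recall(distances, is_same):
    """False positive rate at the distance threshold giving 95% recall.

    distances: descriptor distance for each patch pair (lower = more similar)
    is_same:   True if the pair depicts the same 3D point
    """
    distances = np.asarray(distances, dtype=float)
    is_same = np.asarray(is_same, dtype=bool)
    # Threshold at the 95th percentile of positive-pair distances:
    # 95% of the true matches fall at or below it (recall = 0.95).
    thr = np.percentile(distances[is_same], 95)
    # Fraction of non-matching pairs accepted at that threshold.
    return float((distances[~is_same] <= thr).mean())
```

# Lower is better: a perfect descriptor separates the two distance distributions and gets an FPR95 close to zero.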
#
# Advantages:
#
# - It contains local patches extracted with two types of local feature detectors -- DoG (SIFT) and Harris corners.
# - It is extracted from images containing non-planar structures, and the geometrical noise present is caused by the local feature detector, not added artificially.
# - Descriptors trained on the dataset show very good performance \cite{IMW2020}, so the data itself is good.
#
#
# Disadvantages:
#
# - when used as a benchmark, it shows unrealistic results: SIFT appears 40x worse than deep learned descriptors, while in practice the difference is much smaller.
# ### HPatches
#
# [HPatches](https://github.com/hpatches/hpatches-dataset), where H stands for "[homography](https://en.wikipedia.org/wiki/Homography)", was proposed to overcome the problems of the unrealistic metric and the seemingly too easy data used in the PhotoTourism dataset.
#
# It was constructed differently from PhotoTourism. First, local features were detected in the "reference" image and then reprojected to the other images in the sequence. Reprojection is possible because all the images are photographs of planes -- graffiti, drawings, prints, etc. -- or are all taken from the same position.
# After the reprojection, some geometrical noise -- rotation, translation, scaling -- was added to the local features, and the patches were extracted.
#
# This process is illustrated in the pictures below (both taken from the [HPatches website](https://github.com/hpatches/hpatches-dataset)).
#
#
# 
# 
#
#
# HPatches also provides 3 testing protocols, evaluating mean average precision (mAP) on 3 different tasks: patch verification (similar to Brown PhotoTourism), image matching, and patch retrieval. The variable used to build mAP is the descriptor distance between two patches.
#
# Advantages:
#
# - Unlike PhotoTourism patch verification, the image matching and patch retrieval tasks are not saturated.
# - HPatches contains an illumination split, allowing the evaluation of descriptor robustness to illumination changes.
#
# Disadvantages:
#
# - the patch "misregistration" noise is artificial, although the paper claims it has similar statistics to real detector noise
# - no non-planar structures
# - performance on HPatches does not really correlate with downstream performance \cite{IMW2020}
#
# 
# +
#hide
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
size = 15
params = {'legend.fontsize': 'large',
          'figure.figsize': (8, 4),
          'axes.labelsize': size,
          'axes.titlesize': size,
          'xtick.labelsize': size,
          'ytick.labelsize': size,
          'axes.titlepad': 25}
plt.rcParams.update(params)
# From https://arxiv.org/pdf/1904.05019.pdf
results_hp = {}
results_hp['SIFT'] = 24.42
results_hp['RootSIFT'] = 27.22
results_hp['ORB'] = 15.32
results_hp['L2Net'] = 45.04
results_hp['HardNet+'] = 52.76
results_hp['SoSNet+'] = 53.75
results_hp['HardNetPS'] = 58.16
results_hp['GeoDesc'] = 59.15
results_IMC = {}
results_IMC['ORB'] = 16.74
results_IMC['SIFT'] = 45.84
results_IMC['RootSIFT'] = 49.30
results_IMC['L2Net'] = 52.95
results_IMC['HardNet+'] = 55.43
results_IMC['HardNetPS'] = 50.51
results_IMC['SoSNet+'] = 55.87
results_IMC['GeoDesc'] = 51.11
leg = []
for k, v in results_hp.items():
    plt.plot(v, results_IMC[k], 'o', label=k, markersize=10)
    leg.append(k)
plt.legend(leg)
plt.xlabel('matching mAP on HPatches')
plt.ylabel('mAA at IMC')
plt.ylim([0,60.])
plt.grid('on')
# -
# ### AMOSPatches
#
# [AMOS patches](https://github.com/pultarmi/AMOS_patches) is "HPatches illumination on steroids, without geometrical noise". It has the same advantages and disadvantages as HPatches and is mostly focused on illumination and weather changes.
#
#
# 
# 
#
#
# ### PhotoSynth
#
#
# [PhotoSynth](https://github.com/rmitra/PS-Dataset) can be described as something in between PhotoTour and HPatches. It contains patches sampled from planar as well as non-planar scenes.
#
# At first glance, it should be great for both testing and training purposes. However, there are several issues with it.
# First, the pre-trained HardNetPS descriptor, released together with the dataset, works well on HPatches but poorly in practice \cite{pultar2020improving}.
#
# Second, a couple of colleagues have tried to train a descriptor on it, and the result was significantly worse than the authors' reference model. Moreover, no training/testing code or protocol is available together with the dataset.
#
# So, while PhotoSynth might be a good dataset in principle, it definitely needs more love and work.
#
# 
#
# 
# ## Designing the evaluation protocol
#
# Classical local descriptor matching consists of two parts: finding nearest neighbors and filtering out unreliable ones based on some criterion.
# I have written a [blogpost describing the matching strategies in detail](https://medium.com/@ducha.aiki/how-to-match-to-learn-or-not-to-learn-part-2-1ab52ede2022).
#
#
# The criterion most used in practice is a threshold on the first-to-second nearest neighbor distance ratio (Lowe's ratio) for filtering false positive matches. It is shown in the figure below.
#
# The intuition is simple: if the two candidates are too similar, the match is unreliable and it is better to drop it.
#
# 
#
# Somehow, none of the local patch evaluation protocols takes such a filtering criterion into account, although it greatly influences the overall performance.
#
# So, let's do the following:
#
# 1. Take the patches extracted from only two images.
# 2. For each patch, calculate the descriptor distance to the correct match and to the hardest (closest) non-match. Calculate the Lowe's ratio between the two.
# 3. Score each such triplet: if the correct match has the smaller distance, score 1; if not, 0.
# 4. Sort the ratios from smallest to largest and calculate [mean average precision](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Mean_average_precision) (mAP).
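# The four steps can be sketched as follows (my reading of the protocol; the actual package implementation may differ in details):

```python
import numpy as np

def triplet_mAP(d_pos, d_neg):
    """Average precision over (anchor, correct match, hardest non-match) triplets.

    d_pos: distance from each anchor to its correct match
    d_neg: distance from each anchor to its hardest (closest) non-match
    """
    d_pos, d_neg = np.asarray(d_pos, float), np.asarray(d_neg, float)
    correct = (d_pos < d_neg).astype(float)       # step 3: does the true match win?
    first = np.minimum(d_pos, d_neg)              # nearest of the two candidates
    second = np.maximum(d_pos, d_neg)
    ratio = first / np.maximum(second, 1e-12)     # step 2: Lowe's SNN ratio
    order = np.argsort(ratio)                     # step 4: most confident first
    correct = correct[order]
    precision_at_k = np.cumsum(correct) / np.arange(1, len(correct) + 1)
    return float((precision_at_k * correct).sum() / max(correct.sum(), 1.0))
```

# A descriptor scores highly when its confident (low-ratio) matches are also the correct ones.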
# ## Brown PhotoTour Revisited: implementation details
#
#
# We have designed the protocol; now it is time for data. We could spend several months collecting and cleaning it... or we can just re-use the great Brown PhotoTourism dataset. Revisiting the labeling and/or evaluation protocol of a time-tested dataset is a great idea.
#
# Just a couple of examples: [ImageNette](https://github.com/fastai/imagenette), created by [<NAME>](https://twitter.com/jeremyphoward) from ImageNet, [Revisited Oxford 5k](http://cmp.felk.cvut.cz/revisitop/) by [<NAME>](https://filipradenovic.github.io/), and so on.
#
# For the protocol designed above we need to know the id of the image each patch was extracted from. Unfortunately, there is no such information in Brown PhotoTourism, but there is a suitable alternative -- the id of the image where the reference patch was first detected. What does that mean?
#
# Suppose we have 4 images and 5 keypoints. All keypoints are present in all images, which gives us 20 patches.
# 3 keypoints were first detected in image 1 and 2 keypoints in image 2.
# That means we will have 12 patches labeled image 1 and 8 patches labeled image 2.
#
# So we will have results for image 1 and image 2. Let's consider image 1. There are 12 patches, split into 3 "classes", with 4 patches in each class.
#
# Then, for each of those 12 patches (the anchor A) we:
#
# - pick each of the corresponding patches as positives, so 3 positives: $P_1$, $P_2$, $P_3$
# - find the closest negative N
# - add the triplets (A, $P_1$, N), (A, $P_2$, N), (A, $P_3$, N) to the evaluation.
#
# We repeat the same for image 2.
# This mimics the two-view matching process as closely as possible, given the data available to us.
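# A minimal sketch of this triplet construction for one reference image (a hypothetical helper; the real pipeline works on patches, here we use ready-made descriptors):

```python
import numpy as np

def build_triplets(labels, descs):
    """Build (anchor, positive, hardest-negative) index triplets.

    labels: 3D-point "class" id per patch
    descs:  (N, D) descriptor array for the same patches
    """
    labels = np.asarray(labels)
    # Pairwise Euclidean descriptor distances.
    d = np.linalg.norm(descs[:, None] - descs[None, :], axis=2)
    triplets = []
    for a in range(len(labels)):
        same = labels == labels[a]
        # Hardest negative: closest patch from a different 3D point.
        neg_d = np.where(same, np.inf, d[a])
        n = int(np.argmin(neg_d))
        for p in np.where(same)[0]:
            if p != a:  # every corresponding patch becomes a positive
                triplets.append((a, p, n))
    return triplets
```

# For the 12-patch example above, this yields 3 triplets per anchor, all sharing the anchor's single hardest negative.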
# ## Installation
#
# `pip install brown_phototour_revisited`
#
# ## How to use
#
# There is a single function which does everything for you: `full_evaluation`. The original Brown benchmark uses an evaluation similar to cross-validation: train the descriptor on one subset, evaluate on the two others, and repeat for all subsets, so 6 evaluations are required. For handcrafted descriptors, or those trained on 3rd-party datasets, only 3 evaluations are necessary. We follow the same scheme here.
#
# However, if you need to run some tests separately, or reuse some functions, the usage is covered below.
# In the following example we show how to use `full_evaluation` to evaluate the SIFT descriptor as implemented in kornia.
#
# ```
# # # !pip install kornia
# ```
#
# ```
# import torch
# import kornia
# from IPython.display import clear_output
# from brown_phototour_revisited.benchmarking import *
# patch_size = 65
#
# model = kornia.feature.SIFTDescriptor(patch_size, rootsift=True).eval()
#
# descs_out_dir = 'data/descriptors'
# download_dataset_to = 'data/dataset'
# results_dir = 'data/mAP'
#
# results_dict = {}
# results_dict['Kornia RootSIFT'] = full_evaluation(model,
#                                                   'Kornia RootSIFT',
#                                                   path_to_save_dataset = download_dataset_to,
#                                                   path_to_save_descriptors = descs_out_dir,
#                                                   path_to_save_mAP = results_dir,
#                                                   patch_size = patch_size,
#                                                   device = torch.device('cuda:0'),
#                                                   distance='euclidean',
#                                                   backend='pytorch-cuda')
# clear_output()
# print_results_table(results_dict)
# ```
# ------------------------------------------------------------------------------
# Mean Average Precision wrt Lowe SNN ratio criterion on UBC Phototour Revisited
# ------------------------------------------------------------------------------
# trained on liberty notredame liberty yosemite notredame yosemite
# tested on yosemite notredame liberty
# ------------------------------------------------------------------------------
# Kornia RootSIFT 56.70 47.71 48.09
# ------------------------------------------------------------------------------
# ## Results
#
# So, let's check how it goes. The latest results and implementation are in the following notebooks:
#
# - [Deep descriptors](https://github.com/ducha-aiki/brown_phototour_revisited/blob/master/examples/evaluate_deep_descriptors.ipynb)
# - [Non-deep descriptors](https://github.com/ducha-aiki/brown_phototour_revisited/blob/master/examples/evaluate_non_deep_descriptors.ipynb)
#
# The results are the following:
#
#
# ------------------------------------------------------------------------------
# Mean Average Precision wrt Lowe SNN ratio criterion on UBC Phototour Revisited
# ------------------------------------------------------------------------------
# trained on liberty notredame liberty yosemite notredame yosemite
# tested on yosemite notredame liberty
# ------------------------------------------------------------------------------
# Kornia RootSIFT 32px 58.24 49.07 49.65
# HardNet 32px 70.64 70.31 61.93 59.56 63.06 61.64
# SOSNet 32px 70.03 70.19 62.09 59.68 63.16 61.65
# TFeat 32px 65.45 65.77 54.99 54.69 56.55 56.24
# SoftMargin 32px 69.29 69.20 61.82 58.61 62.37 60.63
# HardNetPS 32px 55.56 49.70 49.12
# R2D2_center_grayscal 61.47 53.18 54.98
# R2D2_MeanCenter_gray 62.73 54.10 56.17
# ------------------------------------------------------------------------------
#
# ------------------------------------------------------------------------------
# Mean Average Precision wrt Lowe SNN ratio criterion on UBC Phototour Revisited
# ------------------------------------------------------------------------------
# trained on liberty notredame liberty yosemite notredame yosemite
# tested on yosemite notredame liberty
# ------------------------------------------------------------------------------
# Kornia SIFT 32px 58.47 47.76 48.70
# OpenCV_SIFT 32px 53.16 45.93 46.00
# Kornia RootSIFT 32px 58.24 49.07 49.65
# OpenCV_RootSIFT 32px 53.50 47.16 47.37
# OpenCV_LATCH 65px ----- ----- ----- 37.26 ----- 39.08
# OpenCV_LUCID 32px 20.37 23.08 27.24
# skimage_BRIEF 65px 52.68 44.82 46.56
# Kornia RootSIFTPCA 3 60.73 60.64 50.80 50.24 52.46 52.02
# MKD-concat-lw-32 32p 72.27 71.95 60.88 58.78 60.68 59.10
# ------------------------------------------------------------------------------
#
# So far, this agrees with the IMC benchmark: SIFT and RootSIFT are good but not the best; SOSNet and HardNet are the leaders, but by tens of percent, not by orders of magnitude.
#
#
# 
#
#
# ### Disclaimer 1: don't trust these tables fully
#
#
# I haven't (yet!) checked whether all the deep descriptor models trained on Brown were trained with the flip and 90-degree rotation augmentation. In the evaluation I assume that they were; however, this might not be true, and the comparison might not be completely fair. I will do my best to check it, but if you know that I have used the wrong weights, please [open an issue](https://github.com/ducha-aiki/brown_phototour_revisited/issues). Thank you.
#
#
# ### Disclaimer 2: it is not a "benchmark"
#
#
# The intended usage of the package is not to test descriptors and report the numbers in a paper. Instead, think of it as a cross-validation tool helping development. Thus, one CAN tune hyperparameters based on these results instead of doing so on [HPatches](https://github.com/hpatches/hpatches-benchmark). After you have finished tuning, please evaluate your local descriptors on some downstream task like the [IMC image matching benchmark](https://github.com/vcg-uvic/image-matching-benchmark) or [visual localization](https://www.visuallocalization.net/).
# ## Summary
#
# It really pays off to spend time designing a proper evaluation pipeline and gathering the data for it. If you can re-use existing work -- great.
# But don't blindly trust anything, even super-popular and widely adopted benchmarks. You always need to check whether the protocol and data make sense for your use-case.
#
# Thanks for reading, see you soon!
# ## Citation
#
# If you use the benchmark/development set in an academic work, please cite it.
#
#     @misc{BrownRevisited2020,
#       title={UBC PhotoTour Revisited},
#       author={<NAME>},
#       year={2020},
#       url={https://github.com/ducha-aiki/brown_phototour_revisited}
#     }
# # References
#
# [<a id="cit-IMW2020" href="#call-IMW2020">IMW2020</a>] <NAME>, <NAME>, <NAME> <em>et al.</em>, ``_Image Matching across Wide Baselines: From Paper to Practice_'', arXiv preprint arXiv:2003.01587, vol. , number , pp. , 2020.
#
# [<a id="cit-pultar2020improving" href="#call-pultar2020improving">pultar2020improving</a>] <NAME>, ``_Improving the HardNet Descriptor_'', arXiv ePrint:2007.09699, vol. , number , pp. , 2020.
#
#
| _notebooks/2020-09-23-local-descriptors-validation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CNN: 120C5-200C3-MP2-100N-10N
# DATA
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
# TRUE data for each iteration
import tensorflow as tf
x = tf.placeholder(tf.float32, shape=[None, 784])# input
y_ = tf.placeholder(tf.float32, shape=[None, 10])
# +
# Define functions
# weight init
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
# layers
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')  # "SAME": output size matches input size
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# +
# MODEL
# First Layer
W_conv1 = weight_variable([5, 5, 1, 120]) # 120C5
b_conv1 = bias_variable([120])#
# reshape to 4d tensor
x_image = tf.reshape(x, [-1, 28, 28, 1])# 28x28 image shape, with color channels=1
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)# 120C5
# Second layer
W_conv2 = weight_variable([3,3, 120, 200])# 200C3
b_conv2 = bias_variable([200])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)# 200C3
h_pool2 = max_pool_2x2(h_conv2)# MP2
# fully-connected layer with 100 neurons
W_fc1 = weight_variable([14 * 14 * 200, 100]) # 100N
b_fc1 = bias_variable([100])
h_pool2_flat = tf.reshape(h_pool2, [-1, 14*14*200])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# Readout Layer
W_fc2 = weight_variable([100, 10])# 100N to 10 classes
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
# +
# TRAIN
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={
                x: batch[0], y_: batch[1], keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
# -
| tensorflow/models/tf_cnn_120C5-200C3-MP2-100N-10N.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .ps1
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PowerShell
# name: powershell
# ---
# + [markdown] azdata_cell_guid="0752ac77-f1c8-4c4d-8033-4debcbaa03b5"
# # About pwsh
#
# ## Short Description
# Explains how to use the **pwsh** command-line tool. Displays the syntax and
# describes the command-line switches.
#
# ## Long Description
#
# ## Syntax
#
# ```
# pwsh[.exe]
# [[-File] <filePath> [args]]
# [-Command { - | <script-block> [-args <arg-array>]
# | <string> [<CommandParameters>] } ]
# [-ConfigurationName <string>]
# [-CustomPipeName <string>]
# [-EncodedCommand <Base64EncodedCommand>]
# [-ExecutionPolicy <ExecutionPolicy>]
# [-InputFormat {Text | XML}]
# [-Interactive]
# [-Login]
# [-MTA]
# [-NoExit]
# [-NoLogo]
# [-NonInteractive]
# [-NoProfile]
# [-OutputFormat {Text | XML}]
# [-SettingsFile <SettingsFilePath>]
# [-STA]
# [-Version]
# [-WindowStyle <style>]
# [-WorkingDirectory <directoryPath>]
#
# pwsh[.exe] -h | -Help | -? | /?
# ```
#
# ## Parameters
#
# All parameters are case-insensitive.
#
# ### -File | -f
#
# If the value of **File** is `-`, the command text is read from standard input.
# Running `pwsh -File -` without redirected standard input starts a regular
# session. This is the same as not specifying the **File** parameter at all.
#
# This is the default parameter if no parameters are present but values are
# present in the command line. The specified script runs in the local scope
# ("dot-sourced"), so that the functions and variables that the script creates
# are available in the current session. Enter the script file path and any
# parameters. File must be the last parameter in the command, because all
# characters typed after the File parameter name are interpreted as the script
# file path followed by the script parameters.
#
# Typically, the switch parameters of a script are either included or omitted.
# For example, the following command uses the All parameter of the
# Get-Script.ps1 script file: `-File .\Get-Script.ps1 -All`
#
# In rare cases, you might need to provide a Boolean value for a switch
# parameter. To provide a Boolean value for a switch parameter in the value of
# the File parameter, use the parameter normally, followed immediately by a colon
# and the Boolean value, as in the following:
# `-File .\Get-Script.ps1 -All:$False`.
#
# Parameters passed to the script are passed as literal strings, after
# interpretation by the current shell. For example, if you are in **cmd.exe** and
# want to pass an environment variable value, you would use the **cmd.exe**
# syntax: `pwsh -File .\test.ps1 -TestParam %windir%`
#
# In contrast, running `pwsh -File .\test.ps1 -TestParam $env:windir`
# in **cmd.exe** results in the script receiving the literal string `$env:windir`
# because it has no special meaning to the current **cmd.exe** shell. The
# `$env:windir` style of environment variable reference _can_ be used inside a
# **Command** parameter, since there it is interpreted as PowerShell code.
#
# Similarly, if you want to execute the same command from a **Batch script**, you
# would use `%~dp0` instead of `.\` or `$PSScriptRoot` to represent the current
# execution directory: `pwsh -File %~dp0test.ps1 -TestParam %windir%`.
# If you instead used `.\test.ps1`, PowerShell would throw an error because it
# cannot find the literal path `.\test.ps1`.
#
# ### -Command | -c
#
# Executes the specified commands (and any parameters) as though they were typed
# at the PowerShell command prompt, and then exits, unless the **NoExit**
# parameter is specified.
#
# The value of **Command** can be `-`, a script block, or a string. If the value
# of **Command** is `-`, the command text is read from standard input.
#
# The **Command** parameter only accepts a script block for execution when it can
# recognize the value passed to **Command** as a **ScriptBlock** type. This is
# _only_ possible when running **pwsh** from another PowerShell host. The
# **ScriptBlock** type may be contained in an existing variable, returned from an
# expression, or parsed by the PowerShell host as a literal script block enclosed
# in curly braces `{}`, before being passed to **pwsh**.
#
#
# + azdata_cell_guid="0d3820c7-1001-4718-8608-097ec028ec46"
pwsh -Command {Get-WinEvent -LogName security}
# + [markdown] azdata_cell_guid="a9ceee1b-340f-45ee-845e-c80d82bf1b3d"
# In **cmd.exe**, there is no such thing as a script block (or **ScriptBlock**
# type), so the value passed to **Command** will _always_ be a string. You can
# write a script block inside the string, but instead of being executed it will
# behave exactly as though you typed it at a typical PowerShell prompt, printing
# the contents of the script block back out to you.
#
# A string passed to **Command** will still be executed as PowerShell, so the
# script block curly braces are often not required in the first place when
# running from **cmd.exe**. To execute an inline script block defined inside a
# string, the [call operator](about_operators.md#special-operators) `&` can be used:
# + azdata_cell_guid="bf516af4-1f53-4df5-a528-3b95da588d0d"
pwsh -Command "& {Get-WinEvent -LogName security}"
# + [markdown] azdata_cell_guid="5ed85efb-1615-422f-8fc6-0190e02d0d30"
# If the value of **Command** is a string, **Command** must be the last parameter
# for pwsh, because all arguments following it are interpreted as part of the
# command to execute.
#
# The results are returned to the parent shell as deserialized XML objects, not
# live objects.
#
# If the value of **Command** is "-", the command text is read from standard
# input. You must redirect standard input when using the **Command** parameter
# with standard input. For example:
# + azdata_cell_guid="07dde2a6-aca1-416f-8263-3b8ddf8accbe"
@'
"in"
"hi" |
  % { "$_ there" }
"out"
'@ | pwsh -NoProfile -Command -
# + [markdown] azdata_cell_guid="58d63e4c-7c01-468c-8806-640fb8100b47"
# ### -ConfigurationName | -config
#
# Specifies a configuration endpoint in which PowerShell is run. This can be any
# endpoint registered on the local machine including the default PowerShell
# remoting endpoints or a custom endpoint having specific user role capabilities.
#
# Example: `pwsh -ConfigurationName AdminRoles`
#
# ### -CustomPipeName
#
# Specifies the name to use for an additional IPC server (named pipe) used for
# debugging and other cross-process communication. This offers a predictable
# mechanism for connecting to other PowerShell instances. Typically used with the
# **CustomPipeName** parameter on `Enter-PSHostProcess`.
#
# This parameter was introduced in PowerShell 6.2.
#
# For example:
#
# ```powershell
# # PowerShell instance 1
# pwsh -CustomPipeName mydebugpipe
# # PowerShell instance 2
# Enter-PSHostProcess -CustomPipeName mydebugpipe
# ```
#
# ### -EncodedCommand | -e | -ec
#
# Accepts a base64-encoded string version of a command. Use this parameter to
# submit commands to PowerShell that require complex quotation marks or curly
# braces. The string must be formatted using UTF-16 character encoding.
#
# For example:
# + azdata_cell_guid="174c528c-aaa0-4718-825e-48f08d9b41d7"
$command = 'dir "c:\program files" '
$bytes = [System.Text.Encoding]::Unicode.GetBytes($command)
$encodedCommand = [Convert]::ToBase64String($bytes)
pwsh -encodedcommand $encodedCommand
# + [markdown] azdata_cell_guid="3376f31f-389d-4376-99e5-4a93cad36793"
# ### -InputFormat | -in | -if
#
# Describes the format of data sent to PowerShell. Valid values are "Text" (text
# strings) or "XML" (serialized CLIXML format).
#
# ### -Interactive | -i
#
# Present an interactive prompt to the user. Inverse for NonInteractive
# parameter.
#
# ### -Login | -l
#
# On Linux and macOS, starts PowerShell as a login shell,
# using /bin/sh to execute login profiles such as /etc/profile and ~/.profile.
# On Windows, this switch does nothing.
#
# > [!IMPORTANT]
# > This parameter must come first to start PowerShell as a login shell.
# > Passing this parameter in another position will be ignored.
#
# To set up `pwsh` as the login shell on UNIX-like operating systems:
#
# - Verify that the full absolute path to `pwsh` is listed under `/etc/shells`
# - This path is usually something like `/usr/bin/pwsh` on Linux or
# `/usr/local/bin/pwsh` on macOS
# - With some installation methods, this entry will be added automatically at installation time
# - If `pwsh` is not present in `/etc/shells`, use an editor to append the path
# to `pwsh` on the last line. This requires elevated privileges to edit.
# - Use the [chsh](https://linux.die.net/man/1/chsh) utility to set your current
# user's shell to `pwsh`:
#
# ```sh
# chsh -s /usr/bin/pwsh
# ```
#
# > [!WARNING]
# > Setting `pwsh` as the login shell is currently not supported on Windows
# > Subsystem for Linux (WSL), and attempting to set `pwsh` as the login shell
# > there may lead to being unable to start WSL interactively.
#
# ### -MTA
#
# Start PowerShell using a multi-threaded apartment. This switch is only
# available on Windows.
#
# ### -NoExit | -noe
#
# Does not exit after running startup commands.
#
# Example: `pwsh -NoExit -Command Get-Date`
#
# ### -NoLogo | -nol
#
# Hides the copyright banner at startup.
#
# ### -NonInteractive | -noni
#
# Does not present an interactive prompt to the user.
#
# ### -NoProfile | -nop
#
# Does not load the PowerShell profile.
#
# ### -OutputFormat | -o | -of
#
# Determines how output from PowerShell is formatted. Valid values are "Text"
# (text strings) or "XML" (serialized CLIXML format).
#
# Example: `pwsh -o XML -c Get-Date`
#
# ### -SettingsFile | -settings
#
# Overrides the system-wide `powershell.config.json` settings file for the
# session. By default, system-wide settings are read from the
# `powershell.config.json` in the `$PSHOME` directory.
#
# Note that these settings are not used by the endpoint specified by the
# `-ConfigurationName` argument.
#
# Example: `pwsh -SettingsFile c:\myproject\powershell.config.json`
#
# ### -STA
#
# Start PowerShell using a single-threaded apartment. This is the default. This
# switch is only available on Windows.
#
# ### -Version | -v
#
# Displays the version of PowerShell. Additional parameters are ignored.
#
# ### -WindowStyle | -w
#
# Sets the window style for the session. Valid values are Normal, Minimized,
# Maximized and Hidden.
#
# ### -WorkingDirectory | -wd
#
# Sets the initial working directory when PowerShell starts. Any valid
# PowerShell file path is supported.
#
# To start PowerShell in your home directory, use: `pwsh -WorkingDirectory ~`
#
# ### -Help, -?, /?
#
# Displays help for **pwsh**. If you are typing a pwsh command in PowerShell,
# prepend the command parameters with a hyphen (`-`), not a forward slash (`/`).
# You can use either a hyphen or forward slash in Cmd.exe.
| about_pwsh.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # About this Notebook
#
# Bayesian Probabilistic Matrix Factorization - Autoregressive (BPMF-AR) model for spatiotemporal short-term prediction.
# +
import numpy as np
from numpy import linalg as LA
from numpy.random import multivariate_normal
from scipy.stats import wishart
def Normal_Wishart(mu_0, lamb, W, nu, seed = None):
"""Function drawing a Gaussian-Wishart random variable"""
Lambda = wishart(df = nu, scale = W, seed = seed).rvs()
cov = np.linalg.inv(lamb * Lambda)
mu = multivariate_normal(mu_0, cov)
return mu, Lambda
# -
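As a quick, hypothetical sanity check (not part of the original experiments), we can draw one Gaussian-Wishart sample with the same scipy/numpy calls used above and confirm the shapes of the returned mean vector and precision matrix:

```python
import numpy as np
from numpy.random import multivariate_normal
from scipy.stats import wishart

# Hypothetical sanity check mirroring Normal_Wishart: draw Lambda ~ Wishart,
# then mu ~ N(mu_0, (lamb * Lambda)^{-1}), and confirm the shapes.
rank = 3
Lambda = wishart(df=rank, scale=np.eye(rank), seed=0).rvs()
mu = multivariate_normal(np.zeros(rank), np.linalg.inv(1.0 * Lambda))
print(mu.shape, Lambda.shape)  # expect (3,) and (3, 3)
```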
# # Matrix Computation Concepts
#
# ## Kronecker product
#
# - **Definition**:
#
# Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, then, the **Kronecker product** between these two matrices is defined as
#
# $$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$
# where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).
#
# - **Example**:
#
# If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then, we have
#
# $$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$
#
# $$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$
#
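The worked example above can be checked directly with NumPy's built-in `np.kron`:

```python
import numpy as np

# Verify the Kronecker product example above with np.kron.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
C = np.kron(A, B)
print(C.shape)  # (4, 6): m1*m2 rows by n1*n2 columns
print(C)
```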
# ## Khatri-Rao product (`kr_prod`)
#
# - **Definition**:
#
# Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the **Khatri-Rao product** (or **column-wise Kronecker product**) of $A$ and $B$ is given as follows,
#
# $$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r}$$
# where the symbol $\odot$ denotes Khatri-Rao product, and $\otimes$ denotes Kronecker product.
#
# - **Example**:
#
# If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then, we have
#
# $$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$
#
# $$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$
#
# $$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
def kr_prod(a, b):
    """Khatri-Rao product of two matrices with the same number of columns."""
    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
def BPMF(dense_mat, sparse_mat, binary_mat, W, X, maxiter1, maxiter2):
dim1 = sparse_mat.shape[0]
dim2 = sparse_mat.shape[1]
rank = W.shape[1]
pos = np.where((dense_mat>0) & (sparse_mat==0))
position = np.where(sparse_mat > 0)
beta0 = 1
nu0 = rank
mu0 = np.zeros((rank))
tau = 1
a0 = 1e-6
b0 = 1e-6
W0 = np.eye(rank)
for iter in range(maxiter1):
W_bar = np.mean(W, axis = 0)
var_mu0 = (dim1 * W_bar + beta0 * mu0)/(dim1 + beta0)
var_nu = dim1 + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ dim1 * np.cov(W.T) + dim1 * beta0/(dim1 + beta0)
* np.outer(W_bar - mu0, W_bar - mu0))
var_W = (var_W + var_W.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim1 + beta0, var_W, var_nu, seed = None)
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat.T).reshape([rank, rank, dim1]) + np.dstack([var_Lambda0] * dim1)
var4 = tau * np.matmul(var1, sparse_mat.T) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim1)[0, :, :]
for i in range(dim1):
var_Lambda1 = var3[ :, :, i]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, i])
W[i, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
X_bar = np.mean(X, axis = 0)
var_mu0 = (dim2 * X_bar + beta0 * mu0)/(dim2 + beta0)
var_nu = dim2 + nu0
var_X = np.linalg.inv(np.linalg.inv(W0)
+ dim2 * np.cov(X.T) + dim2 * beta0/(dim2 + beta0)
* np.outer(X_bar - mu0, X_bar - mu0))
var_X = (var_X + var_X.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim2 + beta0, var_X, var_nu, seed = None)
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank, dim2]) + np.dstack([var_Lambda0] * dim2)
var4 = tau * np.matmul(var1, sparse_mat) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim2)[0, :, :]
for t in range(dim2):
var_Lambda1 = var3[ :, :, t]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, t])
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
mat_hat = np.matmul(W, X.T)
rmse = np.sqrt(np.sum((dense_mat[pos] - mat_hat[pos])**2)/dense_mat[pos].shape[0])
var_a = a0 + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_b = b0 + 0.5 * np.sum(error[position]**2)
tau = np.random.gamma(var_a, 1/var_b)
if (iter + 1) % 100 == 0:
print('Iter: {}'.format(iter + 1))
print('RMSE: {:.6}'.format(rmse))
print()
W_plus = np.zeros((dim1, rank))
X_plus = np.zeros((dim2, rank))
mat_hat_plus = np.zeros((dim1, dim2))
for iters in range(maxiter2):
W_bar = np.mean(W, axis = 0)
var_mu0 = (dim1 * W_bar + beta0 * mu0)/(dim1 + beta0)
var_nu = dim1 + nu0
var_W = np.linalg.inv(np.linalg.inv(W0)
+ dim1 * np.cov(W.T) + dim1 * beta0/(dim1 + beta0)
* np.outer(W_bar - mu0, W_bar - mu0))
var_W = (var_W + var_W.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim1 + beta0, var_W, var_nu, seed = None)
var1 = X.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat.T).reshape([rank, rank, dim1]) + np.dstack([var_Lambda0] * dim1)
var4 = tau * np.matmul(var1, sparse_mat.T) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim1)[0, :, :]
for i in range(dim1):
var_Lambda1 = var3[ :, :, i]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, i])
W[i, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
W_plus += W
X_bar = np.mean(X, axis = 0)
var_mu0 = (dim2 * X_bar + beta0 * mu0)/(dim2 + beta0)
var_nu = dim2 + nu0
var_X = np.linalg.inv(np.linalg.inv(W0)
+ dim2 * np.cov(X.T) + dim2 * beta0/(dim2 + beta0)
* np.outer(X_bar - mu0, X_bar - mu0))
var_X = (var_X + var_X.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim2 + beta0, var_X, var_nu, seed = None)
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank, dim2]) + np.dstack([var_Lambda0] * dim2)
var4 = tau * np.matmul(var1, sparse_mat) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim2)[0, :, :]
for t in range(dim2):
var_Lambda1 = var3[ :, :, t]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, t])
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
X_plus += X
mat_hat = np.matmul(W, X.T)
mat_hat_plus += mat_hat
var_a = a0 + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_b = b0 + 0.5 * np.sum(error[position]**2)
tau = np.random.gamma(var_a, 1/var_b)
W = W_plus/maxiter2
X = X_plus/maxiter2
mat_hat = mat_hat_plus/maxiter2
final_mape = np.sum(np.abs(dense_mat[pos] -
mat_hat[pos])/dense_mat[pos])/dense_mat[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_mat[pos] -
mat_hat[pos])**2)/dense_mat[pos].shape[0])
print('Final MAPE: {:.6}'.format(final_mape))
print('Final RMSE: {:.6}'.format(final_rmse))
print()
return mat_hat, W, X
# ## Data Organization
#
# ### Part 1: Matrix Structure
#
# We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatiotemporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),
#
# $$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$
#
# ### Part 2: Tensor Structure
#
# We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),
#
# $$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$
#
# therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.
# **How to transform a data set into something we can use for time series prediction?**
#
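As a toy illustration (with made-up shapes, not the Birmingham data loaded below), a matrix of $m$ time series spanning $n\times f$ time points can be folded into an $m\times n\times f$ tensor with a single `reshape`, and flattened back the same way:

```python
import numpy as np

# Toy illustration: fold an (m, n*f) matrix of time series into an (m, n, f)
# tensor, e.g. m locations, n days, f intervals per day. Shapes are made up.
m, n, f = 4, 5, 6
mat = np.arange(m * n * f).reshape(m, n * f)
tensor = mat.reshape(m, n, f)
print(tensor.shape)                                # (4, 5, 6)
print(np.allclose(tensor.reshape(m, n * f), mat))  # reshaping back recovers the matrix
```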
# +
import scipy.io
tensor = scipy.io.loadmat('Birmingham-data-set/tensor.mat')
tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']
dense_mat = tensor.reshape([tensor.shape[0], tensor.shape[1] * tensor.shape[2]])
missing_rate = 0.3
# =============================================================================
### Random missing (RM) scenario:
### Set the RM scenario by:
binary_mat = np.round(random_tensor + 0.5 - missing_rate).reshape([random_tensor.shape[0],
random_tensor.shape[1]
* random_tensor.shape[2]])
# =============================================================================
# =============================================================================
### Non-random missing (NM) scenario:
### Set the NM scenario by:
# binary_tensor = np.zeros(tensor.shape)
# for i1 in range(tensor.shape[0]):
# for i2 in range(tensor.shape[1]):
# binary_tensor[i1,i2,:] = np.round(random_matrix[i1,i2] + 0.5 - missing_rate)
# binary_mat = binary_tensor.reshape([binary_tensor.shape[0], binary_tensor.shape[1]
# * binary_tensor.shape[2]])
# =============================================================================
sparse_mat = np.multiply(dense_mat, binary_mat)
# -
# # Rolling Spatiotemporal Prediction
#
# **Problem statement**: Given a partially observed matrix $Y\in\mathbb{R}^{m\times T}$, how do we perform single-step rolling prediction starting at the time interval $f+1$ and ending at the time interval $T$?
#
# The mechanism is:
#
# 1. First learn spatial factors $W\in\mathbb{R}^{m\times r}$, temporal factors $X\in\mathbb{R}^{f\times r}$, and AR coefficients $\boldsymbol{\theta}_{s}\in\mathbb{R}^{d},s=1,2,...,r$ from partially observed matrix $Y\in\mathbb{R}^{m\times f}$.
#
# 2. Predict $\boldsymbol{x}_{f+1}$ by
# $$\hat{\boldsymbol{x}}_{f+1}=\sum_{k=1}^{d}\boldsymbol{\theta}_{k}\circledast\boldsymbol{x}_{f+1-h_k}.$$
#
# 3. Load the partially observed matrix $Y_{f}\in\mathbb{R}^{m\times b}$ ($b$ is the number of back steps), fix the spatial factors $W\in\mathbb{R}^{m\times r}$ and AR coefficients $\boldsymbol{\theta}_{s}\in\mathbb{R}^{d},s=1,2,...,r$, then learn the temporal factors $X\in\mathbb{R}^{b\times r}$.
#
# 4. Re-estimate the AR coefficients $\boldsymbol{\theta}_{s}\in\mathbb{R}^{d},s=1,2,...,r$ from the newly learned temporal factors, using the estimator described in the next section.
#
# 5. Predict $\boldsymbol{x}_{f+2}$ by
# $$\hat{\boldsymbol{x}}_{f+2}=\sum_{k=1}^{d}\boldsymbol{\theta}_{k}\circledast\boldsymbol{x}_{b+1-h_k}.$$
#
# 6. Make prediction iteratively until the time step $T$.
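Step 2 above is an elementwise AR combination of lagged temporal factor vectors. A minimal sketch (with made-up numbers) using the same `einsum` pattern as the prediction code in this notebook:

```python
import numpy as np

# Minimal sketch of step 2: predict the next temporal factor vector as an
# elementwise AR combination of lagged factors. All numbers are made up.
rank = 3
time_lags = np.array([1, 2])             # lags h_1, h_2
theta = np.random.rand(len(time_lags), rank)  # AR coefficients, one column per factor
X = np.random.rand(10, rank)             # temporal factors x_1, ..., x_10 (rows)
# x_hat_{11} = theta_1 * x_10 + theta_2 * x_9, elementwise per factor
x_next = np.einsum('ij, ij -> j', theta, X[X.shape[0] - time_lags, :])
print(x_next.shape)  # (3,)
```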
# ## How to estimate AR coefficients?
#
# $$\hat{\boldsymbol{\theta}}=\left(Q^\top\Sigma_{\eta}^{-1}Q+\Sigma_{\theta}^{-1}\right)^{-1}Q^\top\Sigma_{\eta}^{-1}P$$
# where
# $$Q=[\tilde{\boldsymbol{x}}_{h_d+1},\cdots,\tilde{\boldsymbol{x}}_{T}]^{\top}\in\mathbb{R}^{T'\times d}$$
# and
# $$P=[x_{h_d+1},\cdots,x_{T}]^{\top}\in\mathbb{R}^{T'},$$ with $T'=T-h_d$.
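With the identity covariances used in the code below ($\Sigma_{\eta}=I$, $\Sigma_{\theta}=I$), this estimator reduces to ridge regression on lagged factors. A standalone sketch for a single factor $s$, with synthetic data:

```python
import numpy as np

# Standalone sketch of the AR-coefficient estimate for one factor s, with
# synthetic data and identity covariances (so the estimate is ridge regression).
T, d = 50, 3
time_lags = np.array([1, 2, 5])
x = np.random.rand(T)                     # temporal factor series for factor s
h_max = np.max(time_lags)
P = x[h_max:T]                            # targets x_{h_d+1}, ..., x_T
Q = np.stack([x[t - time_lags] for t in range(h_max, T)])  # lagged regressors
theta_s = np.linalg.solve(Q.T @ Q + np.eye(d), Q.T @ P)    # (Q^T Q + I)^{-1} Q^T P
print(theta_s.shape)  # (3,)
```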
def OfflineBPMF(sparse_mat, init, time_lags, maxiter1, maxiter2):
    """Offline Bayesian temporal matrix factorization."""
W = init["W"]
X = init["X"]
d=time_lags.shape[0]
dim1 = sparse_mat.shape[0]
dim2 = sparse_mat.shape[1]
rank = W.shape[1]
position = np.where(sparse_mat > 0)
binary_mat = np.zeros((dim1, dim2))
binary_mat[position] = 1
tau = 1
alpha = 1e-6
beta = 1e-6
beta0 = 1
nu0 = rank
mu0 = np.zeros((rank))
W0 = np.eye(rank)
for iter in range(maxiter1):
X_bar = np.mean(X, axis = 0)
var_mu0 = (dim2 * X_bar + beta0 * mu0)/(dim2 + beta0)
var_nu = dim2 + nu0
var_X = np.linalg.inv(np.linalg.inv(W0)
+ dim2 * np.cov(X.T) + dim2 * beta0/(dim2 + beta0)
* np.outer(X_bar - mu0, X_bar - mu0))
var_X = (var_X + var_X.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim2 + beta0, var_X, var_nu, seed = None)
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank, dim2]) + np.dstack([var_Lambda0] * dim2)
var4 = tau * np.matmul(var1, sparse_mat) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim2)[0, :, :]
for t in range(dim2):
var_Lambda1 = var3[ :, :, t]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, t])
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
mat_hat = np.matmul(W, X.T)
var_alpha = alpha + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_beta = beta + 0.5 * np.sum(error[position] ** 2)
tau = np.random.gamma(var_alpha, 1/var_beta)
X_plus = np.zeros((dim2, rank))
for iter in range(maxiter2):
X_bar = np.mean(X, axis = 0)
var_mu0 = (dim2 * X_bar + beta0 * mu0)/(dim2 + beta0)
var_nu = dim2 + nu0
var_X = np.linalg.inv(np.linalg.inv(W0)
+ dim2 * np.cov(X.T) + dim2 * beta0/(dim2 + beta0)
* np.outer(X_bar - mu0, X_bar - mu0))
var_X = (var_X + var_X.T)/2
var_mu0, var_Lambda0 = Normal_Wishart(var_mu0, dim2 + beta0, var_X, var_nu, seed = None)
var1 = W.T
var2 = kr_prod(var1, var1)
var3 = tau * np.matmul(var2, binary_mat).reshape([rank, rank, dim2]) + np.dstack([var_Lambda0] * dim2)
var4 = tau * np.matmul(var1, sparse_mat) + np.dstack([np.matmul(var_Lambda0, var_mu0)] * dim2)[0, :, :]
for t in range(dim2):
var_Lambda1 = var3[ :, :, t]
inv_var_Lambda1 = np.linalg.inv((var_Lambda1 + var_Lambda1.T)/2)
var_mu = np.matmul(inv_var_Lambda1, var4[:, t])
X[t, :] = np.random.multivariate_normal(var_mu, inv_var_Lambda1)
X_plus += X
mat_hat = np.matmul(W, X.T)
var_alpha = alpha + 0.5 * sparse_mat[position].shape[0]
error = sparse_mat - mat_hat
var_beta = beta + 0.5 * np.sum(error[position] ** 2)
tau = np.random.gamma(var_alpha, 1/var_beta)
X = X_plus/maxiter2
Sigma_eta = np.eye(dim2 - np.max(time_lags))
Sigma_theta = np.eye(time_lags.shape[0])
theta = np.zeros((time_lags.shape[0], rank))
for s in range(rank):
P = X[np.max(time_lags) : dim2, s]
Q = np.zeros((dim2 - np.max(time_lags), time_lags.shape[0]))
for t in range(np.max(time_lags), dim2):
Q[t - np.max(time_lags), :] = X[t - time_lags, s]
theta[:, s] = np.matmul(np.matmul(np.matmul(np.linalg.inv(np.matmul(np.matmul(Q.T, Sigma_eta), Q)
+ np.linalg.inv(Sigma_theta)),
Q.T), np.linalg.inv(Sigma_eta)), P)
return X, theta
def st_prediction(dense_mat, sparse_mat, pred_time_steps, back_steps, rank, time_lags, maxiter):
start_time = dense_mat.shape[1] - pred_time_steps
dense_mat0 = dense_mat[:, 0 : start_time]
sparse_mat0 = sparse_mat[:, 0 : start_time]
binary_mat0 = np.zeros((sparse_mat0.shape[0], sparse_mat0.shape[1]))
binary_mat0[np.where(sparse_mat0 > 0)] = 1
dim1 = sparse_mat0.shape[0]
dim2 = sparse_mat0.shape[1]
mat_hat = np.zeros((dim1, pred_time_steps))
init = {"W": np.random.rand(dim1, rank),
"X": np.random.rand(dim2, rank)}
mat_hat, W, X = BPMF(dense_mat0, sparse_mat0, binary_mat0, init["W"], init["X"], maxiter[0], maxiter[1])
init["W"] = W.copy()
Sigma_eta = np.eye(dim2 - np.max(time_lags))
Sigma_theta = np.eye(time_lags.shape[0])
theta = np.zeros((time_lags.shape[0], rank))
for s in range(rank):
P = X[np.max(time_lags) : dim2, s]
Q = np.zeros((dim2 - np.max(time_lags), time_lags.shape[0]))
for t in range(np.max(time_lags), dim2):
Q[t - np.max(time_lags), :] = X[t - time_lags, s]
theta[:, s] = np.matmul(np.matmul(np.matmul(np.linalg.inv(np.matmul(np.matmul(Q.T, Sigma_eta), Q)
+ np.linalg.inv(Sigma_theta)),
Q.T), np.linalg.inv(Sigma_eta)), P)
X0 = np.zeros((dim2 + 1, rank))
X0[0 : dim2, :] = X.copy()
X0[dim2, :] = np.einsum('ij, ij -> j', theta, X0[dim2 - time_lags, :])
init["X"] = X0[X0.shape[0] - back_steps : X0.shape[0], :]
mat_hat[:, 0] = np.matmul(W, X0[dim2, :])
for t in range(1, pred_time_steps):
dense_mat1 = dense_mat[:, start_time - back_steps + t : start_time + t]
sparse_mat1 = sparse_mat[:, start_time - back_steps + t : start_time + t]
X, theta = OfflineBPMF(sparse_mat1, init, time_lags, maxiter[2], maxiter[3])
X0 = np.zeros((back_steps + 1, rank))
X0[0 : back_steps, :] = X.copy()
X0[back_steps, :] = np.einsum('ij, ij -> j', theta, X0[back_steps - time_lags, :])
init["X"] = X0[1: back_steps + 1, :]
mat_hat[:, t] = np.matmul(W, X0[back_steps, :])
if (t + 1) % 40 == 0:
print('Time step: {}'.format(t + 1))
small_dense_mat = dense_mat[:, start_time : dense_mat.shape[1]]
pos = np.where(small_dense_mat > 0)
final_mape = np.sum(np.abs(small_dense_mat[pos] -
mat_hat[pos])/small_dense_mat[pos])/small_dense_mat[pos].shape[0]
final_rmse = np.sqrt(np.sum((small_dense_mat[pos] -
mat_hat[pos]) ** 2)/small_dense_mat[pos].shape[0])
print('Final MAPE: {:.6}'.format(final_mape))
print('Final RMSE: {:.6}'.format(final_rmse))
print()
return mat_hat
# The main influential factors for such prediction are:
#
# - The number of back steps $b$ (`back_steps`).
#
# - `rank`.
#
# - `maxiter`.
#
# - `time_lags`.
import time
start = time.time()
pred_time_steps = 18 * 7
back_steps = 18 * 4 * 1
rank = 10
time_lags = np.array([1, 2, 18])
maxiter = np.array([1000, 500, 100, 100])
small_dense_mat = dense_mat[:, dense_mat.shape[1] - pred_time_steps : dense_mat.shape[1]]
mat_hat = st_prediction(dense_mat, sparse_mat, pred_time_steps, back_steps, rank, time_lags, maxiter)
end = time.time()
print('Running time: %d seconds'%(end - start))
# **Experiment results** of short-term traffic prediction with missing values using Bayesian temporal matrix factorization (BTMF):
#
# | scenario |`back_steps`|`rank`|`time_lags`| `maxiter` | mape | rmse |
# |:----------|-----:|-----:|---------:|---------:|-----------:|----------:|
# |**Original data**| $18\times 4$ | 10 | (1,2,18) | (1000,500,10,20) | 0.3548 | **246.326**|
# |**10%, RM**| $18\times 4$ | 10 | (1,2,18) | (1000,500,100,100) | 0.3581 | **213.166**|
# |**30%, RM**| $18\times 4$ | 10 | (1,2,18) | (1000,500,100,100) | 0.3437 | **223.304**|
# |**10%, NM**| $18\times 4$ | 10 | (1,2,18) | (1000,500,100,100) | 0.3795 | **234.416**|
# |**30%, NM**| $18\times 4$ | 10 | (1,2,18) | (1000,500,100,100) | 0.5138 | **301.161**|
#
| toy-examples/Prediction-ST-BPMF-AR-Bdata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import yt
ds = yt.load('IsolatedGalaxy/galaxy0030/galaxy0030')
slc = yt.SlicePlot(ds, 'x', 'density')
slc
# `PlotWindow` plots are containers for plots, keyed to field names. Below, we get a copy of the plot for the `Density` field.
plot = slc.plots['density']
# The plot has a few attributes that point to underlying `matplotlib` plot primitives. For example, the `colorbar` object corresponds to the `cb` attribute of the plot.
colorbar = plot.cb
# Next, we call `_setup_plots()` to ensure the plot is properly initialized. Without this, the custom tickmarks we are adding will be ignored.
slc._setup_plots()
# To set custom tickmarks, simply call the `matplotlib` [`set_ticks`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticks) and [`set_ticklabels`](http://matplotlib.org/api/colorbar_api.html#matplotlib.colorbar.ColorbarBase.set_ticklabels) functions.
colorbar.set_ticks([1e-28])
colorbar.set_ticklabels(['$10^{-28}$'])
slc
| doc/source/cookbook/custom_colorbar_tickmarks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cassidyhanna/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="11OzdxWTM7UR" colab_type="text"
# ## Assignment - Build a confidence interval
#
# A confidence interval refers to a neighborhood around some point estimate, the size of which is determined by the desired p-value. For instance, we might say that 52% of Americans prefer tacos to burritos, with a 95% confidence interval of +/- 5%.
#
# 52% (0.52) is the point estimate, and +/- 5% (the interval $[0.47, 0.57]$) is the confidence interval. "95% confidence" means a p-value $\leq 1 - 0.95 = 0.05$.
#
# In this case, the confidence interval includes $0.5$ - which is the natural null hypothesis (that half of Americans prefer tacos and half burritos, thus there is no clear favorite). So in this case, we could use the confidence interval to report that we've failed to reject the null hypothesis.
#
# But providing the full analysis with a confidence interval, including a graphical representation of it, can be a helpful and powerful way to tell your story. Done well, it is also more intuitive to a layperson than simply saying "fail to reject the null hypothesis" - it shows that in fact the data does *not* give a single clear result (the point estimate) but a whole range of possibilities.
#
# How is a confidence interval built, and how should it be interpreted? It does *not* mean that 95% of the data lies in that interval - instead, the frequentist interpretation is "if we were to repeat this experiment 100 times, we would expect the computed interval to contain the true value ~95 times."
#
# For a 95% confidence interval and a normal(-ish) distribution, you can simply remember that +/-2 standard deviations contains 95% of the probability mass, and so the 95% confidence interval based on a given sample is centered at the mean (point estimate) and has a range of +/- 2 (or technically 1.96) standard deviations.
#
# Different distributions/assumptions (90% confidence, 99% confidence) will require different math, but the overall process and interpretation (with a frequentist approach) will be the same.
#
# Your assignment - using the data from the prior module ([congressional voting records](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records)):
#
#
# ### Confidence Intervals:
# 1. Generate and numerically represent a confidence interval
# 2. Graphically (with a plot) represent the confidence interval
# 3. Interpret the confidence interval - what does it tell you about the data and its distribution?
#
# ### Chi-squared tests:
# 4. Take a dataset that we have used in the past in class that has **categorical** variables. Pick two of those categorical variables and run a chi-squared tests on that data
# - By hand using Numpy
# - In a single line using Scipy
#
# + id="Ckcr4A4FM7cs" colab_type="code" colab={}
# TODO - your code!
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from scipy.stats import chisquare
from scipy.stats import ks_2samp
from matplotlib import style
from scipy.stats import ttest_ind, ttest_ind_from_stats, ttest_rel
data = 'https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data'
df = pd.read_csv(data, names = ['class name','handicapped-infants', 'water-project-cost-sharing','adoption-of-the-budget-resolution','physician-fee-freeze','el-salvador-aid','religious-groups-in-schools','anti-satellite-test-ban','aid-to-nicaraguan-contras','mx-missile',
'immigration','synfuels-corporation-cutback','education-spending','superfund-right-to-sue','crime','duty-free-exports','export-administration-act-south-africa'])
# + id="OMRr5Yb7PVnO" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 275} outputId="b64c92c9-c0f7-45cd-8ce7-64bf9bc9fc3f"
df = df.replace({'y': 1.0, 'n': 0.0, '?': np.nan})
df = df.dropna()
df.head()
# + id="oHg_Fa7tQmjK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="732caef8-9bf0-483d-89b5-47bf9b271863"
df.shape
# + id="41mQV4DYPi0d" colab_type="code" colab={}
demo = df[df['class name'] == 'democrat']
# + id="jJKRQfi2Pm1M" colab_type="code" colab={}
rep = df[df['class name'] == 'republican']
# + id="tAyNcFm_wxnS" colab_type="code" colab={}
demo_crime = demo['crime']
# + id="8E1vC4wTNFqq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b90c6434-9846-48a2-d4d4-657c786c9f17"
sum(demo_crime)
# + id="oT6MO47R5zc4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="0e10a927-d1ae-4f71-d998-8c0c41f3f463"
demo_crime.head()
# + id="BJ7nlR43QD9Y" colab_type="code" colab={}
rep_crime = rep['crime']
# + id="BxdCClPGQ_gh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="adc5d82e-9dd3-4627-a89e-c31e3d42a8e1"
rep_crime.head()
# + id="iYe-0fX-QSbt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="fa10447c-d9b9-4d65-a015-35c80cad0a35"
ttest_ind(demo_crime, rep_crime, nan_policy='omit')
# + id="tYSyZ_JzQkAn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="d9940fbc-e8e1-4c10-aa70-8d08938f0630"
plt.hist(demo_crime, color ='b', alpha=0.5)
plt.hist(rep_crime, color ='r', alpha=0.5);
# + id="BoRUPZDB3qA7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="44ea3603-293d-45ab-c0d7-c814cf2dd232"
sns.distplot(rep_crime, color ='r')
sns.distplot(demo_crime, color ='b');
plt.xlim(-.1,1.1)
plt.ylim(0,60)
# + [markdown] id="ikIN02rvuUic" colab_type="text"
# # **Generate and numerically represent a confidence interval**
# + id="QgCtSWseuMwM" colab_type="code" colab={}
def confidence_interval(data, confidence =.95):
data = np.array(data)
n = len(data)
mean = np.mean(data)
stderr = stats.sem(data)
t = stats.t.ppf((1 + confidence)/2.0, n-1)
interval = stderr * t
return(mean,mean-interval,mean+interval)
# + id="VNmO4oNFv5yP" colab_type="code" colab={}
rep_crime_data = confidence_interval(rep_crime, confidence =.95)
demo_crime_data = confidence_interval(demo_crime, confidence =.95)
# + id="8xegpWWSDMFb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="9f937223-4308-487b-9ad7-4fcba3112dc2"
demo_crime_data
# + id="fqtBKP9Ixa0y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 300} outputId="69ccbf10-8f5f-4aa8-e76e-a6a642aac4d9"
sns.distplot(rep_crime, color ='r')
sns.distplot(rep_crime_data, color ='b');
plt.xlim(-.1,1.1)
plt.ylim(0,60)
# + id="nnDGN6V171ns" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="f7f158b8-e34d-486d-e9ef-c40fa300bb83"
sns.distplot(rep_crime_data, color ='r');
# + id="r3r7vv9T26pY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="a1534f76-93ff-44e8-f52f-36a3a5ae1dc0"
sns.distplot(demo_crime_data, color ='c')
# + id="vZ7bOmho3Hci" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} outputId="73bff25d-8acb-4ea0-f619-8674ed6095d3"
sns.distplot(rep_crime_data, color ='r')
sns.distplot(demo_crime_data, color ='c');
# + [markdown] id="VHtKHHMc6dm9" colab_type="text"
#
# + id="w8HNrjDLOitx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="649f5591-121c-4490-a196-190ef0c84fef"
print(chisquare(rep_crime))
# + id="HxHXh_2rjJ7q" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="2a892458-7270-456b-f23e-b5d113d3887c"
#ks_2samp(rep_crime,demo_crime)
# + [markdown] id="4ohsJhQUmEuS" colab_type="text"
# ## Stretch goals:
#
# 1. Write a summary of your findings, mixing prose and math/code/results. *Note* - yes, this is by definition a political topic. It is challenging but important to keep your writing voice *neutral* and stick to the facts of the data. Data science often involves considering controversial issues, so it's important to be sensitive about them (especially if you want to publish).
# 2. Apply the techniques you learned today to your project data or other data of your choice, and write/discuss your findings here.
# 3. Refactor your code so it is elegant, readable, and can be easily run for all issues.
# + [markdown] id="nyJ3ySr7R2k9" colab_type="text"
# ## Resources
#
# - [Interactive visualize the Chi-Squared test](https://homepage.divms.uiowa.edu/~mbognar/applets/chisq.html)
# - [Calculation of Chi-Squared test statistic](https://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test)
# - [Visualization of a confidence interval generated by R code](https://commons.wikimedia.org/wiki/File:Confidence-interval.svg)
# - [Expected value of a squared standard normal](https://math.stackexchange.com/questions/264061/expected-value-calculation-for-squared-normal-distribution) (it's 1 - which is why the expected value of a Chi-Squared with $n$ degrees of freedom is $n$, as it's the sum of $n$ squared standard normals)
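The last point can be checked numerically: a quick simulation (not part of the original assignment) shows the mean of the sum of $n$ squared standard normals is close to $n$.

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.standard_normal((100_000, 5))   # 100k draws of 5 standard normals
chi2_samples = (z ** 2).sum(axis=1)     # each row ~ Chi-Squared with 5 dof
print(chi2_samples.mean())              # close to 5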
| LS_DS_132_Sampling_Confidence_Intervals_and_Hypothesis_Testing_Assignment.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # Comparing TensorFlow (original) and PyTorch models
#
# You can use this small notebook to check the conversion of the model's weights from the TensorFlow model to the PyTorch model. In the following, we compare the weights of the last layer on a simple example (in `input.txt`), but both models return all the hidden layers so you can check every stage of the model.
#
# To run this notebook, follow these instructions:
# - make sure that your Python environment has both TensorFlow and PyTorch installed,
# - download the original TensorFlow implementation,
# - download a pre-trained TensorFlow model as indicated in the TensorFlow implementation readme,
# - run the script `convert_tf_checkpoint_to_pytorch.py` as indicated in the `README` to convert the pre-trained TensorFlow model to PyTorch.
#
# If needed, change the relative paths indicated in this notebook (at the beginning of Sections 1 and 2) to point to the relevant models and code.
import os
os.chdir('../')
# ## 1/ TensorFlow code
# +
original_tf_inplem_dir = "./tensorflow_code/"
model_dir = "../google_models/uncased_L-12_H-768_A-12/"
vocab_file = model_dir + "vocab.txt"
bert_config_file = model_dir + "bert_config.json"
init_checkpoint = model_dir + "bert_model.ckpt"
input_file = "./samples/input.txt"
max_seq_length = 128
max_predictions_per_seq = 20
masked_lm_positions = [6]
# +
import importlib.util
import sys
import tensorflow as tf
import pytorch_transformers as ppb
def del_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
for keys in keys_list:
FLAGS.__delattr__(keys)
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.extract_features as ef
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.modeling as tfm
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.tokenization as tft
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.run_pretraining as rp
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.create_pretraining_data as cpp
del_all_flags(tf.flags.FLAGS)
import tensorflow_code.optimization as optimization  # needed by the TRAIN branch of model_fn below
# + code_folding=[15.0]
import re
class InputExample(object):
"""A single instance example."""
def __init__(self, tokens, segment_ids, masked_lm_positions,
masked_lm_labels, is_random_next):
self.tokens = tokens
self.segment_ids = segment_ids
self.masked_lm_positions = masked_lm_positions
self.masked_lm_labels = masked_lm_labels
self.is_random_next = is_random_next
def __repr__(self):
return '\n'.join(k + ":" + str(v) for k, v in self.__dict__.items())
def read_examples(input_file, tokenizer, masked_lm_positions):
"""Read a list of `InputExample`s from an input file."""
examples = []
unique_id = 0
with tf.gfile.GFile(input_file, "r") as reader:
while True:
line = reader.readline()
if not line:
break
line = line.strip()
text_a = None
text_b = None
m = re.match(r"^(.*) \|\|\| (.*)$", line)
if m is None:
text_a = line
else:
text_a = m.group(1)
text_b = m.group(2)
tokens_a = tokenizer.tokenize(text_a)
tokens_b = None
if text_b:
tokens_b = tokenizer.tokenize(text_b)
tokens = tokens_a + (tokens_b or [])  # guard against single-sentence input lines
masked_lm_labels = []
for m_pos in masked_lm_positions:
masked_lm_labels.append(tokens[m_pos])
tokens[m_pos] = '[MASK]'
examples.append(
InputExample(
tokens = tokens,
segment_ids = [0] * len(tokens_a) + [1] * len(tokens_b or []),
masked_lm_positions = masked_lm_positions,
masked_lm_labels = masked_lm_labels,
is_random_next = False))
unique_id += 1
return examples
# +
bert_config = tfm.BertConfig.from_json_file(bert_config_file)
tokenizer = ppb.BertTokenizer(
vocab_file=vocab_file, do_lower_case=True)
examples = read_examples(input_file, tokenizer, masked_lm_positions=masked_lm_positions)
print(examples[0])
# + code_folding=[16.0]
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, input_ids, input_mask, segment_ids, masked_lm_positions,
masked_lm_ids, masked_lm_weights, next_sentence_label):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.masked_lm_positions = masked_lm_positions
self.masked_lm_ids = masked_lm_ids
self.masked_lm_weights = masked_lm_weights
self.next_sentence_labels = next_sentence_label
def __repr__(self):
return '\n'.join(k + ":" + str(v) for k, v in self.__dict__.items())
def pretraining_convert_examples_to_features(instances, tokenizer, max_seq_length,
max_predictions_per_seq):
"""Create TF example files from `TrainingInstance`s."""
features = []
for (inst_index, instance) in enumerate(instances):
input_ids = tokenizer.convert_tokens_to_ids(instance.tokens)
input_mask = [1] * len(input_ids)
segment_ids = list(instance.segment_ids)
assert len(input_ids) <= max_seq_length
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
masked_lm_positions = list(instance.masked_lm_positions)
masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels)
masked_lm_weights = [1.0] * len(masked_lm_ids)
while len(masked_lm_positions) < max_predictions_per_seq:
masked_lm_positions.append(0)
masked_lm_ids.append(0)
masked_lm_weights.append(0.0)
next_sentence_label = 1 if instance.is_random_next else 0
features.append(
InputFeatures(input_ids, input_mask, segment_ids,
masked_lm_positions, masked_lm_ids,
masked_lm_weights, next_sentence_label))
if inst_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("tokens: %s" % " ".join(
[str(x) for x in instance.tokens]))
tf.logging.info("features: %s" % str(features[-1]))
return features
# -
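The zero-padding performed above, for `input_ids`/`input_mask`/`segment_ids` up to `max_seq_length` and for the masked-LM arrays up to `max_predictions_per_seq`, reduces to this small helper (shown in isolation; `pad_to` is an illustrative name, not part of the original code):

```python
def pad_to(seq, length, pad_value=0):
    """Right-pad `seq` with `pad_value` until it reaches `length`."""
    assert len(seq) <= length
    return list(seq) + [pad_value] * (length - len(seq))

print(pad_to([101, 2023, 102], 6))      # ids padded with 0
print(pad_to([1.0], 3, pad_value=0.0))  # weights padded with 0.0
```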
features = pretraining_convert_examples_to_features(
instances=examples, max_seq_length=max_seq_length,
max_predictions_per_seq=max_predictions_per_seq, tokenizer=tokenizer)
def input_fn_builder(features, seq_length, max_predictions_per_seq, tokenizer):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_input_ids = []
all_input_mask = []
all_segment_ids = []
all_masked_lm_positions = []
all_masked_lm_ids = []
all_masked_lm_weights = []
all_next_sentence_labels = []
for feature in features:
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_segment_ids.append(feature.segment_ids)
all_masked_lm_positions.append(feature.masked_lm_positions)
all_masked_lm_ids.append(feature.masked_lm_ids)
all_masked_lm_weights.append(feature.masked_lm_weights)
all_next_sentence_labels.append(feature.next_sentence_labels)
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
num_examples = len(features)
# This is for demo purposes and does NOT scale to large data sets. We do
# not use Dataset.from_generator() because that uses tf.py_func which is
# not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"segment_ids":
tf.constant(
all_segment_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
"masked_lm_positions":
tf.constant(
all_masked_lm_positions,
shape=[num_examples, max_predictions_per_seq],
dtype=tf.int32),
"masked_lm_ids":
tf.constant(
all_masked_lm_ids,
shape=[num_examples, max_predictions_per_seq],
dtype=tf.int32),
"masked_lm_weights":
tf.constant(
all_masked_lm_weights,
shape=[num_examples, max_predictions_per_seq],
dtype=tf.float32),
"next_sentence_labels":
tf.constant(
all_next_sentence_labels,
shape=[num_examples, 1],
dtype=tf.int32),
})
d = d.batch(batch_size=batch_size, drop_remainder=False)
return d
return input_fn
# + code_folding=[64.0, 77.0]
def model_fn_builder(bert_config, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
tf.logging.info("*** Features ***")
for name in sorted(features.keys()):
tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
masked_lm_positions = features["masked_lm_positions"]
masked_lm_ids = features["masked_lm_ids"]
masked_lm_weights = features["masked_lm_weights"]
next_sentence_labels = features["next_sentence_labels"]
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
model = tfm.BertModel(
config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
(masked_lm_loss,
masked_lm_example_loss, masked_lm_log_probs) = rp.get_masked_lm_output(
bert_config, model.get_sequence_output(), model.get_embedding_table(),
masked_lm_positions, masked_lm_ids, masked_lm_weights)
(next_sentence_loss, next_sentence_example_loss,
next_sentence_log_probs) = rp.get_next_sentence_output(
bert_config, model.get_pooled_output(), next_sentence_labels)
total_loss = masked_lm_loss + next_sentence_loss
tvars = tf.trainable_variables()
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
(assignment_map,
initialized_variable_names) = tfm.get_assigment_map_from_checkpoint(
tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
masked_lm_positions = features["masked_lm_positions"]
masked_lm_ids = features["masked_lm_ids"]
masked_lm_weights = features["masked_lm_weights"]
next_sentence_labels = features["next_sentence_labels"]
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op,
scaffold_fn=scaffold_fn)
elif mode == tf.estimator.ModeKeys.EVAL:
masked_lm_positions = features["masked_lm_positions"]
masked_lm_ids = features["masked_lm_ids"]
masked_lm_weights = features["masked_lm_weights"]
next_sentence_labels = features["next_sentence_labels"]
def metric_fn(masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
masked_lm_weights, next_sentence_example_loss,
next_sentence_log_probs, next_sentence_labels):
"""Computes the loss and accuracy of the model."""
masked_lm_log_probs = tf.reshape(masked_lm_log_probs,
[-1, masked_lm_log_probs.shape[-1]])
masked_lm_predictions = tf.argmax(
masked_lm_log_probs, axis=-1, output_type=tf.int32)
masked_lm_example_loss = tf.reshape(masked_lm_example_loss, [-1])
masked_lm_ids = tf.reshape(masked_lm_ids, [-1])
masked_lm_weights = tf.reshape(masked_lm_weights, [-1])
masked_lm_accuracy = tf.metrics.accuracy(
labels=masked_lm_ids,
predictions=masked_lm_predictions,
weights=masked_lm_weights)
masked_lm_mean_loss = tf.metrics.mean(
values=masked_lm_example_loss, weights=masked_lm_weights)
next_sentence_log_probs = tf.reshape(
next_sentence_log_probs, [-1, next_sentence_log_probs.shape[-1]])
next_sentence_predictions = tf.argmax(
next_sentence_log_probs, axis=-1, output_type=tf.int32)
next_sentence_labels = tf.reshape(next_sentence_labels, [-1])
next_sentence_accuracy = tf.metrics.accuracy(
labels=next_sentence_labels, predictions=next_sentence_predictions)
next_sentence_mean_loss = tf.metrics.mean(
values=next_sentence_example_loss)
return {
"masked_lm_accuracy": masked_lm_accuracy,
"masked_lm_loss": masked_lm_mean_loss,
"next_sentence_accuracy": next_sentence_accuracy,
"next_sentence_loss": next_sentence_mean_loss,
}
eval_metrics = (metric_fn, [
masked_lm_example_loss, masked_lm_log_probs, masked_lm_ids,
masked_lm_weights, next_sentence_example_loss,
next_sentence_log_probs, next_sentence_labels
])
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
eval_metrics=eval_metrics,
scaffold_fn=scaffold_fn)
elif mode == tf.estimator.ModeKeys.PREDICT:
masked_lm_log_probs = tf.reshape(masked_lm_log_probs,
[-1, masked_lm_log_probs.shape[-1]])
masked_lm_predictions = tf.argmax(
masked_lm_log_probs, axis=-1, output_type=tf.int32)
next_sentence_log_probs = tf.reshape(
next_sentence_log_probs, [-1, next_sentence_log_probs.shape[-1]])
next_sentence_predictions = tf.argmax(
next_sentence_log_probs, axis=-1, output_type=tf.int32)
masked_lm_predictions = tf.reshape(masked_lm_predictions,
[1, masked_lm_positions.shape[-1]])
next_sentence_predictions = tf.reshape(next_sentence_predictions,
[1, 1])
predictions = {
"masked_lm_predictions": masked_lm_predictions,
"next_sentence_predictions": next_sentence_predictions
}
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
return output_spec
else:
raise ValueError("Only TRAIN, EVAL and PREDICT modes are supported: %s" % (mode))
return output_spec
return model_fn
# +
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
master=None,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=1,
per_host_input_for_training=is_per_host))
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=init_checkpoint,
learning_rate=0,
num_train_steps=1,
num_warmup_steps=1,
use_tpu=False,
use_one_hot_embeddings=False)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=False,
model_fn=model_fn,
config=run_config,
predict_batch_size=1)
input_fn = input_fn_builder(
features=features, seq_length=max_seq_length, max_predictions_per_seq=max_predictions_per_seq,
tokenizer=tokenizer)
# -
tensorflow_all_out = []
for result in estimator.predict(input_fn, yield_single_examples=True):
tensorflow_all_out.append(result)
print(len(tensorflow_all_out))
print(len(tensorflow_all_out[0]))
print(tensorflow_all_out[0].keys())
print("masked_lm_predictions", tensorflow_all_out[0]['masked_lm_predictions'])
print("predicted token", tokenizer.convert_ids_to_tokens(tensorflow_all_out[0]['masked_lm_predictions']))
tensorflow_outputs = tokenizer.convert_ids_to_tokens(tensorflow_all_out[0]['masked_lm_predictions'])[:len(masked_lm_positions)]
print("tensorflow_output:", tensorflow_outputs)
# ## 2/ PyTorch code
from examples import extract_features
from examples.extract_features import *
init_checkpoint_pt = "../google_models/uncased_L-12_H-768_A-12/pytorch_model.bin"
device = torch.device("cpu")
model = ppb.BertForPreTraining.from_pretrained('bert-base-uncased')
model.to(device)
# + code_folding=[]
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_masked_lm_positions = torch.tensor([f.masked_lm_positions for f in features], dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_masked_lm_positions)
eval_sampler = SequentialSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=1)
model.eval()
# -
import numpy as np
pytorch_all_out = []
for input_ids, input_mask, segment_ids, tensor_masked_lm_positions in eval_dataloader:
print(input_ids)
print(input_mask)
print(segment_ids)
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
prediction_scores, _ = model(input_ids, token_type_ids=segment_ids, attention_mask=input_mask)
prediction_scores = prediction_scores[0, tensor_masked_lm_positions].detach().cpu().numpy()
print(prediction_scores.shape)
masked_lm_predictions = np.argmax(prediction_scores, axis=-1).squeeze().tolist()
print(masked_lm_predictions)
pytorch_all_out.append(masked_lm_predictions)
pytorch_outputs = tokenizer.convert_ids_to_tokens(pytorch_all_out[0])[:len(masked_lm_positions)]
print("pytorch_output:", pytorch_outputs)
print("tensorflow_output:", tensorflow_outputs)
| notebooks/Comparing-TF-and-PT-models-MLM-NSP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
sc
rdd = sc.parallelize([1,2,3,4,5,6,7,8,9,10,123,4324435])
rdd.collect()
rdd.count()
rdd0 = rdd.map(lambda x : x * x)
rdd0.collect()
rdd0.sum()
rdd0.mean()
rdd1 = rdd0.join(rdd)  # note: join requires (key, value) pair RDDs; this lazy transformation is replaced by the union below
rdd1 = rdd0.union(rdd)
rdd1.collect()
rdd1.collect()
sc.setCheckpointDir("/opt/spark/data")
rdd1.checkpoint()
rdd1.max()
rdd1.repartition(2)
rdd1.min()
rdd1.partitioner  # an attribute, not a method; None here since no partitioner was set
rdd1
rdd1.id
rdd1.id()
sc.sparkHome()
txts = sc.textFile("/opt/data/README.md")
txts.count()
words = txts.flatMap(lambda line : line.split(" "))
rdd = words.map(lambda w : (w,1))
rdd0 = rdd.reduceByKey(lambda x, y : x+y)
rdd0.collect()
rdd1 = rdd0.sortByKey()
rdd1.collect()
rdd2 = rdd1.map(lambda t :(t[1],t[0]))
rdd2
rdd2.collect()
rdd2.sortByKey().collect()
# sort by value, ascending
rdd2.sortBy(ascending=True,numPartitions=None, keyfunc = lambda x: x[1]).collect()
# sort by key, ascending
rdd2.sortBy(ascending=True,numPartitions=None, keyfunc = lambda x: x[0]).collect()
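The `flatMap`/`map`/`reduceByKey` pipeline above is the classic word count; the same aggregation can be sketched in plain Python (no Spark needed) to see what `reduceByKey(lambda x, y: x + y)` computes per key. The sample lines are illustrative, not the README contents.

```python
from collections import defaultdict

sample_lines = ["spark makes rdds", "rdds makes spark fun"]
pairs = [(w, 1) for line in sample_lines for w in line.split(" ")]  # flatMap + map
counts = defaultdict(int)
for word, n in pairs:                                               # reduceByKey
    counts[word] += n
print(sorted(counts.items()))
```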
# +
# get partition info; the Python API has no direct equivalent of Scala's rdd.partitions.size
#rdd2.partitions.size
# -
# sampleStdev only works on numeric RDDs
#rdd3 = rdd2.sampleStdev()
rdd2.saveAsTextFile("/opt/data/1/")
# + active=""
#
| apps/spark-sample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Calculating the Beta of a Stock
# *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*
# Obtain data for Microsoft and S&P 500 for the period 1st of January 2012 – 31st of December 2016 from Yahoo Finance.
# +
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
tickers = ['MSFT', '^GSPC']
data = pd.DataFrame()
for t in tickers:
data[t] = wb.DataReader(t, data_source='yahoo', start='2012-1-1', end='2016-12-31')['Adj Close']
# -
# Let S&P 500 act as the market.
# *****
# Calculate the beta of Microsoft.
sec_returns = np.log( data / data.shift(1) )
cov = sec_returns.cov() * 250
cov
cov_with_market = cov.iloc[0,1]
cov_with_market
market_var = sec_returns['^GSPC'].var() * 250
market_var
# **Beta:**
# ### $$
# \beta_{MSFT} = \frac{\sigma_{MSFT,m}}{\sigma_{m}^2}
# $$
MSFT_beta = cov_with_market / market_var
MSFT_beta
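As a cross-check on the covariance/variance formula, beta also equals the slope of an OLS regression of the stock's returns on the market's returns. A sketch on synthetic returns (illustrative numbers, not the MSFT data; note `np.cov` defaults to `ddof=1`, so the variance must use the same `ddof`):

```python
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0005, 0.01, size=1000)        # synthetic market returns
true_beta = 1.2
stock = true_beta * market + rng.normal(0, 0.005, size=1000)

beta_cov = np.cov(stock, market)[0, 1] / market.var(ddof=1)
beta_ols = np.polyfit(market, stock, 1)[0]          # OLS slope
print(beta_cov, beta_ols)                           # both close to 1.2
```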
| 23 - Python for Finance/6_The Capital Asset Pricing Model/3_Calculating the Beta of a Stock (3:38)/Calculating the Beta of a Stock - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="x404tNDzbN1x"
# ## **SPAM CLASSIFIER USING NAIVE BAYES:**
# + id="NCeyjySCMv4C"
# Importing libraries
import pandas as pd
import re
import nltk
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix, plot_roc_curve
# + colab={"base_uri": "https://localhost:8080/", "height": 202} id="ejF11mAENEVf" outputId="8b2cc5f5-a869-46c6-a670-82561aa5e8da"
# loading dataset
data = pd.read_csv("/content/drive/MyDrive/Deep Learning/SMSSpamCollection.txt", sep="\t", names=['Labels', 'Messages'])
data.head()
# + id="hOtOgOWNN-yE"
# independent and dependent features
X = data.Messages
y = data.Labels
# + colab={"base_uri": "https://localhost:8080/"} id="2RX7Dt2SOTSb" outputId="23319716-1946-43af-ac03-b153fac471bc"
# show the first 5 values
X.head()
# + colab={"base_uri": "https://localhost:8080/"} id="inK6VWTcOUvL" outputId="cba58e70-e5fd-460a-e799-390ba8138b2d"
# downloading stopwords from nltk library
nltk.download('stopwords')
# create an object for stemmer
stemmer = PorterStemmer()
# creating empty list for storing stemming words
stored_words = []
for i in range(len(X)):
val = re.sub("[^a-zA-Z0-9]", " ", X[i]) # remove everything except letters and digits (punctuation, symbols, etc.)
val = val.lower() # lower the data
words = val.split()
words = [stemmer.stem(word) for word in words if word not in set(stopwords.words('english'))] # do stemming if words not in stopwords
words = " ".join(words)
stored_words.append(words) # appending into our empty list
print(stored_words)
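To make the cleaning step concrete, here is the same regex/lowercase/split treatment applied to a single hypothetical message (stemming and stopword removal are skipped so the snippet stays dependency-free):

```python
import re

message = "WINNER!! Claim your £900 prize now, call 09061701461."
cleaned = re.sub("[^a-zA-Z0-9]", " ", message)  # strip punctuation/symbols
tokens = cleaned.lower().split()
print(tokens)
```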
# + colab={"base_uri": "https://localhost:8080/"} id="fF5SH5WNOijq" outputId="d4bd56e2-3640-4980-baf2-4c45296c298e"
# now convert text data into vectors
tf_vector = TfidfVectorizer() # create an object
X = tf_vector.fit_transform(stored_words).toarray() # convert final value into array
X
# + colab={"base_uri": "https://localhost:8080/"} id="oSJFtihkQjs-" outputId="a5f756b5-b748-4a49-f122-3a89a886b828"
# dependent feature
y.head()
# + colab={"base_uri": "https://localhost:8080/"} id="VEazBM82RZPo" outputId="078f548d-709a-4ece-d34d-a1cbd072e856"
# convert categorical value into numerical value
y = y.replace(["ham", "spam"], [1, 0]) # assign value 0 for spam and 1 for not spam
y.head()
# + id="u-fiT-MkRxS0"
# splitting dataset for training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # test data is about 20 % from overall data
# + colab={"base_uri": "https://localhost:8080/"} id="gzA7oAYwSv0h" outputId="4732a790-b6c0-4fb4-dd43-971702217bb4"
X_train
# + id="WS3rVudCSxV4"
# fit dataset into algorithm
classifier = MultinomialNB(alpha=0.5)
classifier = classifier.fit(X_train, y_train)
pred = classifier.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="dNT6Dsz0TdfF" outputId="e89363c7-5149-442d-8f00-974b641a2b15"
# evaluating our model using metrics
print(f"Accuracy score : {accuracy_score(pred, y_test)}")
print(f"Confusion matrix :\n {confusion_matrix(pred, y_test)}")
print(f"Classification report :\n {classification_report(pred, y_test)}")
| SpamClassifierModel-NBayes/SPAM_CLASSIFIER_NLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
sns.set(style='ticks', context='notebook', rc={'figure.figsize':(7, 5)})
# +
from sklearn.datasets import load_boston
data = load_boston()
df = pd.DataFrame(data=data.data, columns=data.feature_names)
df.drop('B', axis=1, inplace=True)
df['MEDV'] = data.target
df.head()
# -
sns.regplot(x='RM', y="MEDV", data=df)
plt.title('regplot')
plt.show()
# +
import statsmodels.api as sm
df = df.copy()
df['intercept'] = 1
cols = [c for c in df.columns if c != 'MEDV']
lm=sm.OLS(df['MEDV'], df[cols])
slr_results = lm.fit()
slr_results.summary()
# -
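Under the hood, the OLS fit from `statsmodels` solves a least-squares problem; a minimal numpy equivalent on toy data (illustrative, not the Boston data) recovers the generating coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=200)

X = np.column_stack([x, np.ones_like(x)])     # add an intercept column, as above
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                                   # approximately [3.0, 2.0]
```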
| chapter05/03_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import warnings
import numpy as np
import pandas as pd
import scipy.stats as st
# import scipy.optimize as st
import statsmodels.api as sm
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
matplotlib.rcParams['figure.figsize'] = (10,10)
matplotlib.style.use('ggplot')
# -
# ### Getting the elnino dataset to fit a distribution to it
data = pd.Series(sm.datasets.elnino.load_pandas().data.set_index('YEAR').values.ravel())
sns.distplot(data)
# +
parameters = st.norm.fit(data)
print("mean, std.dev: ", parameters)
# -
# ### Look up the KS table
ks_table = st.kstest(data, "norm", parameters)
# ### looking at the KS table
# +
# kstest returns (D statistic, p-value); report both rather than scaling the p-value
d_statistic, p_value = ks_table
print(d_statistic, p_value)
# -
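A self-contained illustration of the `fit` + `kstest` pattern on data that really is normal (synthetic and seeded): the fitted parameters recover the truth and the KS statistic is small.

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(7)
sample = rng.normal(loc=5.0, scale=2.0, size=2000)
mu, sigma = st.norm.fit(sample)                 # MLE fit of mean and std
d_stat, p = st.kstest(sample, "norm", (mu, sigma))
print(mu, sigma, d_stat, p)
```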
# ### Look up the KS table after fitting the data
# Note: for the KS test a *higher* p-value indicates a better fit (we fail to reject the
# hypothesized distribution); for the SSE comparison below, lower is better
# +
DISTRIBUTIONS = [
st.alpha,st.anglit,st.arcsine,st.beta,st.betaprime,st.bradford,st.burr,st.cauchy,st.chi,st.chi2,st.cosine,
st.dgamma,st.dweibull,st.erlang,st.expon,st.exponnorm,st.exponweib,st.exponpow,st.f,st.fatiguelife,st.fisk,
st.foldcauchy,st.foldnorm,st.frechet_r,st.frechet_l,st.genlogistic,st.genpareto,st.gennorm,st.genexpon,
st.genextreme,st.gausshyper,st.gamma,st.gengamma,st.genhalflogistic,st.gilbrat,st.gompertz,st.gumbel_r,
st.gumbel_l,st.halfcauchy,st.halflogistic,st.halfnorm,st.halfgennorm,st.hypsecant,st.invgamma,st.invgauss,
st.invweibull,st.johnsonsb,st.johnsonsu,st.ksone,st.kstwobign,st.laplace,st.levy,st.levy_l,st.levy_stable,
st.logistic,st.loggamma,st.loglaplace,st.lognorm,st.lomax,st.maxwell,st.mielke,st.nakagami,st.ncx2,st.ncf,
st.nct,st.norm,st.pareto,st.pearson3,st.powerlaw,st.powerlognorm,st.powernorm,st.rdist,st.reciprocal,
st.rayleigh,st.rice,st.recipinvgauss,st.semicircular,st.t,st.triang,st.truncexpon,st.truncnorm,st.tukeylambda,
st.uniform,st.vonmises,st.vonmises_line,st.wald,st.weibull_min,st.weibull_max,st.wrapcauchy
]
# putting data in 200 bins
y, x = np.histogram(data, bins=200, density=True)
x = (x + np.roll(x, -1))[:-1] / 2.0
# +
# best dist
best_distribution = st.norm
best_params = (0.0, 1.0)
best_sse = np.inf
# -
for distribution in DISTRIBUTIONS:
# Trying to fit the distribution
try:
# Ignore warnings from data that can't be fit
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
# fit dist to data
params = distribution.fit(data)
arg = params[:-2]
loc = params[-2]
scale = params[-1]
# Calculate fitted PDF and error with fit in distribution
pdf = distribution.pdf(x, loc=loc, scale=scale, *arg)
sse = np.sum(np.power(y - pdf, 2.0))
# if an axis was passed in, add the fitted PDF to the plot
try:
if ax:
pd.Series(pdf, x).plot(ax=ax)
except Exception:
pass
# identify if this distribution is better
if best_sse > sse > 0:
best_distribution = distribution
best_params = params
best_sse = sse
except Exception:
pass
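The selection logic in the loop above (fit each candidate, score it by SSE against the histogram, keep the best) can be sketched with just two candidates: a normal fit versus a deliberately poor uniform fit on normal data. The variable names here are illustrative and use trailing underscores to avoid clobbering the notebook's variables.

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(3)
data_ = rng.normal(0, 1, size=5000)
y_, edges = np.histogram(data_, bins=100, density=True)
x_ = (edges[:-1] + edges[1:]) / 2.0                  # bin midpoints

best_name, best_sse_ = None, np.inf
for dist in (st.norm, st.uniform):
    params_ = dist.fit(data_)
    pdf_ = dist.pdf(x_, *params_[:-2], loc=params_[-2], scale=params_[-1])
    sse_ = np.sum((y_ - pdf_) ** 2)
    if sse_ < best_sse_:
        best_name, best_sse_ = dist.name, sse_
print(best_name)  # the normal fit should beat the uniform fit here
```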
best_distribution.name
# norm.ppf(0.95, loc=0, scale=1)
#
# Returns a 95% significance interval for a one-tail test on a standard normal distribution \
# (i.e. a special case of the normal distribution where the mean is 0 and the standard deviation is 1).
best_params
best_sse
best_arg = best_params[:-2]
best_loc, best_scale = best_params[-2:]
start = best_distribution.ppf(0.01, *best_arg, loc=best_loc, scale=best_scale)
end = best_distribution.ppf(0.99, *best_arg, loc=best_loc, scale=best_scale)
number_of_samples = 10000
x = np.linspace(start, end, number_of_samples)
y = best_distribution.pdf(x, *best_arg, loc=best_loc, scale=best_scale)
pdf = pd.Series(y, x)
sns.distplot(pdf)
| Fitting distribution to Unknown data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit (conda)
# language: python
# name: python3
# ---
# # `abc`: Abstract Base Classes
#
# duck-typing
# : A programming style which does not look at an object's type to determine whether it has the right interface; instead, the method or attribute is simply called or used ("If it looks like a duck and quacks like a duck, it must be a duck.") By emphasizing interfaces rather than specific types, well-designed code improves its flexibility by allowing polymorphic substitution. Duck-typing avoids tests using {func}`type` or {func}`isinstance`. (Note, however, that duck-typing can be complemented with abstract base classes.) Instead, it typically employs {func}`hasattr` tests or {term}`EAFP` programming.
#
# abstract base class
# : Abstract base classes, or ABCs for short, complement {dfn}`duck-typing` by providing a way to define interfaces when other techniques like {func}`hasattr` would be clumsy or subtly wrong (for example with **magic methods**). ABCs introduce virtual subclasses, which are classes that don't inherit from a class but are still recognized by {func}`isinstance` and {func}`issubclass`; see the {mod}`abc` module documentation. Python comes with many built-in ABCs for data structures (in the {mod}`collections.abc` module), numbers (in the {mod}`numbers` module), streams (in the {mod}`io` module), and import finders and loaders (in the {mod}`importlib.abc` module). You can create your own ABCs with the {mod}`abc` module.
#
# The {mod}`collections` module has some concrete classes that derive from ABCs; these can, of course, be further derived. In addition, the {mod}`collections.abc` submodule has some ABCs that can be used to test whether a class or instance provides a particular interface, for example, whether it is hashable or whether it is a mapping.
#
# The {mod}`abc` module provides the metaclass {class}`~abc.ABCMeta`, which can be used to define abstract classes, plus a helper class ABC that lets you define abstract base classes through inheritance.
#
# ## `abc.ABCMeta`
#
# {class}`~abc.ABCMeta` is a metaclass for defining abstract base classes (ABCs). An ABC can be subclassed directly, acting like a mix-in class. You can also register unrelated concrete classes (even built-in classes) as "virtual subclasses" of an ABC; these classes and their descendants will be considered subclasses of the registering ABC by the built-in {func}`issubclass` function, but the registering ABC will not show up in their MRO (Method Resolution Order), nor will method implementations defined by the registering ABC be callable (not even via {func}`super`).
#
# Classes created with {class}`~abc.ABCMeta` as their metaclass have the following methods.
#
# ### `register(subclass)`
#
# Register `subclass` as a "virtual subclass" of this ABC. For example:
# +
from abc import ABCMeta
class MyABC(metaclass=ABCMeta): ...
MyABC.register(tuple)
assert issubclass(tuple, MyABC)
assert isinstance((), MyABC)
# -
# You can also override the {meth}`~abc.ABCMeta.register` method in an abstract base class.
#
# ### `__subclasshook__(subclass)`
#
# ```{note}
# `__subclasshook__(subclass)` must be defined as a class method.
# ```
#
# Check whether `subclass` should be considered a subclass of this ABC. This lets you customize the behavior of `issubclass` directly, without having to call {meth}`~abc.ABCMeta.register` on every class you want to treat as a subclass of the ABC. (This class method is called from the {meth}`~abc.ABCMeta.__subclasscheck__` method of the ABC.)
#
# This method must return `True`, `False` or `NotImplemented`. If it returns `True`, `subclass` is considered a subclass of this ABC. If it returns `False`, `subclass` is not considered a subclass of this ABC, even if it would normally be one. If it returns `NotImplemented`, the subclass check continues with the usual mechanism.
#
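A minimal sketch of `__subclasshook__` with a hypothetical `SupportsLen` ABC: any class defining `__len__` anywhere in its MRO is reported as a subclass without ever calling `register`.

```python
from abc import ABCMeta

class SupportsLen(metaclass=ABCMeta):
    @classmethod
    def __subclasshook__(cls, C):
        # Only answer for SupportsLen itself, not for its subclasses.
        if cls is SupportsLen:
            if any("__len__" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

assert issubclass(list, SupportsLen)     # list defines __len__
assert isinstance({}, SupportsLen)       # dict too
assert not issubclass(int, SupportsLen)  # int has no __len__
```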
# ## `abc.ABC`
#
# A helper class that has {class}`~abc.ABCMeta` as its metaclass. With this class, an abstract base class can be created simply by deriving from ABC, which avoids the sometimes confusing use of a metaclass, for example:
# +
from abc import ABC
class MyABC(ABC): ...
# -
# Note that the type of {class}`~abc.ABC` is still {class}`~abc.ABCMeta`, so inheriting from {class}`~abc.ABC` still requires the usual care around metaclasses, such as multiple inheritance that may lead to metaclass conflicts.
#
# The {mod}`abc` module also provides several decorators.
#
# ## `@abc.abstractmethod`
#
# {meth}`abc.abstractmethod` is a decorator indicating abstract methods.
#
# Using this decorator requires that the class's metaclass be {class}`~abc.ABCMeta` or derived from it. A class whose metaclass is derived from {class}`~abc.ABCMeta` cannot be instantiated unless all of its abstract methods and properties have been overridden. Abstract methods can be called through any of the normal "super" call mechanisms. {meth}`~abc.abstractmethod` may be used to declare abstract methods for properties and descriptors.
#
# Dynamically adding abstract methods to a class, or attempting to modify the abstraction status of a method or class once it is created, is only supported via the {meth}`~abc.update_abstractmethods` function. {meth}`~abc.abstractmethod` only affects subclasses derived using regular inheritance; "virtual subclasses" registered with the ABC's {meth}`~abc.ABCMeta.register` method are not affected.
#
# When {meth}`~abc.abstractmethod` is applied in combination with other method descriptors, it should be applied as the innermost decorator, as shown in the following usage examples:
#
# ```python
# class C(ABC):
# @abstractmethod
# def my_abstract_method(self, ...):
# ...
# @classmethod
# @abstractmethod
# def my_abstract_classmethod(cls, ...):
# ...
# @staticmethod
# @abstractmethod
# def my_abstract_staticmethod(...):
# ...
#
# @property
# @abstractmethod
# def my_abstract_property(self):
# ...
# @my_abstract_property.setter
# @abstractmethod
# def my_abstract_property(self, val):
# ...
#
# @abstractmethod
# def _get_x(self):
# ...
# @abstractmethod
# def _set_x(self, val):
# ...
# x = property(_get_x, _set_x)
# ```
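As a self-contained illustration of `abstractmethod` (the class names here are illustrative): a class with an unimplemented abstract method cannot be instantiated, while a subclass that overrides it can.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        ...

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

try:
    Shape()                 # abstract method not overridden
except TypeError as exc:
    print(exc)              # instantiation of the abstract class fails

assert Square(3).area() == 9
```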
#
# In order to interoperate correctly with the abstract base class machinery, a descriptor must identify itself as abstract using `__isabstractmethod__`. In general, this attribute should be `True` if any of the methods used to compose the descriptor are abstract. For example, Python's built-in `property` does the equivalent of:
#
# ```python
# class Descriptor:
# ...
# @property
# def __isabstractmethod__(self):
# return any(getattr(f, '__isabstractmethod__', False) for
# f in (self._fget, self._fset, self._fdel))
# ```
#
# A read-only property can be defined as follows:
#
# ```python
# class C(ABC):
# @property
# @abstractmethod
# def my_abstract_property(self):
# ...
# ```
#
# A read-write abstract property can be defined by appropriately marking one or more of the underlying methods as abstract:
#
# ```python
# class C(ABC):
# @property
# def x(self):
# ...
#
# @x.setter
# @abstractmethod
# def x(self, val):
# ...
# ```
#
# If only some components are abstract, only those components need to be updated to create a concrete property in a subclass:
#
# ```python
# class D(C):
#     @C.x.setter
#     def x(self, val):
#         ...
# ```
#
# The {mod}`abc` module also provides several functions.
#
# ## `abc.get_cache_token()`
#
# You can use the {func}`~abc.get_cache_token` function to detect calls to {func}`~abc.ABCMeta.register`.
#
# - Returns the current abstract base class cache token.
# - The token is an opaque object (supporting equality testing) identifying the current version of the abstract base class cache for virtual subclasses.
# - The token changes with every call to {func}`~abc.ABCMeta.register` on any {class}`~abc.ABC`.
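# A quick self-contained sketch (with hypothetical class names) showing the token changing after a `register()` call:

```python
import abc
from abc import ABC

class Base(ABC):
    pass

class Plugin:  # does not inherit from Base
    pass

token_before = abc.get_cache_token()
Base.register(Plugin)               # registering a virtual subclass...
token_after = abc.get_cache_token()
# ...invalidates the ABC caches, so the token changes
print(token_before != token_after)  # True
```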
#
# To demonstrate the `__subclasshook__(subclass)` concept, consider this example of an ABC definition:
# +
from abc import ABC, abstractmethod

class Foo:
    def __getitem__(self, index):
        ...

    def __len__(self):
        ...

    def get_iterator(self):
        return iter(self)

class MyIterable(ABC):
    @abstractmethod
    def __iter__(self):
        while False:
            yield None

    def get_iterator(self):
        return self.__iter__()

    @classmethod
    def __subclasshook__(cls, C):
        if cls is MyIterable:
            if any("__iter__" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

MyIterable.register(Foo)
# -
# The ABC `MyIterable` declares the standard iteration method {meth}`~iterator.__iter__` as an abstract method. The implementation given here can still be called from subclasses. The {meth}`get_iterator` method is also part of the `MyIterable` abstract base class, but it does not have to be overridden in non-abstract derived classes.
#
# The {meth}`__subclasshook__` class method defined here says that any class that has an {meth}`~iterator.__iter__` method in its {attr}`~object.__dict__` (or in that of one of its base classes, accessed via the {attr}`~class.__mro__` list) is considered a `MyIterable` too.
#
# Finally, the last line makes `Foo` a virtual subclass of `MyIterable`, even though it does not define an {meth}`~iterator.__iter__` method (it uses the old-style iterable protocol defined in terms of {func}`__len__` and `__getitem__()`). Note that this will not make {meth}`get_iterator` available as a method of `Foo`, so it is provided separately.
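# As a self-contained variant of the same pattern (hypothetical `Sized`/`Box` names), the hook makes `issubclass`/`isinstance` succeed without any inheritance or even an explicit `register()` call:

```python
from abc import ABC, abstractmethod

class Sized(ABC):
    @abstractmethod
    def __len__(self): ...

    @classmethod
    def __subclasshook__(cls, C):
        # Any class with __len__ somewhere in its MRO counts as Sized
        if cls is Sized:
            if any("__len__" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

class Box:  # no inheritance, no register() call
    def __len__(self):
        return 1

print(issubclass(Box, Sized))    # True, via __subclasshook__
print(isinstance(Box(), Sized))  # True
```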
#
# ## `abc.update_abstractmethods(cls)`
#
# `abc.update_abstractmethods(cls)` is a function to recalculate an abstract class's abstraction status. It should be called if a class's abstract methods have been implemented or changed after the class was created. Usually, this function should be called from within a class decorator.
#
# - Returns `cls`, to allow usage as a class decorator.
# - If `cls` is not an instance of {class}`abc.ABCMeta`, does nothing.
#
| doc/python-study/topics/abc.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
from pathlib import Path
import xarray as xr
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import seaborn as sb
# This notebook takes the raw/downloaded information and pre-processes it into a data frame. This pre-processing procedure DOES NOT assume all gridded data is in the same spatio-temporal resolution.
# Decide whether to interpolate all the grids to the same spatial resolution (True) or
# point-inspect grids at the original resolution (False).
use_interpolation = True
# # Outcome variable (or predictand)
#
# The outcome variable is the dry matter per unit area, defined as CC * AGB * BA / CellArea
# ## Area (Km^2)
"""Load area data into an xarray dataset"""
area_data = xr.open_mfdataset("/data1/downloaded/area.nc")
area_data = area_data.layer
area_data.plot()
# ## Combustion completeness (CC)
"""Load CC data into an xarray dataset"""
cc_data = xr.open_dataset("/data1/downloaded/GFED4_basis_regions_cc_all.nc")
cc_data = cc_data.layer
cc_data.plot()
# ## Above Ground Biomass (AGB)
"""Load AGB data into an xarray dataset"""
agb_data = xr.open_dataset("/data1/raw_data/veg_2010_2016/all_veg_data.nc")
agb_data
# Now select below which data variable you would like to use as outcome, this will be used to mask all the other features.
# +
# We suggest selecting one of the maps by Avitabile et al.
# This is the outcome variable and it also acts as a mask for all the features.
# All the features generated from here onwards will depend on this choice!
agb_vartype = "mean"  # it can be: 'mean' or '5th' or '95th'
author = "avitabile"  # it can be: "avitabile", "baccini", "saatchi"
agb_varname = "abg_" + author + "_vod" + agb_vartype
agb_data = agb_data[agb_varname]  # dictionary-style access, safer than eval()
# Units
agb_data.units
# -
agb_data
# Please note the units of AGB are Mg/ha!
agb_data[0].plot()
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
agb_data = agb_data.loc["2010-04-01":"2016-12-31"]
# +
# Mask out where AGB is zero
AGB_THRESH = 0
agb_data = agb_data.where(agb_data > AGB_THRESH)
agb_data[0].plot()
# -
# ## Burned Area (BA)
# +
"""Load BA data into an xarray dataset"""
ba_source = "ESA-CCI" # it can be 'ESA-CCI' or 'GFED4'
if ba_source == "ESA-CCI":
ba_data = xr.open_mfdataset(
"/data1/raw_data/burned_area_2010_2018/201[0-6]*-ESACCI-L4_FIRE-BA-MODIS-fv5.1.nc"
)
# Extract only burned areas
ba_data = ba_data.burned_area
# Please note the units of BA are m2! Units: ba_data.units
# Convert m2 to hectares
ba_data = ba_data * 0.0001
# Rename lat/lon dimensions
ba_data = ba_data.rename({"lon": "longitude", "lat": "latitude"})
elif ba_source == "GFED4":
ba_data = xr.open_mfdataset(
"/data1/downloaded/GFED_nc_2010_2016/gfed_burned_area_20*.nc"
)
# Rename lat/lon dimensions
ba_data = ba_data.rename({"lon": "longitude", "lat": "latitude"})
# Convert area from Km2 to hectares
area_data_h = area_data * 100
ba_data = ba_data.burned_fraction * area_data_h
# -
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
ba_data = ba_data.loc["2010-04-01":"2016-12-31"]
# +
# Mask out low values (small fires)
BA_THRESH = 50 # hectares
ba_data = ba_data.where(ba_data > BA_THRESH)
ba_data[0].plot()
# -
# ## Dry Matter (DM)
#
# The fuel load or dry matter (Mg/Km2) is the Above Ground Biomass (Mg/ha) * Burned Area (ha) * Combustion Completeness (a unitless fraction) / the area of each cell (Km2). This operation is straightforward only because the grids share the same spatial and temporal resolution! Please note the result has values only where BA, CC and AGB are all non-NA.
dm_data = agb_data * ba_data * cc_data / area_data
dm_data[0].plot()
dm_data[80].plot()
# +
# Store data, if needed.
# folder_path = "~/ai-vegetation-fuel/data/inputs/drymatter/"
# Path(folder_path).mkdir(parents=True, exist_ok=True)
# dm_data.to_netcdf(folder_path + "drymatter_Mg_over_Km2_2010-2016_" + ba_source + "_" + agb_varname + ".nc")
# -
dm_data
# Convert cells with non-missing values to a dataframe
df = dm_data.to_dataframe(name="dry_matter").dropna().reset_index()
df
df['dry_matter'].describe()
# Example: retrieve data at the grid cells nearest to the target latitudes, longitudes and time
# y = x["index"].sel(longitude=target_lon, latitude=target_lat, time=target_t, method="nearest")
# Where
target_lon = xr.DataArray(df['longitude'], dims="points")
target_lat = xr.DataArray(df['latitude'], dims="points")
target_t = xr.DataArray(df['time'], dims="points")
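# The commented pattern above relies on xarray's vectorized (pointwise) indexing: passing indexer DataArrays that share a common `points` dimension selects one value per point rather than an outer product. A small synthetic sketch (toy grid, not the real data):

```python
import numpy as np
import xarray as xr

# A tiny synthetic grid standing in for one of the predictor datasets
grid = xr.DataArray(
    np.arange(12.0).reshape(3, 4),
    coords={"latitude": [10.0, 20.0, 30.0], "longitude": [0.0, 1.0, 2.0, 3.0]},
    dims=("latitude", "longitude"),
)

# Indexers sharing the "points" dimension trigger pointwise selection
target_lat = xr.DataArray([10.0, 30.0], dims="points")
target_lon = xr.DataArray([1.0, 3.0], dims="points")

values = grid.sel(latitude=target_lat, longitude=target_lon, method="nearest")
print(values.values)  # [ 1. 11.]
```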
# # Static predictors
# ## Climatic regions (categorical)
# +
"""Load climatic regions data into an xarray dataset"""
cr_data = xr.open_dataset("/data1/raw_data/Beck_KG_V1_present_0p0083.gridName0320.nc")
# Rotate longitude coordinates
cr_data = cr_data.assign_coords(
longitude=(((cr_data.longitude + 180) % 360) - 180)
).sortby("longitude")
cr_data = cr_data.climatic_region
cr_data
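# The `assign_coords` call above maps 0-360 degree longitudes onto the -180 to 180 convention used by the other grids; a quick numpy check of the formula:

```python
import numpy as np

# 0-360 degree longitudes, as found in several of the source grids
lon = np.array([0.0, 90.0, 179.0, 180.0, 270.0, 359.0])

# The rotation used throughout this notebook: wrap onto [-180, 180)
rotated = ((lon + 180) % 360) - 180
print(rotated)  # [   0.   90.  179. -180.  -90.   -1.]
```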
# +
if use_interpolation:
    # Interpolate to match load resolution
    cr_data = cr_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes and longitudes
df["climatic_region"] = cr_data.sel(longitude=target_lon,
latitude=target_lat,
method="nearest")
# -
# The climatic regions are 5 categories (1 to 5); the floating-point values suggest they may have been pre-processed incorrectly at an earlier stage. Let's fix this by rounding to the nearest integer. Conversion to a categorical type should be done just before modelling (as it is lost when saving the data to csv).
df["climatic_region"] = df["climatic_region"].round()
df
# ## Biomes (categorical)
"""Load slope data into an xarray dataset"""
biomes_data = xr.open_dataset("/data1/downloaded/landcover_25.nc")
# Convert to data array and select the only time step available
biomes_data = biomes_data.STRF[0]
biomes_data
# +
# No need to interpolate, spatial resolution already matching the load
# Retrieve data at the grid cells nearest to the target latitudes and longitudes
df["biome"] = biomes_data.sel(longitude=target_lon,
latitude=target_lat,
method="nearest")
df
# -
# ## GFED basis region (categorical)
# +
"""Load GFED basis region into an xarray dataset"""
gfedregions_data = xr.open_mfdataset("/data1/downloaded/GFED4_basis_regions_id.nc")
gfedregions_data = gfedregions_data.layer
gfedregions_data
# +
# No need to interpolate, spatial resolution already matching the load
# Retrieve data at the grid cells nearest to the target latitudes and longitudes
df["GFEDregions"] = gfedregions_data.sel(longitude=target_lon,
latitude=target_lat,
method="nearest")
df
# -
# ## Slope
# +
"""Load slope data into an xarray dataset"""
slope_data = xr.open_mfdataset("/data1/raw_data/slope_O320.nc")
slope_data = slope_data.slor
# Rotate longitude coordinates
slope_data = slope_data.assign_coords(
longitude=(((slope_data.longitude + 180) % 360) - 180)
).sortby("longitude")
slope_data
# -
slope_data.plot()
# +
if use_interpolation:
    # Interpolate to match load resolution
    slope_data = slope_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes and longitudes
df["slope"] = slope_data.sel(longitude=target_lon,
latitude=target_lat,
method="nearest")
df
# -
# # Dynamic predictors
# ## Vegetation Optical Depth (VOD)
# +
"""Load VOD data into an xarray dataset"""
vodfiles = [
os.path.join(d, x)
for year in range(2010, 2017)
for d, dirs, files in os.walk("/data1/downloaded/ESA_VOD/" + str(year))
for x in files
if x.endswith(".nc")
]
vod_data = xr.open_mfdataset(vodfiles)
vod_data = vod_data.SM_IDW
# Calculate monthly means
vod_data = vod_data.resample(time="1MS").mean(dim="time")
vod_data
# -
# Please note this data is limited between 2010-04-01 and 2016-12-31!
# +
if use_interpolation:
    # Interpolate to match load resolution
    vod_data = vod_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["vod"] = vod_data.sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Leaf Area Index (LAI)
#
# According to [this wiki](https://wiki.landscapetoolbox.org/doku.php/remote_sensing_methods:leaf-area_index), remote sensing LAI methods generate a map of dimensionless LAI values assigned to each pixel. Values can range from 0 (bare ground) to 6 or more, but since rangeland vegetation is generally sparse, values commonly range from 0-1. A LAI value of 1 means that there is the equivalent of 1 layer of leaves that entirely cover a unit of ground surface area, and less than one means that there is some bare ground between vegetated patches. LAI values over 1 indicate a layered canopy with multiple layers of leaves per unit ground surface area. LAI and fPAR data are commonly packaged together (e.g., MODIS products).
"""Load LAI data into an xarray dataset"""
lai_data = xr.open_mfdataset(
"/data1/raw_data/LAI_interpolated_2010_2017/LAI_201[0-6]*.nc"
)
lai_data = lai_data.LAI
# Rename lat/lon dimensions
lai_data = lai_data.rename({"lon": "longitude", "lat": "latitude"})
# Calculate monthly means
lai_data = lai_data.resample(time="1MS").mean(dim="time")
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
lai_data = lai_data.loc["2010-04-01":"2016-12-31"]
lai_data
# +
if use_interpolation:
    # Interpolate to match load resolution
    lai_data = lai_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["lai"] = lai_data.sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Standardised Precipitation Index
# +
"""Load SPI data into an xarray dataset"""
spi_data = xr.open_mfdataset("/data1/raw_data/SPI_GPCC/output_201[0-6]*.nc")
# Rotate longitude coordinates
spi_data = spi_data.assign_coords(
longitude=(((spi_data.longitude + 180) % 360) - 180)
).sortby("longitude")
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
spi_data = spi_data.loc[dict(time=slice("2010-04-01", "2016-12-31"))]
# Fix time stamps
spi_data["time"] = dm_data["time"]
spi_data
# +
if use_interpolation:
    # Interpolate to match load resolution
    spi_data = spi_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["spi03"] = spi_data["spi03"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["spi06"] = spi_data["spi06"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["spi12"] = spi_data["spi12"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Weather Anomalies
# +
"""Load Weather data into an xarray dataset"""
wa_data = xr.open_mfdataset("/data1/raw_data/SEAS5_anomalies/S5_anomaly_201[0-6]*.nc")
# Rotate longitude coordinates
wa_data = wa_data.assign_coords(
longitude=(((wa_data.longitude + 180) % 360) - 180)
).sortby("longitude")
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
wa_data = wa_data.loc[dict(time=slice("2010-04-01", "2016-12-31"))]
wa_data
# +
if use_interpolation:
    # Interpolate to match load resolution
    wa_data = wa_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["d2m"] = wa_data["d2m"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["erate"] = wa_data["erate"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["fg10"] = wa_data["fg10"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["si10"] = wa_data["si10"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["swvl1"] = wa_data["swvl1"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["t2m"] = wa_data["t2m"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["tprate"] = wa_data["tprate"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Fire anomalies
# +
"""Load fire data into an xarray dataset"""
fa_data = xr.open_mfdataset(
"/data1/raw_data/SEAS_FIRE_ANOMALIES_2010_2018/ECMWF_FWI_201[0-6]*_anomaly_m1.nc"
)
# Rename lat/lon dimensions
fa_data = fa_data.rename({"lon": "longitude", "lat": "latitude"})
# Rotate longitude coordinates
fa_data = fa_data.assign_coords(
longitude=(((fa_data.longitude + 180) % 360) - 180)
).sortby("longitude")
# One of the predictors (VOD) is available from April 2010 to December 2016.
# Therefore here we remove Jan-Feb-Mar 2010.
fa_data = fa_data.loc[dict(time=slice("2010-04-01", "2016-12-31"))]
# Fix time stamps
fa_data["time"] = dm_data["time"]
fa_data
# +
if use_interpolation:
    # Interpolate to match load resolution
    fa_data = fa_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["danger_risk"] = fa_data["danger_risk"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["fwinx"] = fa_data["fwinx"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["ffmcode"] = fa_data["ffmcode"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["dufmcode"] = fa_data["dufmcode"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["drtcode"] = fa_data["drtcode"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["infsinx"] = fa_data["infsinx"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["fbupinx"] = fa_data["fbupinx"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df["fdsrte"] = fa_data["fdsrte"].sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Fire Radiative Power (FRP)
# +
"""Load FRP data into an xarray dataset"""
frp_data = xr.open_dataset("/data1/downloaded/CAMS_frpfire/CAMS_daily_2010-04-01_2016-12-31_monavg.nc")
frp_data = frp_data.frpfire
# Fix time stamps
frp_data["time"] = dm_data["time"]
# Rotate longitude coordinates
frp_data = frp_data.assign_coords(
longitude=(((frp_data.longitude + 180) % 360) - 180)
).sortby("longitude")
frp_data
# +
if use_interpolation:
    # Interpolate to match load resolution
    frp_data = frp_data.interp(
        coords={
            "latitude": dm_data.latitude.values,
            "longitude": dm_data.longitude.values,
        },
        method="nearest",
    )
# Retrieve data at the grid cells nearest to the target latitudes, longitudes and time
df["frp"] = frp_data.sel(longitude=target_lon,
latitude=target_lat,
time=target_t,
method="nearest")
df
# -
# ## Feature engineering
# Check data types by column
df.dtypes
# Drop NAN and reset the index
df = df.dropna().reset_index(drop=True)
df
# Calculate time elapsed (in days) since 2010-01-01
def elapsed_days(dates):
    return (dates - datetime(2010, 1, 1)).dt.days
# Extract month and year from time, as well as other dateparts
df['daysElapsed'] = elapsed_days(df['time'])
df['timeYear'] = df['time'].dt.year
df['timeMonth'] = df['time'].dt.month
df = df.drop('time', axis = 1)
# Check data types by column
df.dtypes
# ## Save data frame with raw features
df.to_csv("~/ai-vegetation-fuel/data/inputs/full_dataset_interp" + str(use_interpolation) + ".csv", index=False)
# ### Splitting dataset into train/test sets and targets
#
# We want to make sure the train and test sets contain approximately the same proportion of biomes. To achieve that, we use stratified sampling.
train, test = train_test_split(df,
test_size=0.2,
stratify=df[["biome"]],
random_state=1)
train.to_csv("~/ai-vegetation-fuel/data/inputs/train_interp" + str(use_interpolation) + "_raw.csv")
test.to_csv("~/ai-vegetation-fuel/data/inputs/test_interp" + str(use_interpolation) + "_raw.csv")
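# As a sanity check on the stratified split, the per-class proportions should be (approximately) preserved in both partitions; a toy sketch with a hypothetical frame:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# 80/20 class imbalance in a hypothetical "biome" column
toy = pd.DataFrame({"biome": [1] * 80 + [2] * 20, "x": range(100)})
tr, te = train_test_split(toy, test_size=0.2, stratify=toy[["biome"]], random_state=1)

# Both partitions keep the original 0.8/0.2 proportions
print(tr["biome"].value_counts(normalize=True).round(2).to_dict())  # {1: 0.8, 2: 0.2}
print(te["biome"].value_counts(normalize=True).round(2).to_dict())  # {1: 0.8, 2: 0.2}
```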
| notebooks/ecmwf/data_preparation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="wmFMIzQszygx" executionInfo={"status": "ok", "timestamp": 1620806853382, "user_tz": -480, "elapsed": 1554, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="4f2760cd-219b-40c8-bec4-222b625b9d02"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 1.x
except Exception:
pass
# Load the TensorBoard notebook extension.
# %load_ext tensorboard
# + colab={"base_uri": "https://localhost:8080/"} id="ClTIV_90z4VS" executionInfo={"status": "ok", "timestamp": 1620806860335, "user_tz": -480, "elapsed": 7416, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="ec36885f-0540-48e6-e423-43522eff0477"
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if not tf.test.is_gpu_available():
    print('GPU device not found')
else:
    print('Found GPU at: {}'.format(device_name))
# + colab={"base_uri": "https://localhost:8080/"} id="gs6b-Pcy0W9J" executionInfo={"status": "ok", "timestamp": 1620806904792, "user_tz": -480, "elapsed": 50532, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="6442f509-01bf-4b0b-fb6f-83d3f6f51dbf"
from google.colab import drive
drive.mount('/content/drive/')
# + id="f8uWZbDc0ZiR" executionInfo={"status": "ok", "timestamp": 1620806904793, "user_tz": -480, "elapsed": 49401, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}}
# Set the current working directory
import os
# This is a path inside Google Drive; it must include the drive root mounted above
os.chdir("/content/drive/MyDrive/毕设/")
# + [markdown] id="5BOrAmif0886"
# ## Load the data
# + id="pIS0mkgl07vS" executionInfo={"status": "ok", "timestamp": 1620806908589, "user_tz": -480, "elapsed": 1817, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}}
# Load the general-purpose stopword list
def getStopwords(path):
    stopwords = []
    with open(path, "r", encoding='utf8') as f:
        lines = f.readlines()
        for line in lines:
            stopwords.append(line.strip())
    return stopwords
stop_words_path = "./data/baidu_stopwords.txt"
stopwords = getStopwords(stop_words_path)
# + id="3kSxeYmV0xup" executionInfo={"status": "ok", "timestamp": 1620806909059, "user_tz": -480, "elapsed": 1340, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}}
import jieba
from tqdm import tqdm
# Read the raw dataset, tokenize and pre-process it, and save the vocabulary
def read_toutiao_dataset(data_path, save_vocab_path):
    with open(data_path, "r", encoding="utf8") as fo:
        all_lines = fo.readlines()
    datas, labels = [], []
    word_vocabs = {}
    for line in tqdm(all_lines):
        content_words = []
        category, content = line.strip().split("_!_")
        for term in jieba.lcut(content):
            # Skip stopwords and empty tokens, but keep repeated words in the content
            if term in stopwords or term in (' ', ''):
                continue
            if term not in word_vocabs:
                word_vocabs[term] = len(word_vocabs) + 1
            content_words.append(term)
        datas.append(content_words)
        labels.append(category)
    with open(save_vocab_path, "w", encoding="utf8") as fw:
        for word, index in word_vocabs.items():
            fw.write(word + "\n")
    return datas, labels
# Read the vocabulary and build word-index mappings; special_words = ['<PAD>', '<UNK>']
def read_word_vocabs(save_vocab_path, special_words):
    with open(save_vocab_path, "r", encoding="utf8") as fo:
        word_vocabs = [word.strip() for word in fo]
    word_vocabs = special_words + word_vocabs
    idx2vocab = {idx: char for idx, char in enumerate(word_vocabs)}
    vocab2idx = {char: idx for idx, char in idx2vocab.items()}
    return idx2vocab, vocab2idx
# Convert the pre-processed data into word-id sequences
def process_dataset(datas, labels, category2idx, vocab2idx):
    new_datas, new_labels = [], []
    for data, label in zip(datas, labels):
        index_data = [vocab2idx[word] if word in vocab2idx else vocab2idx['<UNK>'] for word in data]
        index_label = category2idx[label]
        new_datas.append(index_data)
        new_labels.append(index_label)
    return new_datas, new_labels
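# A toy run of the indexing step (hypothetical mini vocabulary and categories), showing the `<UNK>` fallback for out-of-vocabulary words:

```python
# Hypothetical mini mappings, standing in for the ones built from the real dataset
category2idx = {"sports": 0, "tech": 1}
vocab2idx = {'<PAD>': 0, '<UNK>': 1, 'game': 2, 'phone': 3}

def process_dataset(datas, labels, category2idx, vocab2idx):
    new_datas, new_labels = [], []
    for data, label in zip(datas, labels):
        # Unknown words map to the <UNK> index
        index_data = [vocab2idx[word] if word in vocab2idx else vocab2idx['<UNK>'] for word in data]
        new_datas.append(index_data)
        new_labels.append(category2idx[label])
    return new_datas, new_labels

datas = [["game", "tonight"], ["phone", "game"]]
labels = ["sports", "tech"]
print(process_dataset(datas, labels, category2idx, vocab2idx))
# → ([[2, 1], [3, 2]], [0, 1])
```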
# + colab={"base_uri": "https://localhost:8080/"} id="FgdR4jGn00Sa" executionInfo={"status": "ok", "timestamp": 1620807074803, "user_tz": -480, "elapsed": 164379, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="7c00b12c-8060-4596-cfde-50b6f68d0b37"
data_path = './data/toutiao_news_dataset.txt'
save_vocab_path = './data/word_vocabs.txt'
special_words = ['<PAD>', '<UNK>']
category_lists = ["民生故事","文化","娱乐","体育","财经","房产","汽车","教育","科技","军事",
"旅游","国际","证券股票","农业","电竞游戏"]
category2idx = {cate: idx for idx, cate in enumerate(category_lists)}
idx2category = {idx: cate for idx, cate in enumerate(category_lists)}
datas, labels = read_toutiao_dataset(data_path, save_vocab_path)
idx2vocab, vocab2idx = read_word_vocabs(save_vocab_path, special_words)
all_datas, all_labels = process_dataset(datas, labels, category2idx, vocab2idx)
# + colab={"base_uri": "https://localhost:8080/"} id="HXYJoOmbEPCS" executionInfo={"status": "ok", "timestamp": 1620195654078, "user_tz": -480, "elapsed": 741, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="17f7a9b4-968e-4d52-968f-f75f1387f43a"
print(datas[0], labels[0])
print(idx2vocab[2], idx2vocab[3])
print(idx2vocab[4], idx2vocab[1])
print(idx2vocab[6], idx2vocab[7])
print(vocab2idx['<PAD>'], vocab2idx['<UNK>'], vocab2idx['来场'])
print(idx2category[0], idx2category[1], idx2category[2])
# + [markdown] id="LTv-lVCbD9OQ"
# ## Build the model
# BiLSTM+Attention
# ---
# Keras implementation
# + colab={"base_uri": "https://localhost:8080/"} id="FdNAvt_w1KnL" executionInfo={"status": "ok", "timestamp": 1620808149813, "user_tz": -480, "elapsed": 3965, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="d31b97a6-9e7b-4a78-8841-a0c7b2c4b3ad"
import numpy
import keras
from keras import backend as K
from keras import activations
from keras.engine.topology import Layer
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.models import Model
from keras.layers import Input, Dense, Embedding, LSTM, Bidirectional, Dropout
K.clear_session()
class AttentionLayer(Layer):
    def __init__(self, attention_size=None, **kwargs):
        self.attention_size = attention_size
        super(AttentionLayer, self).__init__(**kwargs)

    def get_config(self):
        config = super().get_config()
        config['attention_size'] = self.attention_size
        return config

    def build(self, input_shape):
        assert len(input_shape) == 3
        self.time_steps = input_shape[1]
        hidden_size = input_shape[2]
        if self.attention_size is None:
            self.attention_size = hidden_size
        self.W = self.add_weight(name='att_weight', shape=(hidden_size, self.attention_size),
                                 initializer='uniform', trainable=True)
        self.b = self.add_weight(name='att_bias', shape=(self.attention_size,),
                                 initializer='uniform', trainable=True)
        self.V = self.add_weight(name='att_var', shape=(self.attention_size,),
                                 initializer='uniform', trainable=True)
        super(AttentionLayer, self).build(input_shape)

    def call(self, inputs):
        # Reshape to a column vector locally instead of overwriting the weight on every call
        V = K.reshape(self.V, (-1, 1))
        H = K.tanh(K.dot(inputs, self.W) + self.b)
        score = K.softmax(K.dot(H, V), axis=1)
        outputs = K.sum(score * inputs, axis=1)
        return outputs

    def compute_output_shape(self, input_shape):
        return input_shape[0], input_shape[2]

def create_classify_model(max_len, vocab_size, embedding_size, hidden_size, attention_size, class_nums):
    inputs = Input(shape=(max_len,), dtype='int32')
    x = Embedding(vocab_size, embedding_size)(inputs)
    x = Bidirectional(LSTM(hidden_size, dropout=0.2, return_sequences=True))(x)
    x = Dropout(0.5)(x)
    x = AttentionLayer(attention_size=attention_size)(x)
    outputs = Dense(class_nums, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=outputs)
    model.summary()  # print the model architecture and parameter counts
    return model
MAX_LEN = 30
VOCAB_SIZE = len(vocab2idx)
EMBEDDING_SIZE = 100
HIDDEN_SIZE = 64
ATT_SIZE = 50
CLASS_NUMS = len(category2idx)
BATCH_SIZE = 64
EPOCHS = 20
# all_datas = all_datas[:10000]
# all_labels = all_labels[:10000]
count = len(all_labels)
rate1, rate2 = 0.8, 0.9 # train-0.8, test-0.1, dev-0.1
# padding the data
new_datas = sequence.pad_sequences(all_datas, maxlen=MAX_LEN)
new_labels = keras.utils.to_categorical(all_labels, CLASS_NUMS)
# split all data to train, test and dev
x_train, y_train = new_datas[:int(count*rate1)], new_labels[:int(count*rate1)]
x_test, y_test = new_datas[int(count*rate1):int(count*rate2)], new_labels[int(count*rate1):int(count*rate2)]
x_dev, y_dev = new_datas[int(count*rate2):], new_labels[int(count*rate2):]
print(len(all_datas))
print(x_train[0])
print(new_labels[0])
print(new_labels[20])
# + colab={"base_uri": "https://localhost:8080/"} id="SkcUFS7nsg11" executionInfo={"status": "ok", "timestamp": 1620808189774, "user_tz": -480, "elapsed": 1260, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="c33a8193-54cc-4a10-be9e-d4e7764d1d7e"
print(VOCAB_SIZE)
# + id="Ln4sT1c-LXaU"
import numpy as np
from sklearn.metrics import f1_score, precision_score, recall_score

class Metrics(tf.keras.callbacks.Callback):
    def __init__(self, valid_data):
        super(Metrics, self).__init__()
        self.validation_data = valid_data

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        val_predict = np.argmax(self.model.predict(self.validation_data[0]), -1)
        val_targ = self.validation_data[1]
        if len(val_targ.shape) == 2 and val_targ.shape[1] != 1:
            val_targ = np.argmax(val_targ, -1)
        _val_f1 = f1_score(val_targ, val_predict, average='macro')
        _val_recall = recall_score(val_targ, val_predict, average='macro')
        _val_precision = precision_score(val_targ, val_predict, average='macro')
        logs['val_f1'] = _val_f1
        logs['val_recall'] = _val_recall
        logs['val_precision'] = _val_precision
        print(" — val_f1: %f — val_precision: %f — val_recall: %f" % (_val_f1, _val_precision, _val_recall))
# + id="Ds0qQx0TSAyo"
# Configure learning-rate reduction on plateau
from keras.callbacks import ReduceLROnPlateau
reduce_lr = ReduceLROnPlateau(monitor='val_loss', patience=10, mode='auto')
# + colab={"base_uri": "https://localhost:8080/"} id="22KUGgzq1Mnp" executionInfo={"status": "ok", "timestamp": 1620808252180, "user_tz": -480, "elapsed": 9611, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="3d150736-f74e-426e-d0a4-a2bec62982ca"
# create model
from keras.callbacks import TensorBoard
import tensorflow.keras.backend as K
from keras.models import load_model
tbCallBack = TensorBoard(log_dir='./model/log',  # log directory
                         histogram_freq=0,  # frequency (in epochs) for computing histograms; 0 disables them
                         # batch_size=32,  # amount of data used to compute the histograms
                         write_graph=True,  # whether to store the network graph
                         write_grads=True,  # whether to visualize gradient histograms
                         write_images=True,  # whether to visualize model weights as images
                         embeddings_freq=0,
                         embeddings_layer_names=None,
                         embeddings_metadata=None)
model_path = "./model/news_classify_model.h5"
# ATT_SIZE = 50
# if os.path.exists('./model/news_classify_model.h5'):
#     print('Loading saved model...')
#     model = load_model(model_path, custom_objects={'AttentionLayer': AttentionLayer(ATT_SIZE)}, compile=False)
# else:
#     model = create_classify_model(MAX_LEN, VOCAB_SIZE, EMBEDDING_SIZE, HIDDEN_SIZE, ATT_SIZE, CLASS_NUMS)
model = create_classify_model(MAX_LEN, VOCAB_SIZE, EMBEDDING_SIZE, HIDDEN_SIZE, ATT_SIZE, CLASS_NUMS)
# loss and optimizer
# the default learning rate is 0.001
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print("Default learning rate:", K.get_value(model.optimizer.lr))
# train model
# model.fit(x_train, y_train, batch_size=BATCH_SIZE, epochs=10, validation_data=(x_test, y_test),callbacks=[tbCallBack,Metrics(valid_data=(x_test, y_test))])
# save model
# model.save("./model/news_classify_model.h5")
# + colab={"base_uri": "https://localhost:8080/"} id="J2BJH91oFG2g" executionInfo={"status": "ok", "timestamp": 1620204205963, "user_tz": -480, "elapsed": 14398, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="c4ce646e-39ae-49c4-8a8c-569dcb5cfad2"
score, acc = model.evaluate(x_dev, y_dev, batch_size=BATCH_SIZE)
# model.optimizer.lr
print('Test score:', score)
print('Test accuracy:', acc)
# save model
# model.save("./model/news_classify_model.h5")
# + [markdown] id="XfoiGAiWd6qu"
# ## Model prediction
# + id="NgCcoXg1d8pa"
import jieba
from tqdm import tqdm
# 读取原始数据集分词预处理 并保存词典
def read_toutiao_dataset(data_path, save_vocab_path):
with open(data_path, "r", encoding="utf8") as fo:
all_lines = fo.readlines()
datas, labels = [], []
word_vocabs = {}
for line in tqdm(all_lines):
content_words = []
category, content = line.strip().split("_!_")
for term in jieba.lcut(content):
if term not in stopwords and term not in word_vocabs and term != ' ' and term != '':
word_vocabs[term] = len(word_vocabs)+1
content_words.append(term)
datas.append(content_words)
labels.append(category)
with open(save_vocab_path, "w", encoding="utf8") as fw:
for word, index in word_vocabs.items():
fw.write(word+"\n")
return datas, labels
# 读取词典 生成词-索引对应关系, 其中special_words = ['<PAD>', '<UNK>']
def read_word_vocabs(save_vocab_path, special_words):
with open(save_vocab_path, "r", encoding="utf8") as fo:
word_vocabs = [word.strip() for word in fo]
word_vocabs = special_words + word_vocabs
idx2vocab = {idx: char for idx, char in enumerate(word_vocabs)}
vocab2idx = {char: idx for idx, char in idx2vocab.items()}
return idx2vocab, vocab2idx
# 把预处理过的数据索引化 即变成词编号序列
def process_dataset(datas, labels, category2idx, vocab2idx):
new_datas, new_labels = [], []
for data, label in zip(datas, labels):
index_data = [vocab2idx[word] if word in vocab2idx else vocab2idx['<UNK>'] for word in data]
index_label = category2idx[label]
new_datas.append(index_data)
new_labels.append(index_label)
return new_datas, new_labels
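The indexing step in `process_dataset` is easy to get wrong, so here is a tiny self-contained sketch of how `vocab2idx` maps a sentence (the example words are chosen for illustration only):

```python
# Known words map to their vocabulary index, unknown words to the <UNK> index.
special_words = ['<PAD>', '<UNK>']
word_vocabs = special_words + ['股票', '上涨']
vocab2idx = {word: idx for idx, word in enumerate(word_vocabs)}

sentences = [['股票', '大幅', '上涨']]  # '大幅' is out-of-vocabulary here
indexed = [[vocab2idx.get(w, vocab2idx['<UNK>']) for w in sent] for sent in sentences]
print(indexed)  # [[2, 1, 3]]
```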
# + id="Kj7FR1qteEvi"
import numpy
import keras
from keras import backend as K
from keras import activations
from keras.engine.topology import Layer  # in Keras >= 2.4 this moved to: from keras.layers import Layer
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.models import Model
from keras.layers import Input, Dense, Embedding, LSTM, Bidirectional, Dropout
K.clear_session()
class AttentionLayer(Layer):
def __init__(self, attention_size=None, **kwargs):
self.attention_size = attention_size
super(AttentionLayer, self).__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['attention_size'] = self.attention_size
return config
def build(self, input_shape):
assert len(input_shape) == 3
self.time_steps = input_shape[1]
hidden_size = input_shape[2]
if self.attention_size is None:
self.attention_size = hidden_size
self.W = self.add_weight(name='att_weight', shape=(hidden_size, self.attention_size),
initializer='uniform', trainable=True)
self.b = self.add_weight(name='att_bias', shape=(self.attention_size,),
initializer='uniform', trainable=True)
self.V = self.add_weight(name='att_var', shape=(self.attention_size,),
initializer='uniform', trainable=True)
super(AttentionLayer, self).build(input_shape)
    def call(self, inputs):
        V = K.reshape(self.V, (-1, 1))  # reshape into a local variable; don't overwrite the weight itself
        H = K.tanh(K.dot(inputs, self.W) + self.b)
        score = K.softmax(K.dot(H, V), axis=1)
        outputs = K.sum(score * inputs, axis=1)
        return outputs
def compute_output_shape(self, input_shape):
return input_shape[0], input_shape[2]
def create_classify_model(max_len, vocab_size, embedding_size, hidden_size, attention_size, class_nums):
inputs = Input(shape=(max_len,), dtype='int32')
x = Embedding(vocab_size, embedding_size)(inputs)
x = Bidirectional(LSTM(hidden_size, dropout=0.2, return_sequences=True))(x)
x = Dropout(0.5)(x)
x = AttentionLayer(attention_size=attention_size)(x)
outputs = Dense(class_nums, activation='softmax')(x)
model = Model(inputs=inputs, outputs=outputs)
    model.summary()  # print the model architecture and parameter counts
return model
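The attention math inside `AttentionLayer.call` can be mirrored in plain NumPy for a single sequence. This is only a sketch of the same equations (H = tanh(XW + b), score = softmax(HV) over time, output = Σ score·x), not the Keras layer itself; all shapes and values below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, A = 5, 4, 3                      # time steps, BiLSTM output size, attention size
X = rng.normal(size=(T, D))            # one sequence of BiLSTM outputs
W = rng.normal(size=(D, A))
b = np.zeros(A)
V = rng.normal(size=(A, 1))

H = np.tanh(X @ W + b)                 # (T, A)
logits = (H @ V).ravel()               # (T,) one score per time step
score = np.exp(logits) / np.exp(logits).sum()   # softmax over time
output = (score[:, None] * X).sum(axis=0)       # (D,) attention-weighted sum of inputs
print(output.shape, round(float(score.sum()), 6))  # (4,) 1.0
```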
# + colab={"base_uri": "https://localhost:8080/"} id="obVE8F_51VNC" executionInfo={"status": "ok", "timestamp": 1620204424809, "user_tz": -480, "elapsed": 3277, "user": {"displayName": "\u674e\u4e1c\u96f7", "photoUrl": "", "userId": "16889062735539840397"}} outputId="6bf3901f-ad6e-4c57-9674-0cc7ef9fd5d8"
from keras.models import load_model
import numpy as np
import jieba
np.set_printoptions(suppress=True)
save_vocab_path = "./data/word_vocabs.txt"
model_path = "./model/news_classify_model.h5"
special_words = ['<PAD>', '<UNK>']
category_lists = ["民生故事","文化","娱乐","体育","财经","房产","汽车","教育","科技","军事",
"旅游","国际","证券股票","农业","电竞游戏"]
maxlen = 30
ATT_SIZE = 50
category2idx = {cate: idx for idx, cate in enumerate(category_lists)}
idx2category = {idx: cate for idx, cate in enumerate(category_lists)}
idx2vocab, vocab2idx = read_word_vocabs(save_vocab_path, special_words)
content = "下周一(5.7日)手上持有这些股的要小心"
content_words = [term for term in jieba.lcut(content)]
sent2id = [vocab2idx[word] if word in vocab2idx else vocab2idx['<UNK>'] for word in content_words]
sent2id_new = np.array([sent2id[:maxlen] + [0] * (maxlen-len(sent2id))])
model = load_model(model_path, custom_objects={'AttentionLayer': AttentionLayer(ATT_SIZE)}, compile=False)
y_pred = model.predict(sent2id_new)
print(y_pred)
result = {}
for idx, pred in enumerate(y_pred[0]):
result[idx2category[idx]] = pred
result_sorted = sorted(result.items(), key=lambda item: item[1], reverse=True)
print(result_sorted)
# y_label = np.argmax(y_pred[0])
# print(y_label, idx2category[y_label])
# category_lists = ["民生故事","文化","娱乐","体育","财经","房产","汽车","教育","科技","军事",
# "旅游","国际","证券股票","农业","电竞游戏"]
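The slice-and-pad expression used to build `sent2id_new` above generalizes to this small helper (`pad_to` is not part of the original code, just an illustration of the same logic):

```python
# Pad with 0 (<PAD>) up to maxlen; sequences longer than maxlen are truncated.
def pad_to(seq, maxlen, pad=0):
    return seq[:maxlen] + [pad] * (maxlen - len(seq))

print(pad_to([5, 6, 7], 5))       # [5, 6, 7, 0, 0]
print(pad_to(list(range(8)), 5))  # [0, 1, 2, 3, 4]
```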
| code/bilstm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# API documentation: https://github.com/GeneralMills/pytrends
#
# ### Open issues
# Google request limits:
# - generate a random string to pass as the custom_useragent parameter
# https://github.com/GeneralMills/pytrends/issues/77
# - on error, fall back to another Gmail account
#
# JSON format:
# - which formats can chart.js or d3.js read?
# http://stackoverflow.com/questions/24929931/drawing-line-chart-in-chart-js-with-json-response
#
# More than 5 searches at once:
# - see the open issues on pytrends
# https://github.com/GeneralMills/pytrends/issues/77
#
# Handle timestamps for data extracts that carry no hour or minute
#
# ### Search trends
# - 'Élection présidentielle française de 2017': '/m/0n49cn3' (election)
#
# #### Codes associated with the candidates:
# - <NAME>: '/m/047drb0' (former French Prime Minister)
# - <NAME>: '/m/0fqmlm' (former Minister of Ecology, Sustainable Development and Energy)
# - <NAME>: '/m/0551nw'
# - <NAME>: '/m/0551vp' (politician)
# - <NAME>: '/m/02y2cb' (politician)
# - <NAME>: '/m/02qg4z' (President of the French Republic)
# - <NAME>: '/m/04zzm99' (politician)
# - <NAME>: '/m/05zztc0' (ecologist)
# - <NAME> (Debout la France): '/m/0f6b18'
# - <NAME> (independent): '/m/061czc' (former French Minister of the Interior)
# - <NAME> (LO): not available
# - <NAME> (NPA): '/m/0gxyxxy'
# - <NAME>: '/m/011ncr8c' (former Minister of Economy and Finance)
# - <NAME>: '/m/047fzn'
# - <NAME>: '/m/02rdgs'
#
# #### Codes associated with the parties:
# - LR: '/g/11b7n_r2jq'
# - PS: '/m/01qdcv'
# - FN: '/m/0hp7g'
# - EELV: '/m/0h7nzzw'
# - FI (France Insoumise): 'France Insoumise' (no dedicated topic)
# - PCF: '/m/01v8x4'
# - Debout la France: '/m/02rwc3q'
# - MoDem: '/m/02qt5xp' (Mouvement Démocrate)
# - Lutte Ouvrière: '/m/01vvcv'
# - Nouveau Parti Anticapitaliste: '/m/04glk_t'
# - En marche: 'En Marche' (no dedicated topic)
#
# #### Codes for campaign themes:
# - Work: /g/122h6md7
# - Education: /m/02jfc
# - Finance: /m/02_7t
# - Ecology: /m/02mgp
# - Health: /m/0kt51
# - Religion: /m/06bvp
# - Protest: /m/0gnwz4
# - European Union: /m/0_6t_z8
#
# ### Possible searches:
# - 'candidats_A': '/m/047drb0, /m/04zzm99, /m/02rdgs, /m/011ncr8c, /m/0fqmlm'
# - 'partis_A': '/g/11b7n_r2jq, /m/01qdcv, /m/0hp7g, /m/0h7nzzw'
# - 'divers_gauche': 'France Insoumise, /m/01vvcv, /m/04glk_t, /m/01v8x4'
#
# ### Possible periods:
# - '3d': 'now 3-d'
#
# +
# #!/usr/bin/python
# coding: utf-8
from __future__ import unicode_literals
from trendsAPI import *  # unofficial API
import json
import pandas as pd
from datetime import datetime, timedelta
import re
import sys
from numpy.random import rand
from numpy import sign
import time
def convert_date_column(dataframe):  # convert the date format to a short string
dates = []
rdict = {',': '', ' PST': '', ' à': '', 'janv.': '01', 'févr.': '02', 'mars': '03', 'avr.': '04',
'mai': '05', 'juin': '06', 'juil.': '07', 'août': '08', 'sept.': '09', 'oct.': '10',
'nov.': '11', 'déc.': '12'}
robj = re.compile('|'.join(rdict.keys()))
rdict2 = {' 01 ': ' janv. ', ' 02 ': ' févr. ', ' 03 ': ' mars ', ' 04 ': ' avr. ', ' 05 ': ' mai ',
' 06 ': ' juin ', ' 07 ': ' juil. ', ' 08 ': ' août ', ' 09 ': ' sept. ', ' 10 ': ' oct. ',
' 11 ': ' nov. ', ' 12 ': ' déc. '}
robj2 = re.compile('|'.join(rdict2.keys()))
    if 'PST' in dataframe['Date'][0]:  # English date format with a time component
        in_format = '%b %d %Y %H:%M'  # e.g. Jan 18 2017 12:00
        for date in dataframe['Date']:  # convert to a timestamp in the GMT+1 timezone
            t = datetime.strptime(robj.sub(lambda m: rdict[m.group(0)], date), in_format) + timedelta(hours=9)
            t = datetime.strftime(t, '%d %m %H:%M')  # back to a string like: 18 01 12:00
            dates.append(robj2.sub(lambda m: rdict2[m.group(0)], t))  # spell the months out again
    elif 'UTC' in dataframe['Date'][0]:  # French date format with a time component
        in_format = '%d %m %Y %H:%M'  # e.g. 18 01 2017 12:00
        for date in dataframe['Date']:  # convert to a timestamp in the GMT+1 timezone
            t = datetime.strptime(robj.sub(lambda m: rdict[m.group(0)], date[0:-6]), in_format) + timedelta(hours=9)
            t = datetime.strftime(t, '%d %m %H:%M')  # back to a string like: 18 01 12:00
            dates.append(robj2.sub(lambda m: rdict2[m.group(0)], t))  # spell the months out again
    else:  # dates without a time component (i.e. searches covering more than one month)
        rdict = {', ': ' ', 'janvier': 'janv.', 'février': 'févr.', 'avril': 'avr.', 'juillet': 'juil.',
                 'septembre': 'sept.', 'octobre': 'oct.', 'novembre': 'nov.', 'décembre': 'déc.'}
        robj = re.compile('|'.join(rdict.keys()))
        for date in dataframe['Date']:
            t = robj.sub(lambda m: rdict[m.group(0)], date)
            dates.append(' '.join(t.split(' ')[1:-1]))
dataframe['Date'] = dates
return
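The month-name substitution in `convert_date_column` boils down to a single `re.sub` driven by a dictionary. Here is a minimal stand-alone version; note it escapes the keys so the '.' in 'janv.' is matched literally, which the original pattern (built with a bare `'|'.join`) does not do:

```python
import re

rdict = {'janv.': '01', 'févr.': '02'}
robj = re.compile('|'.join(map(re.escape, rdict.keys())))
print(robj.sub(lambda m: rdict[m.group(0)], '18 janv. 2017'))  # 18 01 2017
```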
def trends_to_json(queries='candidats_A', periodes='3d'):
    """
    Download Google Trends data as JSON for the given parameters.
    Both parameters must belong to the searches preconfigured in the <queries>
    and <periodes> dictionaries.
    If no parameter is given, the function sweeps every combination of the
    preconfigured queries and periods.
    """
    # Search terms (at most 5, comma-separated)
    # Map each search type to the list of corresponding parameters
all_queries = {'candidats_A': '/m/047drb0, /m/04zzm99, /m/02rdgs, /m/011ncr8c, /m/0fqmlm'}
all_periodes = {'3d': 'now 3-d', '7d': 'now 7-d'}
queries, periodes = set(queries.replace(', ', ',').split(',')), set(periodes.replace(', ', ',').split(','))
success = []
    # email address and associated password
users = {'<EMAIL>': 'projet_fil_rouge', '<EMAIL>': 'pytrends_2'}
    for user in list(users.keys())[::int(sign(rand(1) * 2 - 1))]:
        # 50% chance of walking the list of Gmail addresses from the end
        try:
            # Sign in to Google (using a real Gmail address allows more requests)
            pytrend = TrendReq(user, users[user], custom_useragent=None)
            for q in queries & set(all_queries):  # elements common to both sets
                for p in periodes & set(all_periodes):
                    if (q, p) in success:  # this query already succeeded with another address, skip it
                        continue
                    else:
                        payload = {'q': all_queries[q], 'geo': 'FR', 'date': all_periodes[p], 'hl': 'fr-FR'}
                        # Custom periods are possible: specify two dates (e.g. 2015-01-01 2015-12-31)
                        # Geography: FR (all of France), FR-A or B or C... (French regions in alphabetical order)
                        # Politics category: cat = 396
                        df = pytrend.trend(payload, return_type='dataframe')
                        convert_date_column(df)  # converts date into a short string
                        df.set_index('Date', inplace=True)
                        # thin the dataframe down to about thirty points
                        # for graph readability
                        n = {'4h': 2, '1d': 4, '3d': 1, '7d': 6, '1m': 1, '3m': 2}
                        # n = 1  # to disable this downsampling
                        # Save as JSON
                        server_path = '/var/www/html/gtrends/data/'  # full path
                        # server_path = ''
                        df[(df.shape[0] - 1) % n[p]::n[p]].to_json(
                            server_path + q + '_' + p + '.json', orient='split')
                        print('Connected successfully with address: ' + user)
                        print('Saved to: ' + server_path + q + '_' + p + '.json')
                        success.append((q, p))  # remember the (q, p) pairs that succeeded
                        # space out requests to stay under the rate limit
                        time.sleep(10)
            return
        except RateLimitError:
            print('Request limit exceeded, trying another email address...')
        except ResponseError:
            print('Connection blocked, trying another email address...')
    print('Error while fetching the data.')
    return
# -
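The row-thinning slice `df[(df.shape[0] - 1) % n[p]::n[p]]` used in `trends_to_json` keeps every n-th row while always retaining the last one; the same idea on a plain list:

```python
rows = list(range(10))
n = 4
sub = rows[(len(rows) - 1) % n::n]  # start offset chosen so the last row always survives
print(sub)  # [1, 5, 9]
```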
# #### Function that saves the API queries as JSON
trends_to_json()
df_json = pd.read_json('candidats_A_3d.json', orient='split')
df_json
# ### Ad-hoc query
payload = {'q': '/m/047drb0, /m/0551nw',
'date': 'now 7-d', 'hl': 'fr-FR'}
pytrend = TrendReq('<EMAIL>', 'projet_fil_rouge', custom_useragent='python')
df = pytrend.trend(payload, return_type='dataframe')
convert_date_column(df)
df.set_index('Date', inplace=True)
df.to_json('Hamon_Valls_7jours.json', orient='split')
df
# ### Converting an Excel file downloaded from the Google Trends site
xls = pd.read_excel('themes_7j.xlsx')
xls.set_index('Date', inplace=True)
xls[::6].to_json('themes_7j.json', orient='split')
xls[::6]
| Gtrends/old/script_trends.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# # Finding local maxima
#
# The ``peak_local_max`` function returns the coordinates of local peaks (maxima)
# in an image. A maximum filter is used for finding local maxima. This operation
# dilates the original image and merges neighboring local maxima closer than the
# size of the dilation. Locations where the original image is equal to the
# dilated image are returned as local maxima.
#
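The dilation-equality idea described above can be checked in one dimension with a hand-rolled sliding-window maximum. This is only a sketch of the principle, not skimage code:

```python
def window_max(sig, size=3):
    """Grey-level dilation in 1-D: the max over a sliding window."""
    half = size // 2
    return [max(sig[max(0, i - half):i + half + 1]) for i in range(len(sig))]

sig = [0, 1, 0, 2, 5, 2, 0]
# a point is a local maximum where the dilated (window-max) signal equals the original
peaks = [i for i, (a, b) in enumerate(zip(sig, window_max(sig))) if a == b and a > 0]
print(peaks)  # [1, 4]
```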
# +
from scipy import ndimage as ndi
import matplotlib.pyplot as plt
from skimage.feature import peak_local_max
from skimage import data, img_as_float
im = img_as_float(data.coins())
# image_max is the dilation of im with a 20*20 structuring element
# It is used within peak_local_max function
image_max = ndi.maximum_filter(im, size=20, mode='constant')
# Comparison between image_max and im to find the coordinates of local maxima
coordinates = peak_local_max(im, min_distance=20)
# display results
fig, axes = plt.subplots(1, 3, figsize=(8, 3), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(im, cmap=plt.cm.gray)
ax[0].axis('off')
ax[0].set_title('Original')
ax[1].imshow(image_max, cmap=plt.cm.gray)
ax[1].axis('off')
ax[1].set_title('Maximum filter')
ax[2].imshow(im, cmap=plt.cm.gray)
ax[2].autoscale(False)
ax[2].plot(coordinates[:, 1], coordinates[:, 0], 'r.')
ax[2].axis('off')
ax[2].set_title('Peak local max')
fig.tight_layout()
plt.show()
| digital-image-processing/notebooks/segmentation/plot_peak_local_max.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 3 13:12:43 2015
write a text file for Blender visualization
x y z r
x y z r
x y z r
In this script top N massive halos
@author: hoseung
"""
import numpy as np
# +
def distance_to(xc, xx):
import numpy as np
return np.sqrt([(xc[0] - xx[0])**2 + (xc[1] - xx[1])**2 + (xc[2] - xx[2])**2])
def extract_halos_within(halos, i_center, scale=1.0):
import numpy as np
'''
Returns halos within SCALE * Rvir of the central halo.
def extract_halos_within(halos, ind_center, scale=1.0)
halos : halo finder output (single snapshot)
ind_center : index of central halo
scale : multiplying factor to the Rvir of the central halo
'''
xc = halos['p'][0][0][i_center]
yc = halos['p'][0][1][i_center]
zc = halos['p'][0][2][i_center]
rvir= halos['rvir'][0][i_center]
xx = halos['p'][0][0]
yy = halos['p'][0][1]
zz = halos['p'][0][2]
m = np.array(data['m'][0])
dd = distance_to([xc,yc,zc],[xx,yy,zz])
Mcut = 1e11
i_m = m > Mcut
i_ok = np.logical_and(dd < (rvir * scale), i_m)
return i_ok
# -
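The selection in `extract_halos_within` (distance inside `scale * rvir` combined with a mass cut) can be sanity-checked on made-up halos; every number below is hypothetical:

```python
import math

xc = (0.5, 0.5, 0.5)                  # central halo position
halos = [((0.5, 0.5, 0.5), 2e11),     # (position, mass) pairs
         ((0.9, 0.9, 0.9), 5e10)]
rvir, scale, mcut = 0.1, 1.0, 1e11
i_ok = [math.dist(xc, pos) < rvir * scale and m > mcut for pos, m in halos]
print(i_ok)  # [True, False]
```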
''' Cluster 05101, cluster subhaloes (at the final snapshot)
'''
n_massive = 500
include_id = False
fixed_position = True
Ncut = 120
work_dir = '/home/hoseung/Work/data/AGN2/'
nout_ini = 131
nout_fi = 132
nouts = range(nout_fi, nout_ini, -1)
Nnouts = len(nouts)
# +
try:
    f = open(work_dir + 'satellite_halo_trees.txt', 'w')
except IOError:
    print("Could not open " + work_dir + "satellite_halo_trees.txt for writing.")
import tree.treeutils as tru
import tree.rshalo as rsh
# get_main_tree
# gal_list
import pickle
dir_halo = work_dir + "rhalo/rockstar_halos/"
f_halo = work_dir + "rhalo/tree.pickle"
# Open the file and call pickle.load.
with open(f_halo, "rb") as f_halo:
data = pickle.load(f_halo)
# +
# Gals = satellite halos inside the zoomed-in cluster above a mass cut.
all_final_halo = tru.final_halo_list(data)
# tru.final_halo_list returns the list of galaxies at the final snapshot.
i_center = tru.get_center(data)
# No number of particle information. Devise a new way.
fn_halo = dir_halo + 'halos_py/halos_82.py'
## There is no halos_82.py.
# You have to convert ascii files into .py or .pickle before.
#
#
#
tree_final = data[i_final]
#tree_final = rsh.read_halo_all(fn_halo)
##
i_satellites = extract_halos_within(data, i_center, scale=1.0)[0]
print("Total {0} halos \n{1} halos are selected".format(
len(i_satellites),sum(i_satellites)))
#%%
cnt=150
ngal = len(gals)
#cnt = range(ngal)
# loop over individual galaxies
for thisgal in gals[cnt:cnt+1]:
print(thisgal)
if (cnt % 10 == 0): print(cnt)
tree = tru.get_main_tree(data, thisgal)
    ind_tree = np.zeros(len(tree), dtype=int)  # np.int is deprecated; use the builtin int
    for i in range(sum(x > 0 for x in tree)):
        ind_tree[i] = np.where(data['id'] == tree[i])[0]  # Why first element?
tree = data[ind_tree] # td = tree
x = data['p'][0][0][i_satellites]
y = data['p'][0][1][i_satellites]
z = data['p'][0][2][i_satellites]
r = data['rvir'][0][i_satellites]
    if include_id is True:
        dd = np.column_stack([x, y, z, r, data['hnu'][0][i_satellites]])
        for i in range(dd.shape[0]):
            f.write("{0} {1} {2} {3} {4} \n".format(
                dd[i][0], dd[i][1], dd[i][2], dd[i][3], int(dd[i][4])))
else:
dd = np.column_stack([x, y, z, r])
for i in range(dd.shape[0]):
f.write("{0} {1} {2} {3} \n".format(
dd[i][0],dd[i][1],dd[i][2],dd[i][3]))
f.close()
# -
| scripts/Rsmith/Halo_list_RS.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.5 64-bit (''venv'': venv)'
# name: python37564bitvenvvenv16524ab6d9c04010a849c4faf6663120
# ---
# # Analyse tumour growth under Erlotinib treatment
# ## 1. Import data
# +
import os
import pandas as pd
# Import data
path = os.getcwd()
data_raw = pd.read_csv(path + '/data/PKPD_ErloAndGefi_LXF.csv', sep=';')
# Filter relevant information (copy to avoid SettingWithCopyWarning on the numeric conversions below)
data = data_raw[['#ID', 'TIME', 'Y', 'DOSE GROUP', 'DOSE', 'BW']].copy()
# Convert TIME and Y to numeric values (currently strings)
data['TIME'] = pd.to_numeric(data['TIME'], errors='coerce')
data['Y'] = pd.to_numeric(data['Y'], errors='coerce')
# Filter for Erlotinib data
data = data[data_raw['DRUG'] == 1]
# Sort TIME values (for plotting convenience)
data.sort_values(by='TIME', inplace=True)
# Show data
data
# -
# ### 1.1 Sort data into dose groups
# +
# Get dose groups
groups = data['DOSE GROUP'].unique()
# Sort into groups
data_one = data[data['DOSE GROUP'] == groups[0]]
data_two = data[data['DOSE GROUP'] == groups[1]]
data_three = data[data['DOSE GROUP'] == groups[2]]
# -
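The three manual masks above can equivalently come from `DataFrame.groupby`; a toy sketch with made-up group labels and values:

```python
import pandas as pd

df = pd.DataFrame({'DOSE GROUP': ['A', 'B', 'A'], 'Y': [1.0, 2.0, 3.0]})
by_group = {g: sub for g, sub in df.groupby('DOSE GROUP')}
print(sorted(by_group))          # ['A', 'B']
print(list(by_group['A']['Y']))  # [1.0, 3.0]
```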
# ## 2. Visualise observed tumour growth
# ### 2.1 Under Erlotinib treatment high dose
#
# Mice were administered with a daily oral dose of 100 mg/kg/day from day 0 to 5 and 11 to 13
# +
import matplotlib.pyplot as plt
# Display dose group
print(groups[0])
# Get unique animal IDs
ids = data_one['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(8, 6))
for i in ids:
# Mask for individual
mask = data_one['#ID'] == i
time = data_one[mask]['TIME']
volume = data_one[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.semilogy(time, volume, color='black')
plt.scatter(time, volume, color='gray', edgecolor='black')
# Set y lim
plt.ylim([1E0, 1E4])
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel(r'Tumour volume in [mm$^3$]')
plt.show()
# -
# ### 2.2 Under Erlotinib treatment medium dose
#
# From day 0 to 13 a daily dose of 25 mg/kg/day was orally administered.
# +
import matplotlib.pyplot as plt
# Display dose group
print(groups[1])
# Get unique animal IDs
ids = data_two['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(8, 6))
for i in ids:
# Mask for individual
mask = data_two['#ID'] == i
time = data_two[mask]['TIME']
volume = data_two[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.semilogy(time, volume, color='black')
plt.scatter(time, volume, color='gray', edgecolor='black')
# Set y lim
plt.ylim([1E0, 1E4])
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel(r'Tumour volume in [mm$^3$]')
plt.show()
# -
# ### 2.3 Under Erlotinib treatment low dose
#
# From day 0 to 13 a daily dose of 6.25 mg/kg/day was orally administered.
# +
import matplotlib.pyplot as plt
# Display dose group
print(groups[2])
# Get unique animal IDs
ids = data_three['#ID'].unique()
# Plot measurements
fig = plt.figure(figsize=(8, 6))
for i in ids:
# Mask for individual
mask = data_three['#ID'] == i
time = data_three[mask]['TIME']
volume = data_three[mask]['Y']
# Filter out Nan values
mask = volume.notnull()
time = time[mask]
volume = volume[mask]
# Create semi log plot
plt.semilogy(time, volume, color='black')
plt.scatter(time, volume, color='gray', edgecolor='black')
# Set y lim
plt.ylim([1E0, 1E4])
# Label axes
plt.xlabel('Time in [day]')
plt.ylabel(r'Tumour volume in [mm$^3$]')
plt.show()
# -
| erlotinib_growth_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Progressive Growing GAN (PGGAN)
# In this notebook, we will demonstrate the functionality of ``Scheduler`` which enables advanced training schemes such as the progressive training method described in [Karras et al.](https://arxiv.org/pdf/1710.10196.pdf).
# We will train a PGGAN to produce synthetic frontal chest X-ray images where both the generator and the discriminator grow from $4\times4$ to $128\times128$.
#
# ### Progressive Growing Strategy
# [Karras et al.](https://arxiv.org/pdf/1710.10196.pdf) propose a training scheme in which both the generator and the discriminator progressively grow from a low resolution to a high resolution.
# Both networks begin their training based on $4\times4$ images as illustrated below.
# 
# Then, both networks progress from $4\times4$ to $8\times8$ by adding an additional block that contains a couple of convolutional layers.
# 
# Both the generator and the discriminator progressively grow until reaching the desired resolution of $1024\times 1024$.
# 
# *Image Credit: [Presentation slide](https://drive.google.com/open?id=1jYlrX4DgTs2VAfRcyl3pcNI4ONkBg3-g)*
#
# ### Smooth Transition between Resolutions
# However, when growing the networks, the new blocks must be slowly faded into the networks in order to smoothly transition between different resolutions.
# For example, when growing the generator from $16\times16$ to $32\times32$, the newly added block of $32\times32$ is slowly faded into the already well trained $16\times16$ network by linearly increasing a fade-factor $\alpha$ from $0$ to $1$.
# Once the network is fully transitioned to $32\times32$, the network is trained a bit further to stabilize before growing to $64\times64$.
# 
# *Image Credit: [PGGAN Paper](https://arxiv.org/pdf/1710.10196.pdf)*
#
# With this progressive training strategy, PGGAN has achieved the state-of-the-art results in producing high fidelity synthetic images.
#
# ## Problem Setting
# In this PGGAN example, we decided the following:
# * 560K images will be used when transitioning from a lower resolution to a higher resolution.
# * 560K images will be used when stabilizing the fully transitioned network.
# * Initial resolution will be $4\times4$.
# * Final resolution will be $128\times128$.
#
# The number of images for both transitioning and stabilizing is equivalent to 5 epochs; the networks would smoothly grow over 5 epochs and would stabilize for 5 epochs. This yields the following schedule for growing both networks:
#
# * From $1^{st}$ epoch to $5^{th}$ epoch: train $4\times4$ resolution
# * From $6^{th}$ epoch to $10^{th}$ epoch: transition from $4\times4$ to $8\times8$
# * From $11^{th}$ epoch to $15^{th}$ epoch: stabilize $8\times8$
# * From $16^{th}$ epoch to $20^{th}$ epoch: transition from $8\times8$ to $16\times16$
# * From $21^{st}$ epoch to $25^{th}$ epoch: stabilize $16\times16$
# * From $26^{th}$ epoch to $30^{th}$ epoch: transition from $16\times16$ to $32\times32$
# * From $31^{st}$ epoch to $35^{th}$ epoch: stabilize $32\times32$
#
# $\cdots$
#
# * From $51^{st}$ epoch to $55^{th}$ epoch: stabilize $128\times128$
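The smooth transition described above is just a linear blend between the old-resolution path and the new block's output, controlled by the fade factor $\alpha$. The notebook defines exactly such a `fade_in` further down; sketched standalone:

```python
def fade_in(old_path, new_path, alpha):
    """alpha = 0 -> old-resolution path only; alpha = 1 -> new block only."""
    return (1.0 - alpha) * old_path + alpha * new_path

print(fade_in(10.0, 20.0, 0.0))  # 10.0
print(fade_in(10.0, 20.0, 0.5))  # 15.0
print(fade_in(10.0, 20.0, 1.0))  # 20.0
```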
# +
import tempfile
import numpy as np
import torch
import matplotlib.pyplot as plt
import fastestimator as fe
from fastestimator.schedule import EpochScheduler
from fastestimator.util import get_num_devices
# + tags=["parameters"]
target_size=128
epochs=55
save_dir=tempfile.mkdtemp()
train_steps_per_epoch=None
data_dir=None
# -
# ## Configure growing parameters
num_grow = np.log2(target_size) - 2
assert num_grow >= 1 and num_grow % 1 == 0, "target size must be a power of 2 and at least 8"
num_phases = int(2 * num_grow + 1)
assert epochs % num_phases == 0, "epoch must be multiple of {} for size {}".format(num_phases, target_size)
num_grow, phase_length = int(num_grow), int(epochs / num_phases)
event_epoch = [1, 1 + phase_length] + [phase_length * (2 * i + 1) + 1 for i in range(1, num_grow)]
event_size = [4] + [2**(i + 3) for i in range(num_grow)]
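For the defaults in this notebook (`target_size=128`, `epochs=55`), the arithmetic above reproduces the schedule listed earlier; a stdlib-only recomputation:

```python
import math

target_size, epochs = 128, 55
num_grow = int(math.log2(target_size)) - 2    # 5 growth steps: 8, 16, 32, 64, 128
phase_length = epochs // (2 * num_grow + 1)   # 11 phases of 5 epochs each
event_epoch = [1, 1 + phase_length] + [phase_length * (2 * i + 1) + 1 for i in range(1, num_grow)]
event_size = [4] + [2**(i + 3) for i in range(num_grow)]
print(event_epoch)  # [1, 6, 16, 26, 36, 46]
print(event_size)   # [4, 8, 16, 32, 64, 128]
```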
# ## Defining Input `Pipeline`
#
# First, we need to download the chest frontal X-ray dataset from the National Institute of Health (NIH); the dataset has over 112,000 images with resolution $1024\times1024$. We use the pre-built ``fastestimator.dataset.data.nih_chestxray`` API to download these images. A detailed description of the dataset is available [here](https://www.nih.gov/news-events/news-releases/nih-clinical-center-provides-one-largest-publicly-available-chest-x-ray-datasets-scientific-community).
#
# ### Note: Please make sure to have a stable internet connection when downloading the dataset for the first time since the size of the dataset is over 40GB.
# +
from fastestimator.dataset.data import nih_chestxray
dataset = nih_chestxray.load_data(root_dir=data_dir)
# -
# ### Given the images, we need the following preprocessing operations to execute dynamically for every batch:
# 1. Read the image.
# 2. Resize the image to the correct size based on the current epoch.
# 3. Create a lower resolution of the image, which is accomplished by downsampling by a factor of 2 then upsampling by a factor of 2.
# 4. Rescale the pixels of both the original image and lower resolution image to the range [-1, 1]
# 5. Convert both the original image and lower resolution image from channel last to channel first
# 6. Create the latent vector used by the generator
# +
from fastestimator.op.numpyop import Batch, LambdaOp
from fastestimator.op.numpyop.multivariate import Resize
from fastestimator.op.numpyop.univariate import ChannelTranspose, Normalize, ReadImage
resize_map = {
epoch: Resize(image_in="x", image_out="x", height=size, width=size)
for (epoch, size) in zip(event_epoch, event_size)
}
resize_low_res_map1 = {
epoch: Resize(image_in="x", image_out="x_low_res", height=size // 2, width=size // 2)
for (epoch, size) in zip(event_epoch, event_size)
}
resize_low_res_map2 = {
epoch: Resize(image_in="x_low_res", image_out="x_low_res", height=size, width=size)
for (epoch, size) in zip(event_epoch, event_size)
}
batch_size_map = {
epoch: max(512 // size, 4) * get_num_devices() if size <= 512 else 2 * get_num_devices()
for (epoch, size) in zip(event_epoch, event_size)
}
batch_scheduler = EpochScheduler(epoch_dict=batch_size_map)
pipeline = fe.Pipeline(
batch_size=batch_scheduler,
train_data=dataset,
ops=[
ReadImage(inputs="x", outputs="x", color_flag="gray"),
EpochScheduler(epoch_dict=resize_map),
EpochScheduler(epoch_dict=resize_low_res_map1),
EpochScheduler(epoch_dict=resize_low_res_map2),
Normalize(inputs=["x", "x_low_res"], outputs=["x", "x_low_res"], mean=1.0, std=1.0, max_pixel_value=127.5),
ChannelTranspose(inputs=["x", "x_low_res"], outputs=["x", "x_low_res"]),
LambdaOp(fn=lambda: np.random.normal(size=[512]).astype('float32'), outputs="z"),
Batch(drop_last=True)
])
# -
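The `Normalize` op above, with `mean=1.0`, `std=1.0`, and `max_pixel_value=127.5`, computes `(x - mean * max_pixel_value) / (std * max_pixel_value)` (the albumentations-style formula this op wraps), i.e. it maps uint8 pixels into [-1, 1]:

```python
def normalize(x, mean=1.0, std=1.0, max_pixel_value=127.5):
    # same arithmetic as the Normalize op configured above
    return (x - mean * max_pixel_value) / (std * max_pixel_value)

print([normalize(v) for v in (0.0, 127.5, 255.0)])  # [-1.0, 0.0, 1.0]
```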
# Let's visualize how our `Pipeline` changes image resolution at the different epochs we specified using `Schedulers`. FastEstimator has a ``get_results`` method to aid in this. In order to correctly visualize the output of the `Pipeline`, we need to provide epoch numbers to the `get_results` method:
plt.figure(figsize=(50,50))
for i, epoch in enumerate(event_epoch):
batch_data = pipeline.get_results(epoch=epoch)
img = np.squeeze(batch_data["x"][0] + 0.5)
    plt.subplot(1, len(event_epoch), i+1)
plt.imshow(img, cmap='gray')
# ## Defining `Network`
# ### Defining the generator and the discriminator
# To express the progressive growing of networks, we return a list of models that progressively grow from $4 \times 4$ to $1024 \times 1024$ such that $i^{th}$ model in the list is a superset of the previous models. We define a ``fade_in_alpha`` to control the smoothness of growth. ``fe.build`` then bundles each model, optimizer, and model name together for use.
# +
from torch.optim import Adam
def _nf(stage, fmap_base=8192, fmap_decay=1.0, fmap_max=512):
return min(int(fmap_base / (2.0**(stage * fmap_decay))), fmap_max)
class EqualizedLRDense(torch.nn.Linear):
def __init__(self, in_features, out_features, gain=np.sqrt(2)):
super().__init__(in_features, out_features, bias=False)
torch.nn.init.normal_(self.weight.data, mean=0.0, std=1.0)
self.wscale = np.float32(gain / np.sqrt(in_features))
def forward(self, x):
return super().forward(x) * self.wscale
class ApplyBias(torch.nn.Module):
def __init__(self, in_features):
super().__init__()
self.in_features = in_features
self.bias = torch.nn.Parameter(torch.Tensor(in_features))
torch.nn.init.constant_(self.bias.data, val=0.0)
def forward(self, x):
if len(x.shape) == 4:
x = x + self.bias.view(1, -1, 1, 1).expand_as(x)
else:
x = x + self.bias
return x
class EqualizedLRConv2D(torch.nn.Conv2d):
def __init__(self, in_channels, out_channels, kernel_size=3, padding=1, padding_mode='zeros', gain=np.sqrt(2)):
super().__init__(in_channels, out_channels, kernel_size, padding=padding, padding_mode=padding_mode, bias=False)
torch.nn.init.normal_(self.weight.data, mean=0.0, std=1.0)
fan_in = np.float32(np.prod(self.weight.data.shape[1:]))
self.wscale = np.float32(gain / np.sqrt(fan_in))
def forward(self, x):
return super().forward(x) * self.wscale
def pixel_normalization(x, eps=1e-8):
    return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + eps)
def mini_batch_std(x, group_size=4, eps=1e-8):
b, c, h, w = x.shape
group_size = min(group_size, b)
y = x.reshape((group_size, -1, c, h, w)) # [G, M, C, H, W]
    y = y - torch.mean(y, dim=0, keepdim=True)  # [G, M, C, H, W]; avoid an in-place edit of a view of x
    y = torch.mean(y**2, dim=0)           # [M, C, H, W]
y = torch.sqrt(y + eps) # [M, C, H, W]
y = torch.mean(y, dim=(1, 2, 3), keepdim=True) # [M, 1, 1, 1]
y = y.repeat(group_size, 1, h, w) # [B, 1, H, W]
return torch.cat((x, y), 1)
def fade_in(x, y, alpha):
return (1.0 - alpha) * x + alpha * y
class ToRGB(torch.nn.Module):
def __init__(self, in_channels, num_channels=3):
super().__init__()
self.elr_conv2d = EqualizedLRConv2D(in_channels, num_channels, kernel_size=1, padding=0, gain=1.0)
self.bias = ApplyBias(in_features=num_channels)
def forward(self, x):
x = self.elr_conv2d(x)
x = self.bias(x)
return x
class FromRGB(torch.nn.Module):
def __init__(self, res, num_channels=3):
super().__init__()
self.elr_conv2d = EqualizedLRConv2D(num_channels, _nf(res - 1), kernel_size=1, padding=0)
self.bias = ApplyBias(in_features=_nf(res - 1))
def forward(self, x):
x = self.elr_conv2d(x)
x = self.bias(x)
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2)
return x
class BlockG1D(torch.nn.Module):
def __init__(self, res=2, latent_dim=512):
super().__init__()
self.elr_dense = EqualizedLRDense(in_features=latent_dim, out_features=_nf(res - 1) * 16, gain=np.sqrt(2) / 4)
self.bias1 = ApplyBias(in_features=_nf(res - 1))
self.elr_conv2d = EqualizedLRConv2D(in_channels=_nf(res - 1), out_channels=_nf(res - 1))
self.bias2 = ApplyBias(in_features=_nf(res - 1))
self.res = res
def forward(self, x):
# x: [batch, 512]
x = pixel_normalization(x) # [batch, 512]
x = self.elr_dense(x) # [batch, _nf(res - 1) * 16]
x = x.view(-1, _nf(self.res - 1), 4, 4) # [batch, _nf(res - 1), 4, 4]
x = self.bias1(x) # [batch, _nf(res - 1), 4, 4]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, _nf(res - 1), 4, 4]
x = pixel_normalization(x) # [batch, _nf(res - 1), 4, 4]
x = self.elr_conv2d(x) # [batch, _nf(res - 1), 4, 4]
x = self.bias2(x) # [batch, _nf(res - 1), 4, 4]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, _nf(res - 1), 4, 4]
x = pixel_normalization(x)
return x
class BlockG2D(torch.nn.Module):
def __init__(self, res):
super().__init__()
self.elr_conv2d1 = EqualizedLRConv2D(in_channels=_nf(res - 2), out_channels=_nf(res - 1))
self.bias1 = ApplyBias(in_features=_nf(res - 1))
self.elr_conv2d2 = EqualizedLRConv2D(in_channels=_nf(res - 1), out_channels=_nf(res - 1))
self.bias2 = ApplyBias(in_features=_nf(res - 1))
self.upsample = torch.nn.Upsample(scale_factor=2)
def forward(self, x):
# x: [batch, _nf(res - 2), 2**(res - 1), 2**(res - 1)]
x = self.upsample(x)
x = self.elr_conv2d1(x) # [batch, _nf(res - 1), 2**res , 2**res)]
x = self.bias1(x) # [batch, _nf(res - 1), 2**res , 2**res)]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, _nf(res - 1), 2**res , 2**res)]
x = pixel_normalization(x) # [batch, _nf(res - 1), 2**res , 2**res)]
x = self.elr_conv2d2(x) # [batch, _nf(res - 1), 2**res , 2**res)]
x = self.bias2(x) # [batch, _nf(res - 1), 2**res , 2**res)]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, _nf(res - 1), 2**res , 2**res)]
x = pixel_normalization(x) # [batch, _nf(res - 1), 2**res , 2**res)]
return x
def _block_G(res, latent_dim=512, initial_resolution=2):
if res == initial_resolution:
model = BlockG1D(res=res, latent_dim=latent_dim)
else:
model = BlockG2D(res=res)
return model
class Gen(torch.nn.Module):
def __init__(self, g_blocks, rgb_blocks, fade_in_alpha):
super().__init__()
self.g_blocks = torch.nn.ModuleList(g_blocks)
self.rgb_blocks = torch.nn.ModuleList(rgb_blocks)
self.fade_in_alpha = fade_in_alpha
self.upsample = torch.nn.Upsample(scale_factor=2)
def forward(self, x):
for g in self.g_blocks[:-1]:
x = g(x)
previous_img = self.rgb_blocks[0](x)
previous_img = self.upsample(previous_img)
x = self.g_blocks[-1](x)
new_img = self.rgb_blocks[1](x)
return fade_in(previous_img, new_img, self.fade_in_alpha)
def build_G(fade_in_alpha, latent_dim=512, initial_resolution=2, target_resolution=10, num_channels=3):
g_blocks = [
_block_G(res, latent_dim, initial_resolution) for res in range(initial_resolution, target_resolution + 1)
]
rgb_blocks = [ToRGB(_nf(res - 1), num_channels) for res in range(initial_resolution, target_resolution + 1)]
generators = [torch.nn.Sequential(g_blocks[0], rgb_blocks[0])]
for idx in range(2, len(g_blocks) + 1):
generators.append(Gen(g_blocks[0:idx], rgb_blocks[idx - 2:idx], fade_in_alpha))
final_model_list = g_blocks + [rgb_blocks[-1]]
generators.append(torch.nn.Sequential(*final_model_list))
return generators
class BlockD1D(torch.nn.Module):
def __init__(self, res=2):
super().__init__()
self.elr_conv2d = EqualizedLRConv2D(in_channels=_nf(res - 1) + 1, out_channels=_nf(res - 1))
self.bias1 = ApplyBias(in_features=_nf(res - 1))
self.elr_dense1 = EqualizedLRDense(in_features=_nf(res - 1) * 16, out_features=_nf(res - 2))
self.bias2 = ApplyBias(in_features=_nf(res - 2))
self.elr_dense2 = EqualizedLRDense(in_features=_nf(res - 2), out_features=1, gain=1.0)
self.bias3 = ApplyBias(in_features=1)
self.res = res
def forward(self, x):
# x: [batch, 512, 4, 4]
x = mini_batch_std(x) # [batch, 513, 4, 4]
x = self.elr_conv2d(x) # [batch, 512, 4, 4]
x = self.bias1(x) # [batch, 512, 4, 4]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, 512, 4, 4]
x = x.view(-1, _nf(self.res - 1) * 16) # [batch, 512*4*4]
x = self.elr_dense1(x) # [batch, 512]
x = self.bias2(x) # [batch, 512]
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2) # [batch, 512]
x = self.elr_dense2(x) # [batch, 1]
x = self.bias3(x) # [batch, 1]
return x
class BlockD2D(torch.nn.Module):
def __init__(self, res):
super().__init__()
self.elr_conv2d1 = EqualizedLRConv2D(in_channels=_nf(res - 1), out_channels=_nf(res - 1))
self.bias1 = ApplyBias(in_features=_nf(res - 1))
self.elr_conv2d2 = EqualizedLRConv2D(in_channels=_nf(res - 1), out_channels=_nf(res - 2))
self.bias2 = ApplyBias(in_features=_nf(res - 2))
self.pool = torch.nn.AvgPool2d(kernel_size=2)
def forward(self, x):
x = self.elr_conv2d1(x)
x = self.bias1(x)
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2)
x = self.elr_conv2d2(x)
x = self.bias2(x)
x = torch.nn.functional.leaky_relu(x, negative_slope=0.2)
x = self.pool(x)
return x
def _block_D(res, initial_resolution=2):
if res == initial_resolution:
model = BlockD1D(res)
else:
model = BlockD2D(res)
return model
class Disc(torch.nn.Module):
def __init__(self, d_blocks, rgb_blocks, fade_in_alpha):
super().__init__()
self.d_blocks = torch.nn.ModuleList(d_blocks)
self.rgb_blocks = torch.nn.ModuleList(rgb_blocks)
self.fade_in_alpha = fade_in_alpha
self.pool = torch.nn.AvgPool2d(kernel_size=2)
def forward(self, x):
new_x = self.rgb_blocks[1](x)
new_x = self.d_blocks[-1](new_x)
downscale_x = self.pool(x)
downscale_x = self.rgb_blocks[0](downscale_x)
x = fade_in(downscale_x, new_x, self.fade_in_alpha)
for d in self.d_blocks[:-1][::-1]:
x = d(x)
return x
def build_D(fade_in_alpha, initial_resolution=2, target_resolution=10, num_channels=3):
d_blocks = [_block_D(res, initial_resolution) for res in range(initial_resolution, target_resolution + 1)]
rgb_blocks = [FromRGB(res, num_channels) for res in range(initial_resolution, target_resolution + 1)]
discriminators = [torch.nn.Sequential(rgb_blocks[0], d_blocks[0])]
for idx in range(2, len(d_blocks) + 1):
discriminators.append(Disc(d_blocks[0:idx], rgb_blocks[idx - 2:idx], fade_in_alpha))
return discriminators
fade_in_alpha = torch.tensor(1.0)
d_models = fe.build(
model_fn=lambda: build_D(fade_in_alpha, target_resolution=int(np.log2(target_size)), num_channels=1),
optimizer_fn=[lambda x: Adam(x, lr=0.001, betas=(0.0, 0.99), eps=1e-8)] * len(event_size),
model_name=["d_{}".format(size) for size in event_size])
g_models = fe.build(
model_fn=lambda: build_G(fade_in_alpha, target_resolution=int(np.log2(target_size)), num_channels=1),
optimizer_fn=[lambda x: Adam(x, lr=0.001, betas=(0.0, 0.99), eps=1e-8)] * len(event_size) + [None],
model_name=["g_{}".format(size) for size in event_size] + ["G"])
# -
# ## The Following operations will happen in our `Network`:
# 1. random vector -> generator -> fake images
# 2. fake images -> discriminator -> fake scores
# 3. real image, low resolution real image -> blender -> blended real images
# 4. blended real images -> discriminator -> real scores
# 5. fake images, real images -> interpolator -> interpolated images
# 6. interpolated images -> discriminator -> interpolated scores
# 7. interpolated scores, interpolated image -> get_gradient -> gradient penalty
# 8. fake_score -> GLoss -> generator loss
# 9. real score, fake score, gradient penalty -> DLoss -> discriminator loss
# 10. update generator
# 11. update discriminator
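# Steps 5-7 above implement the WGAN-GP gradient penalty. As a minimal standalone sketch of the same computation in plain PyTorch (the toy `disc` below is a stand-in for the real discriminator, not the model built in this notebook):

```python
import torch

def gradient_penalty(disc, real, fake):
    # Step 5: interpolate between real and fake images with a random per-sample coefficient
    coeff = torch.rand(real.shape[0], 1, 1, 1)
    x_interp = (real + (fake - real) * coeff).requires_grad_(True)
    # Step 6: score the interpolated images
    interp_score = disc(x_interp)
    # Step 7: penalize deviations of the gradient norm from 1
    grad, = torch.autograd.grad(interp_score.sum(), x_interp, create_graph=True)
    grad_l2 = torch.sqrt(torch.sum(grad**2, dim=(1, 2, 3)))
    return torch.mean((grad_l2 - 1.0)**2)

# Toy "discriminator": mean over all pixels of each sample
disc = lambda x: x.mean(dim=(1, 2, 3))
real = torch.rand(4, 1, 8, 8)
fake = torch.rand(4, 1, 8, 8)
gp = gradient_penalty(disc, real, fake)
```

# For this linear toy discriminator every pixel gradient is 1/64, so the per-sample gradient norm is 1/8 regardless of the random interpolation coefficients.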
# +
from fastestimator.op.tensorop import TensorOp
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.backend import feed_forward, get_gradient
class ImageBlender(TensorOp):
def __init__(self, alpha, inputs=None, outputs=None, mode=None):
super().__init__(inputs=inputs, outputs=outputs, mode=mode)
self.alpha = alpha
def forward(self, data, state):
image, image_lowres = data
new_img = self.alpha * image + (1 - self.alpha) * image_lowres
return new_img
class Interpolate(TensorOp):
def forward(self, data, state):
fake, real = data
batch_size = real.shape[0]
coeff = torch.rand(batch_size, 1, 1, 1).to(fake.device)
return real + (fake - real) * coeff
class GradientPenalty(TensorOp):
def __init__(self, inputs, outputs=None, mode=None):
super().__init__(inputs=inputs, outputs=outputs, mode=mode)
def forward(self, data, state):
x_interp, interp_score = data
gradient_x_interp = get_gradient(torch.sum(interp_score), x_interp, higher_order=True)
grad_l2 = torch.sqrt(torch.sum(gradient_x_interp**2, dim=(1, 2, 3)))
gp = (grad_l2 - 1.0)**2
return gp
class GLoss(TensorOp):
def forward(self, data, state):
return -torch.mean(data)
class DLoss(TensorOp):
"""Compute discriminator loss."""
def __init__(self, inputs, outputs=None, mode=None, wgan_lambda=10, wgan_epsilon=0.001):
super().__init__(inputs=inputs, outputs=outputs, mode=mode)
self.wgan_lambda = wgan_lambda
self.wgan_epsilon = wgan_epsilon
def forward(self, data, state):
real_score, fake_score, gp = data
loss = fake_score - real_score + self.wgan_lambda * gp + real_score**2 * self.wgan_epsilon
return torch.mean(loss)
fake_img_map = {
epoch: ModelOp(inputs="z", outputs="x_fake", model=model)
for (epoch, model) in zip(event_epoch, g_models[:-1])
}
fake_score_map = {
epoch: ModelOp(inputs="x_fake", outputs="fake_score", model=model)
for (epoch, model) in zip(event_epoch, d_models)
}
real_score_map = {
epoch: ModelOp(inputs="x_blend", outputs="real_score", model=model)
for (epoch, model) in zip(event_epoch, d_models)
}
interp_score_map = {
epoch: ModelOp(inputs="x_interp", outputs="interp_score", model=model)
for (epoch, model) in zip(event_epoch, d_models)
}
g_update_map = {
epoch: UpdateOp(loss_name="gloss", model=model)
for (epoch, model) in zip(event_epoch, g_models[:-1])
}
d_update_map = {epoch: UpdateOp(loss_name="dloss", model=model) for (epoch, model) in zip(event_epoch, d_models)}
network = fe.Network(ops=[
EpochScheduler(fake_img_map),
EpochScheduler(fake_score_map),
ImageBlender(alpha=fade_in_alpha, inputs=("x", "x_low_res"), outputs="x_blend"),
EpochScheduler(real_score_map),
Interpolate(inputs=("x_fake", "x"), outputs="x_interp"),
EpochScheduler(interp_score_map),
GradientPenalty(inputs=("x_interp", "interp_score"), outputs="gp"),
GLoss(inputs="fake_score", outputs="gloss"),
DLoss(inputs=("real_score", "fake_score", "gp"), outputs="dloss"),
EpochScheduler(g_update_map),
EpochScheduler(d_update_map)
])
# -
# ## Defining Estimator
#
# Given that ``Pipeline`` and ``Network`` are properly defined, we need to define an `AlphaController` `Trace` to help both the generator and the discriminator smoothly grow by controlling the value of the `fade_in_alpha` tensor created previously. We will also use `ModelSaver` to save our model during every training phase.
# +
from fastestimator.trace import Trace
from fastestimator.trace.io import ModelSaver
class AlphaController(Trace):
def __init__(self, alpha, fade_start_epochs, duration, batch_scheduler, num_examples):
super().__init__(inputs=None, outputs=None, mode="train")
self.alpha = alpha
self.fade_start_epochs = fade_start_epochs
self.duration = duration
self.batch_scheduler = batch_scheduler
self.num_examples = num_examples
self.change_alpha = False
self.nimg_total = self.duration * self.num_examples
self._idx = 0
self.nimg_so_far = 0
self.current_batch_size = None
def on_epoch_begin(self, state):
        # check whether the current epoch is in smooth transition of resolutions
fade_epoch = self.fade_start_epochs[self._idx]
if self.system.epoch_idx == fade_epoch:
self.change_alpha = True
self.nimg_so_far = 0
self.current_batch_size = self.batch_scheduler.get_current_value(self.system.epoch_idx)
print("FastEstimator-Alpha: Started fading in for size {}".format(2**(self._idx + 3)))
elif self.system.epoch_idx == fade_epoch + self.duration:
print("FastEstimator-Alpha: Finished fading in for size {}".format(2**(self._idx + 3)))
self.change_alpha = False
if self._idx + 1 < len(self.fade_start_epochs):
self._idx += 1
self.alpha.data = torch.tensor(1.0)
def on_batch_begin(self, state):
# if in resolution transition, smoothly change the alpha from 0 to 1
if self.change_alpha:
self.nimg_so_far += self.current_batch_size
self.alpha.data = torch.tensor(self.nimg_so_far / self.nimg_total, dtype=torch.float32)
traces = [
AlphaController(alpha=fade_in_alpha,
fade_start_epochs=event_epoch[1:],
duration=phase_length,
batch_scheduler=batch_scheduler,
num_examples=len(dataset)),
ModelSaver(model=g_models[-1], save_dir=save_dir, frequency=phase_length)]
estimator = fe.Estimator(pipeline=pipeline,
network=network,
epochs=epochs,
traces=traces,
train_steps_per_epoch=train_steps_per_epoch)
# -
# ## Start Training
#
# ### Note: for 128x128 resolution, training takes about 24 hours on a single V100 GPU; for 1024x1024 resolution, it takes roughly 2.5 days on 4 V100 GPUs.
estimator.fit()
| apphub/image_generation/pggan/pggan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # MidTerm Assignment: notebook 1: Revision
# # <NAME>
#
# ## Net ID: mwm356
# #### Total : 10 pts
# # Question 1.1. Statistical learning: Maximum likelihood (Total 5pts)
#
# This exercise contains a pen-and-paper part and a coding part. You should submit the pen-and-paper part either in LaTeX or as a picture of your written solution added to the Assignment folder.
#
# We consider the dataset given below. This dataset was generated from a Gaussian distribution with a given mean $\mathbf{\mu} = (\mu_1, \mu_2)$ and covariance matrix $\mathbf{\Sigma} = \left[\begin{array}{cc}
# \sigma_1^2 & 0 \\
# 0 & \sigma_2^2
# \end{array}\right]$. We would like to recover the mean and variance from the data. In order to do this, use the following steps:
#
# 1. Write the general expression for the probability (multivariate (2D) Gaussian with diagonal covariance matrix) to observe a single sample
# 2. We will assume that the samples are independent and identically distributed so that the probability of observing the whole dataset is the product of the probabilities of observing each one of the samples $\left\{\mathbf{x}^{(i)} = (x_1^{(i)}, x_2^{(i)})\right\}_{i=1}^N$. Write down this probability
# 3. Take the negative logarithm of this probability
# 4. Once you have taken the logarithm, find the expression for $\mu_1, \mu_2$, $\sigma_1$ and $\sigma_2$ by maximizing the probability.
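# Before the derivation, a quick numerical sanity check (a sketch on synthetic data, not the assignment dataset): the closed-form ML estimates that steps 1-4 produce are the per-dimension sample mean and (biased) sample standard deviation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true = np.array([1.0, 2.0])
sigma_true = np.array([0.5, 1.5])
# Synthetic 2-D Gaussian data with diagonal covariance
X = rng.normal(mu_true, sigma_true, size=(100_000, 2))

# ML estimates: mu_d = (1/N) sum_i x_d^(i),  sigma_d^2 = (1/N) sum_i (x_d^(i) - mu_d)^2
mu_hat = X.mean(axis=0)
sigma_hat = np.sqrt(((X - mu_hat) ** 2).mean(axis=0))
```

# With N = 100,000 samples, both estimates land within a few hundredths of the true parameters.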
# ## I.1.1 Solution: Mathematical Base
# #### Solution Guide: <p><font color='red'><b>Boxed equations are the answers to the questions asked above; the rest is the derivation.</b></font></p>
# #### Univariate Gaussian Distribution
#
# 1. Recall the 1-dimensional Gaussian with mean parameter $ \mu $
#
#
# $$ p(x|\mu) = \frac{1}{\sqrt{2\pi}} exp \left[-\frac{1}{2}(x - \mu)^2\right] $$
#
#
#
# 2. This can also have variance parameter $\sigma^2$ that widens or narrows the Gaussian distribution
#
#
# $$ p(x|\mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} exp \left[-\frac{1}{2\sigma^2}(x - \mu)^2\right] $$
# #### Multivariate Gaussian Distribution
# 3. This Gaussian can be extended to a __Multivariate Gaussian__ with covariance matrix $\Sigma$
#
#
# $$ X = { ( \overrightarrow{\text{x}}_1,\overrightarrow{\text{x}}_2, \dots, \overrightarrow{\text{x}}_{D-1},\overrightarrow{\text{x}}_D)} $$
#
# $$ Moment-Parameterization: \mu = {\mathbb{E}}{(X)} = (\mu_1, \mu_2,\dots,\mu_{D-1}, \mu_D )$$
#
# $$ \sigma^2 = {\mathbb{E}} \left[X - {\mathbb{E}}(X) \right]^2 = {\mathbb{E}}\left[X - \mu \right]^2 $$
# $$ \Sigma = Cov(X) = {\mathbb{E}} \left[\overrightarrow{\text{x}} - \overrightarrow{\mu} \right] \left[\overrightarrow{\text{x}} - \overrightarrow{\mu} \right]^T $$
# $$ Mahalanobis-distance: \triangle^2 = \left[\overrightarrow{\text{x}} - \overrightarrow{\mu} \right]^T \Sigma^{-1} \left[\overrightarrow{\text{x}} - \overrightarrow{\mu}\right] $$
#
#
# By Using: $ X, \mu, \sigma^2, \Sigma $ i.e. equations 3 to 6, we get:
#
# $$ \boxed {p(\overrightarrow{\text{x}}|\overrightarrow{\mu}, \Sigma ) = \frac{1}{{2\pi}^{\frac{D}{2}}\sqrt
# {|\Sigma|}} exp \left[-\frac{1}{2}(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu}) \right]} $$
#
# where
#
# $$
# \overrightarrow{\text{x}} \in \mathbb{R}^{D} , \overrightarrow{\mu} \in \mathbb{R}^{D} , \Sigma \in \mathbb{R}^{{D}\times{D}}
# $$
#
# #### Diagonal Covariance Probability
# 4. Diagonal Covariance: the dimensions of x are independent, so the density is a product of 1-D Gaussians
#
# $$ \boxed {p(\overrightarrow{\text{x}}|\overrightarrow{\mu}, \Sigma ) = \prod_{d=1}^D \frac{1}{\sqrt{2\pi}\overrightarrow{\sigma}(d)} exp \left[ - \frac{(\overrightarrow{\text{x}}(d) - \overrightarrow{\mu} (d))^2}{2\overrightarrow{\sigma}(d)^2} \right]} $$
#
# where
#
# $$ \Sigma =
# \begin{bmatrix}
# \overrightarrow{\sigma}(1)^2 & 0 & 0 & 0\\
# 0 & \overrightarrow{\sigma}(2)^2 & 0 & 0\\
# 0 & 0& \overrightarrow{\sigma}(3)^2 & 0\\
# 0 & 0& \ddots &0 \\
# 0 & 0& 0 & \overrightarrow{\sigma}(D)^2 \\
# \end{bmatrix}$$
#
#
#
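# The factorization in the box above can be verified numerically: with a diagonal covariance, the joint density equals the product of the per-dimension 1-D Gaussians. A short check using `scipy.stats` (arbitrary example values):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

x = np.array([0.3, -1.2])
mu = np.array([0.0, 1.0])
sigma = np.array([0.8, 1.5])

# Joint multivariate density with diagonal covariance
joint = multivariate_normal.pdf(x, mean=mu, cov=np.diag(sigma**2))
# Product of independent 1-D Gaussians, one per dimension
product = np.prod(norm.pdf(x, loc=mu, scale=sigma))
```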
# #### Maximum Likelihood
#
# 5. To recover the mean and variance, we can use standard Maximum Likelihood, where the probability of the given data is maximized
#
#
# $$ X = { ( \overrightarrow{\text{x}}_1,\overrightarrow{\text{x}}_2, \dots, \overrightarrow{\text{x}}_{N-1},\overrightarrow{\text{x}}_N)} $$
#
# Let $\theta$ represent the parameters $(\mu, \sigma)$ of the two distributions. Then the probability of observing the data with parameters $\theta$ is called the likelihood.
#
# $$ p(X|\theta) = p{ ( \overrightarrow{\text{x}}_1,\overrightarrow{\text{x}}_2, \dots,\overrightarrow{\text{x}}_N | \theta)} $$
#
# For independent Gaussian samples:
# $$ \boxed {p(X) = \prod_{i=1}^N p(\overrightarrow{\text{x}}_i | \overrightarrow{\mu}_i, \Sigma_i)} $$
#
# For identically distributed samples (same $\overrightarrow{\mu}, \Sigma$ for all $i$):
#
# $$ \boxed{p(X) = \prod_{i=1}^N p(\overrightarrow{\text{x}}_i | \overrightarrow{\mu}, \Sigma) }$$
#
# #### Negative Log-Likelihood
#
# 6. However, rather than maximizing the likelihood directly, we minimize the negative log-likelihood obtained by taking the log:
#
# $$
# \boxed{-\sum_{i=1}^N \log{\mathsf{p}}(\overrightarrow{\text{x}}_i | \overrightarrow{\mu}, \Sigma) = -\sum_{i=1}^N \log \frac{1}{{2\pi}^{\frac{D}{2}}\sqrt
# {|\Sigma|}} exp \left[-\frac{1}{2}(\overrightarrow{\text{x}}_i - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}}_i - \overrightarrow{\mu}) \right]}
# $$
#
#
# #### Finding vector $\overrightarrow{\mu}$ ($\mu_1, \mu_2$) by maximizing the likelihood:
#
# 7. __Max over $\mu$__: set the derivative of the negative log-likelihood with respect to $\mu$ to zero
# $$
# \frac{\partial}{\partial \mu} \left[-\sum_{i=1}^N \log \frac{1}{{2\pi}^{\frac{D}{2}}\sqrt
# {|\Sigma|}} exp \left[-\frac{1}{2}(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu}) \right]\right] = 0
# $$
#
# $$
# \therefore \frac{\partial}{\partial \mu} \left[\sum_{i=1}^N -\frac{D}{2}\log {2} \pi - \frac{1}{2}\log|\Sigma| -\frac{1}{2}(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu}) \right]
# $$
#
# $$
# \frac{\partial \overrightarrow{\text{x}}^T\overrightarrow{\text{x}} }{\partial \overrightarrow{\text{x}}} = 2 \overrightarrow{\text{x}}^T \Longrightarrow \frac{\partial}{\partial \mu} (\overrightarrow{\text{x}} - \overrightarrow{\mu})^T (\overrightarrow{\text{x}} - \overrightarrow{\mu}) = {2} (\overrightarrow{\text{x}} - \overrightarrow{\mu})^T
# $$
#
# $$
# \sum_{i=1}^N {\frac{1}{2}} \times {2} (\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} = \overrightarrow{\text{0}}
# $$
#
# Hence
# $$
# \therefore \boxed {\overrightarrow{\mu} = \frac{1}{N}\sum_{i=1}^N \overrightarrow{\text{x}_i}}
# $$
#
# #### Finding matrix $\Sigma$ ($\sigma_1, \sigma_2$) by maximizing the likelihood $\mathbf{\Sigma} = \left[\begin{array}{cc}
# \sigma_1^2 & 0 \\
# 0 & \sigma_2^2
# \end{array}\right]$
#
#
# 8. __Max over $\Sigma^{-1}$__ using trace properties. Rewrite the log-likelihood using the __"Trace Trick"__ and let $l$ be:
#
# $$
# l = \sum_{i=1}^N -\frac{D}{2}\log {2} \pi - \frac{1}{2}\log|\Sigma| -\frac{1}{2}(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu})
# $$
#
# $$
# \therefore -\frac{ND}{2}\log {2} \pi + \frac{N}{2}\log|\Sigma^{-1}| -\frac{1}{2}\sum_{i=1}^N \mathrm{Tr} \left[(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu}) \right]
# $$
#
# $$
# \therefore -\frac{ND}{2}\log {2} \pi + \frac{N}{2}\log|\Sigma^{-1}| -\frac{1}{2}\sum_{i=1}^N \mathrm{Tr} \left[(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T (\overrightarrow{\text{x}} - \overrightarrow{\mu})\Sigma^{-1} \right]
# $$
#
# Let $$ A = \Sigma^{-1}$$
# $$
# \therefore -\frac{ND}{2}\log {2} \pi + \frac{N}{2}\log|A| -\frac{1}{2}\sum_{i=1}^N \mathrm{Tr} \left[(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T (\overrightarrow{\text{x}} - \overrightarrow{\mu})A \right]
# $$
#
# Since $\frac {\partial \log{|A|}}{\partial {A}} = (A^{-1})^T ; \frac{\partial \mathrm{Tr} (AB)}{\partial {A}} = B^T $
#
# $$
# \frac {\partial {l}}{\partial {A}} = -0 + \frac{N}{2}(A^{-1})^T - \frac{1}{2}\sum_{i=1}^N \left[(\overrightarrow{\text{x}} - \overrightarrow{\mu})(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T\right]^T
# $$
#
# $$
# \frac{N}{2}\Sigma - \frac{1}{2}\sum_{i=1}^N (\overrightarrow{\text{x}} - \overrightarrow{\mu})(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T
# $$
#
# $$
# \therefore \frac {\partial {l}}{\partial {A}} = 0 \Longrightarrow \boxed {\Sigma = \frac{1}{N}\sum_{i=1}^N(\overrightarrow{\text{x}}_i - \overrightarrow{\mu})(\overrightarrow{\text{x}}_i - \overrightarrow{\mu})^T}
# $$
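# A quick numerical check of the boxed estimator (a sketch on synthetic data): the ML covariance is the average of the centered outer products, which for large N recovers the true covariance:

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true = np.array([0.0, 1.0])
cov_true = np.array([[0.25, 0.0], [0.0, 2.25]])
X = rng.multivariate_normal(mu_true, cov_true, size=200_000)

# mu = (1/N) sum_i x_i ;  Sigma = (1/N) sum_i (x_i - mu)(x_i - mu)^T
mu_hat = X.mean(axis=0)
centered = X - mu_hat
cov_hat = centered.T @ centered / len(X)  # sums the outer products in one matrix product
```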
# ## I.1.2 Programming
# #### Code on the following dataset to display the Gaussian distribution using maximum likelihood
# ###### Import Respective Libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from matplotlib import cm
from scipy.stats import multivariate_normal
# ###### Load Data
# +
X = loadmat('dataNotebook1_Ex1.mat')['X']
plt.scatter(X[:,0], X[:,1])
plt.show()
# -
# #### 5. Once you have your estimates for the parameters of the Gaussian distribution, plot the level lines of that distribution on top of the points by using the lines below.
# ### Solution
# ### Please note that my solution to the above question also includes the outliers in the Gaussian distribution.
# ##### Compute $\overrightarrow{\mu}, \sigma^2$ and use them for Scipy function multivariate_normal.pdf( )
# $$
# \boxed {\overrightarrow{\mu} = \frac{1}{N}\sum_{i=1}^N \overrightarrow{\text{x}_i}}
# $$
def compute_mu_scipy(X):
N = len(X)
mu = (1/N)*np.sum(X)
return mu
def multivariate_normal_pdf_scipy(X):
x1 = np.linspace(0, 1.85, 100)
x2 = np.linspace(0.25, 2.5, 100)
xx1, xx2 = np.meshgrid(x1, x2)
xmesh = np.vstack((xx1.flatten(), xx2.flatten())).T
mu1 = compute_mu_scipy(X[:,0])
mu2 = compute_mu_scipy(X[:,1])
print("mu1 is: {} \nmu2 is: {}".format(mu1, mu2))
sigma1 = np.std(X[:,0])
sigma2 = np.std(X[:,1])
sigma = np.zeros((2,2))
sigma[0,0] = sigma1**2
sigma[1,1] = sigma2**2
print("Sigma1 is: {} \nSigma2 is: {} \nSigma Vector is: \n{}".format(sigma1, sigma2, sigma))
y = multivariate_normal.pdf(xmesh, mean=[mu1,mu2], cov=sigma)
print("Returned Y is: ",y)
return x1,x2,xx1, xx2, y
def plot_scipy_MND(X):
x1,x2,xx1, xx2, y = multivariate_normal_pdf_scipy(X)
plt.scatter(X[:,0], X[:,1])
plt.contourf(xx1, xx2, np.reshape(y, (100, 100)), zdir='z', offset=-0.15, cmap=cm.viridis, alpha=0.5)
plt.show()
plot_scipy_MND(X)
# ##### From Professor: Solution should look like this
from IPython.display import Image
#<img src="solution_gaussian.png" width="400" />
Image('solution_gaussian.png')
# ### Optional Additional Work for Q 1.1 without using Scipy Library
#
# ##### Extra Optional Work: Compute $\overrightarrow{\mu}$
# $$
# \boxed {\overrightarrow{\mu} = \frac{1}{N}\sum_{i=1}^N \overrightarrow{\text{x}_i}}
# $$
def compute_mu(X):
N = len(X)
mu = (1/N)*np.sum(X)
# mu = mu.reshape(-1,1)
return mu, N
# ##### Extra Optional Work: Compute $\Sigma$
# $$\boxed {\Sigma = \frac{1}{N}\sum_{i=1}^N(\overrightarrow{\text{x}}_i - \overrightarrow{\mu})(\overrightarrow{\text{x}}_i - \overrightarrow{\mu})^T}
# $$
def compute_sigma(X):
mu, N = compute_mu(X)
sigma = (1/N)*(X - mu)*(X-mu).T
return sigma
# ##### Extra Optional Work: Multivariate Gaussian Distribution
# $$ \boxed {p(\overrightarrow{\text{x}}|\overrightarrow{\mu}, \Sigma ) = \frac{1}{\sqrt{{2\pi}^D
# |\Sigma|}} exp \left[-\frac{1}{2}(\overrightarrow{\text{x}} - \overrightarrow{\mu})^T \Sigma^{-1} (\overrightarrow{\text{x}} - \overrightarrow{\mu}) \right]} $$
# +
def multivariate_normal_pdf(X):
X = X.reshape(-1,1)
mu, N = compute_mu(X)
sigma = compute_sigma(X)
sigma_determinant = np.linalg.det(sigma)
sigma_inverse = np.linalg.pinv(sigma)
mu = mu.reshape(-1,1)
instances, columns = sigma.shape
# first_denominator = (2 * np.pi)**(np.true_divide(instances,2)) * np.sqrt(sigma_determinant)
first_denominator = np.sqrt(((2 * np.pi)**(instances))*sigma_determinant)
exponential_nominator = -(1/2) * (X - mu).T * sigma_inverse * (X - mu)
result = (np.true_divide(1, first_denominator)) * np.exp(exponential_nominator)
return result, sigma
# +
def solve_for_results():
value = 100
X = np.linspace(0, 1.85, value)
Y = np.linspace(0.25, 2.5, value)
XX, YY = np.meshgrid(X, Y)
data = [X, Y]
Z = []
for i in data:
        z, sigma = multivariate_normal_pdf(i)
Z.append(z)
return X,Y,Z,sigma
def plot_results():
    X, Y, Z, sigma = solve_for_results()
    fig = plt.figure(figsize = (10,10))
    ax = fig.add_subplot(projection='3d')
    ax.plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=1, antialiased=True,
                    cmap=cm.viridis)
    cset = ax.contourf(X, Y, Z, zdir='z', offset=-0.15, cmap=cm.viridis)
    # Adjust the limits, ticks and view angle
    ax.set_zlim(-0.15,0.5)
    ax.set_zticks(np.linspace(0,0.2,5))
    ax.view_init(20, 25)
    ax.set_xlabel('X')
    ax.set_ylabel('Y')
    ax.set_title('Multivariate Gaussian Sigma = {}'.format(sigma))
    plt.show()
plt.show()
# solve_for_results()
# -
# ## 1.2. We consider the following linear regression problem. (Total 5pts)
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
X_original = loadmat('MidTermAssignment_dataEx2.mat')['MidTermAssignment_dataEx2']
plt.scatter(X_original[:,0], X_original[:,1])
plt.show()
# -
# ## Questions 1.2/2.1/2.2
# ### Solve the $\ell_2$ regularized linear regression problem __through the normal equations__ (be careful that you have to take the $\ell_2$regularization into account). Then double-check your solution by comparing it with the regression function from scikit learn. Plot the result below.
# ## Solution
# ### Mathematical Base
# 1. Loss Function Equation
# $$
# l(\beta) = \sum_{i=1}^N(t^{(i)} - (\overrightarrow{\text{x}}^{(i)})^T\overrightarrow{\beta})^2
# $$
#
# Vectorized Form
# $$
# \sum_{i=1}^N (V_i)^2 = \overrightarrow{\text{v}}^T\overrightarrow{\text{v}} \Longrightarrow l(\beta) =(\overrightarrow{\text{t}} - X\overrightarrow{\beta})^T(\overrightarrow{\text{t}} - X\overrightarrow{\beta})
# $$
# 2. Normal Equation: After taking derivative of Loss func i.e. $l(\beta)$, Vectorized Normal Equ is
# $$
# \overrightarrow{\beta} = (X^TX)^{-1}X^T\overrightarrow{\text{t}}
# $$
#
# 3. Ridge Regularized Normal Equation:
# $$
# \overrightarrow{\beta} = \left[(X^TX + \lambda I)^{-1}X^T\overrightarrow{\text{t}}\right]
# $$
# ### Loading Data
X = np.vstack(X_original[:,0])
ones = np.vstack(np.ones(X.shape))
X = np.hstack((ones,X))
target = np.vstack(X_original[:,1])
print("Shape of X: {} \nShape of target: {}".format(X.shape, target.shape))
def prediction(X, beta):
result = np.dot(X, beta)
return result
# ### Non-Regularized Normal Equation
# +
def Vectorized_closed_form(X, target):
target = np.mat(target)
left_matrix = np.linalg.inv(np.dot(X.T, X))
right_matrix = np.dot(X.T, target)
beta = np.dot(left_matrix,right_matrix)
print("Our Non-regularized beta is: \n{}".format(beta))
return beta
beta_1 = Vectorized_closed_form(X, target)
print("Shape of returned predict array",prediction(X, beta_1).shape)
print("Non-Regularized Normal Equation yields following Regression")
plt.figure()
plt.scatter(X_original[:,0], target)
plt.plot(X_original[:,0], prediction(X, beta_1), color = 'red')
plt.show()
# -
# ### Regularized Normal Equation with multiple Lambda $ \lambda $ Values
# +
def Regularized_Vectorized_closed_form(X, target,lambda0):
# lambda0 = 1
target = np.mat(target)
    left_matrix = np.linalg.inv(np.dot(X.T, X) + lambda0 * np.identity(X.shape[1]))
right_matrix = np.dot(X.T, target)
beta = np.dot(left_matrix,right_matrix)
print("Our Regularized beta with Lambda value {} is: \n{}".format(lambda0, beta))
return beta
lambda0 = [0.01,0.1,1,10]
for i in lambda0:
beta_2 = Regularized_Vectorized_closed_form(X, target,i)
print("Shape of returned predict array",prediction(X, beta_2).shape)
print("Regularized Normal Equation with Lambda value {} yields following Regression".format(i))
plt.figure()
plt.scatter(X_original[:,0], target)
plt.plot(X_original[:,0], prediction(X, beta_2), color = 'red')
plt.show()
# -
# ### Verification from Scikit Learn Model
# +
from sklearn.linear_model import Ridge
def Scikit_Ridge_Linear_Regression(X_original, X, target):
bias_list = [0,0.5, 1]
for bias in bias_list:
# ==================================Building Model==============================================================
model = Ridge(alpha = 0.1)
fit = model.fit(X,target)
ridgeCoefs = model.coef_
        predict = model.predict(X)
y_hat = np.dot(X, ridgeCoefs.T)
print("Our Fit Model \n",fit)
print("Our Coefficients with (current) Bias value '{}' are: \n{}".format(bias, ridgeCoefs+bias))
        print("predict from scikit model: ",predict)
print("Following is the Scikit Normal Equation/Ridge Linear Regression with Bias value '{}'".format(bias))
# ==================================Plot Graph==============================================================
plt.figure()
plt.scatter(X_original[:,0], target)
plt.plot(X_original[:,0], y_hat+bias, color = 'red')
plt.show()
Scikit_Ridge_Linear_Regression(X_original, X, target)
# -
# ## Questions 2.3
# 2.3. __Kernel Ridge regression__. Given the 'Normal Equations' solution to the regularized regression model, we now want to turn the regression model into a formulation over kernels.
#
#
# ## __2.3.1. Start by showing that this solution can read as__
#
# $$\mathbf{\beta} = \mathbf{X}^T\left(\mathbf{K} + \lambda\mathbf{I}_N\right)^{-1}\mathbf{t}$$
#
# where $\mathbf{K}$ is the kernel matrix defined from the scalar product of the prototypes, i.e. $\mathbf{K}_{i,j} = \kappa(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}) = (\mathbf{x}^{(i)})^T(\mathbf{x}^{(j)})$.
#
# ## Solution 2.3.1
# 1. Substitute K into the Equation:
# $$\mathbf{K} = \mathbf{X}\mathbf{X}^T$$
# ## Extra Work: (Optional) Proof
# 1. Our Normal Equation is:
# $$
# \overrightarrow{\beta} = (X^TX)^{-1}X^T\overrightarrow{\text{t}}
# $$
#
# 2. Suppose $(X^TX)^{-1} $ exists, then let $ \widehat{\beta}_{ML}$ be:
#
#
# $$
# \widehat{\beta}_{ML} = (X^TX)^{-1}X^T\overrightarrow{\text{t}}
# $$
# $$
# \therefore (X^TX)(X^TX)^{-1}(X^TX)^{-1}X^T\overrightarrow{\text{t}}
# $$
# $$
# \therefore (X^TX)(X^TX)^{-2}X^T\overrightarrow{\text{t}}
# $$
#
# $$
# \widehat{\beta}_{ML} \simeq X^T\alpha
# $$
# where $\alpha = X(X^TX)^{-2}X^T\overrightarrow{\text{t}}$
# 3. Get __Gram Matrix__ if we want to predict the y values from X values:
# $$
# X\widehat{\beta}_{ML} = XX^T\alpha = K\alpha
# $$
#
#
# 4. Let our Ridge Regularized Normal Equation be $\widehat{\beta}_{MAP}$:
# $$
# \widehat{\beta}_{MAP} = (X^TX + \lambda I)^{-1}X^T\overrightarrow{\text{t}}
# $$
# $$
# (X^TX + \lambda I)\widehat{\beta}_{MAP} = X^T\overrightarrow{\text{t}}
# $$
# $$
# X^TX\widehat{\beta}_{MAP} + \lambda\widehat{\beta}_{MAP} = X^T\overrightarrow{\text{t}}
# $$
#
# $$
# \lambda\widehat{\beta}_{MAP} = X^T\left(\overrightarrow{\text{t}} - X\widehat{\beta}_{MAP}\right)
# $$
#
# $$
# \widehat{\beta}_{MAP} = \lambda^{-1} X^T\left(\overrightarrow{\text{t}} - X\widehat{\beta}_{MAP}\right)
# $$
# $$
# \widehat{\beta}_{MAP} = X^T\alpha
# $$
# where $ \alpha =\lambda^{-1}\left(\overrightarrow{\text{t}} - X\widehat{\beta}_{MAP}\right) $
#
# 5. Solve for $\alpha$: substitute $\widehat{\beta}_{MAP} = X^T\alpha$ back into the definition of $\alpha$ and rearrange:
# $$
# \lambda \alpha = \overrightarrow{\text{t}} - X\widehat{\beta}_{MAP}
# $$
# $$
# \lambda \alpha = \overrightarrow{\text{t}} - XX^T\alpha
# $$
# $$
# \left(XX^T + \lambda \mathbf{I}_{N}\right) \alpha = \overrightarrow{\text{t}}
# $$
# $$
# \alpha = \left(XX^T + \lambda \mathbf{I}_{N}\right)^{-1}\overrightarrow{\text{t}}
# $$
# Writing $XX^T$ as $K$:
# $$
# \alpha = \left(K + \lambda \mathbf{I}_{N}\right)^{-1}\overrightarrow{\text{t}}
# $$
#
# 6. Substitute this expression for $\alpha$ into $\widehat{\beta}_{MAP} = X^T\alpha$:
# $$
# \beta = X^T\left(K + \lambda \mathbf{I}_{N}\right)^{-1}\overrightarrow{\text{t}}
# $$
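# As a quick sanity check (not part of the assignment), the dual and primal solutions can be compared numerically on toy data; the push-through identity $X^T(XX^T+\lambda I)^{-1} = (X^TX+\lambda I)^{-1}X^T$ makes them agree exactly:

```python
import numpy as np

# Toy-data check that the kernelized solution X^T (K + lambda I)^{-1} t
# equals the primal ridge solution (X^T X + lambda I)^{-1} X^T t.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
t = rng.normal(size=6)
lam = 0.5
K = X @ X.T
beta_dual = X.T @ np.linalg.inv(K + lam * np.eye(6)) @ t
beta_primal = np.linalg.inv(X.T @ X + lam * np.eye(3)) @ (X.T @ t)
print(np.allclose(beta_dual, beta_primal))  # True
```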
# ## Question __2.3.2.__
# Given this, the classifier can read as $f(\mathbf{x}) = \mathbf{\beta}^T\mathbf{x} = \sum_{i=1}^N \alpha_i \kappa(\mathbf{x}, \mathbf{x}_i)$. What are the $\alpha$ in this case?
# ## Solution 2.3.2
#
# <div align="center"> The $\alpha$ are the dual weights of the classifier, one per training example: $\alpha = \left(\mathbf{K} + \lambda\mathbf{I}_N\right)^{-1}\mathbf{t}$ </div>
#
# ## Question __2.3.3.__
# We will apply this idea to text data. Using kernels with text data is interesting because it is usually easier to compare documents than to find appropriate features to represent those documents. The file 'headlines_train.txt' contains a few headlines, some of them being about finance, others being about weather forecasting. Use the first group of lines below to load those lines and their associated targets (1/0).
# ## Solution 2.3.3
# +
# Start by loading the file using the lines below
import numpy as np
def load_text_train_data():
f = open('headlines_train.txt', "r")
lines = f.readlines()
f.close()
sentences = ['Start']
target = [0]
    for l in np.arange(len(lines)-2):
        if l%2 == 0:  # every other line in the file holds a headline
            lines_tmp = lines[l]
            lines_tmp = lines_tmp[:-1]  # strip the trailing newline
            sentences.append(lines_tmp)
            # the target (1/0) is the last non-space character of the line
            if lines_tmp[-1] == ' ':
                target.append(float(lines_tmp[-2]))
            else:
                target.append(float(lines_tmp[-1]))
sentences = sentences[1:]
target = target[1:]
print("Example of Sentence: {} \
\n\nExamples of Target: {} ".format(sentences[4], target[:10]))
return sentences,target
sentences, target = load_text_train_data()
# -
# ## Question __2.3.4.__
# Now use the lines below to define the kernel. The kernel is built by generating a TF-IDF vector for each sentence and comparing those sentences through a cosine similarity measure. The variable 'kernel' holds the kernel matrix, i.e. $\kappa(i,j) = \frac{\phi_i^T\phi_j}{\|\phi_i\|\|\phi_j\|}$ where $\phi_i$ encodes the tf-idf vector of sentence $i$. Use the lines below to compute the kernel matrix.
# ## Solution 2.3.4
# +
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics.pairwise import pairwise_kernels
import matplotlib.pyplot as plt
model = TfidfVectorizer(max_features=100, stop_words='english',
decode_error='ignore')
TF_IDF = model.fit_transform(sentences)
feature_names = model.get_feature_names()
kernel = cosine_similarity(TF_IDF)
print("Our Model \n {}".format(model))
print("\n")
print("TF-IDF Shape: {}".format(TF_IDF.shape))
print("TF-IDF Example: \n{}".format(TF_IDF[5]))
print("\nFeature Names: \n {}".format(feature_names))
print("\n")
print("Shape of Kernel Matrix (an array of shape (X,Y)): {}\
\n \nA example of Kernel Matrix Value (15): \n {}".format(kernel.shape, kernel[15]))
plt.imshow(kernel)
plt.show()
# -
stop_word_list = model.get_stop_words()
print("Stop word List Example: \n{}\n".format(stop_word_list))
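# To make the kernel entries concrete, here is the cosine-similarity formula evaluated by hand on two small made-up vectors (standing in for tf-idf rows):

```python
import numpy as np

# kappa(i, j) = phi_i^T phi_j / (||phi_i|| ||phi_j||), on toy vectors
phi_i = np.array([1.0, 0.0, 2.0])
phi_j = np.array([1.0, 1.0, 0.0])
kappa = phi_i @ phi_j / (np.linalg.norm(phi_i) * np.linalg.norm(phi_j))
print(kappa)  # 1/sqrt(10), about 0.3162
```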
# ## Question __2.3.4.__
# Once you have the kernel matrix, compute the weights $\alpha$ of the classifier $y(\mathbf{x}) = \sum_{i\in \mathcal{D}}\alpha_i \kappa(\mathbf{x}, \mathbf{x}_i)$.
# ## Solution 2.3.4
# $$ \mathbf{\beta} = \mathbf{X}^T\left(\mathbf{K} + \lambda\mathbf{I}_N\right)^{-1}\mathbf{t} $$
# $\mathbf{K}_{i,j} = \kappa(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}) = (\mathbf{x}^{(i)})^T(\mathbf{x}^{(j)})$
#
# $ \mathbf{K} = \mathbf{X}\mathbf{X}^T$
# +
# compute the alpha weights
def alpha_weights(X, kernel, target):
    lambda0 = 0.1  # regularisation strength lambda
    # dual weights: alpha = (K + lambda * I_N)^{-1} t
    alpha = np.linalg.inv(kernel + lambda0*np.identity(X.shape[0])) @ target
    print("Shape of weights: ", alpha.shape, "\n")
    return alpha
weights = alpha_weights(TF_IDF, kernel,target)
# -
# ## Question __2.3.5.__
# Now that you have the weights, we want to apply the classifier to a few new headlines. Those headlines are stored in the file 'headlines_test.txt'. Use the lines below to load those sentences and compute their TF-IDF representation, then apply the classifier $y(\mathbf{x}) = \sum_{i\in \mathcal{D}}\alpha_i \kappa(\mathbf{x}, \mathbf{x}_i)$.
# ## Solution 2.3.5
# Start by loading the file using the lines below
import numpy as np
def load_data_text_test():
f = open('headlines_test.txt', "r")
lines = f.readlines()
f.close()
sentences_test = ['Start']
for l in np.arange(len(lines)):
if l%2 == 0:
lines_tmp = lines[l]
lines_tmp = lines_tmp[:-1]
sentences_test.append(lines_tmp)
sentences_test = sentences_test[1:]
print("Example of Test Sentence: \n{}\n".format(sentences_test[3]))
return sentences_test
sentences_test = load_data_text_test()
# +
'''Compute test_F and print relevant information'''
tfidf_test = model.transform(sentences_test)
rows = tfidf_test.shape[0]
# pad with zero columns in case fewer than 100 features are present
test_F = np.hstack((tfidf_test.todense(), np.zeros((rows, 100-np.shape(tfidf_test.todense())[1]))))
print("Our Model_test \n {}".format(model))
print("\n")
print("TF-IDF_test Shape: {}".format(tfidf_test.shape))
print("TF-IDF_test Example: \n{}".format(tfidf_test[2]))
print("\nShape of test_F: {}".format(test_F.shape))
# -
# ## Question __2.3.6.__
# Once you have the tf-idf representations stored in the matrix test_F (size 4 by 100 features), you can obtain the values $\kappa(\mathbf{x}, \mathbf{x}_i)$ needed by the final classifier $y(\mathbf{x}) = \sum_{i\in \mathcal{D}}\alpha_i \kappa(\mathbf{x}, \mathbf{x}_i)$, and hence the targets of the new sentences, by computing the cosine similarity of the new "test" tf-idf vectors with the "training" tf-idf vectors you computed earlier. Each of those cosine similarities gives one entry of $\kappa(\mathbf{x}, \mathbf{x}_i)$ (here $\mathbf{x}$ denotes any of the fixed test sentences). Once you have those similarities, compute the target from your $\alpha$ values as $t(\mathbf{x}) = \sum_{i\in \text{train}} \alpha_i\kappa(\mathbf{x}, \mathbf{x}_i)$. Print those targets below.
# ## Solution 2.3.6
# +
tfidf_test = model.transform(sentences_test)
'''Kernel Test Documents'''
kernel_test = cosine_similarity(test_F,TF_IDF)
'''Non-binary Target Values'''
final_target = np.dot(weights,kernel_test.T)
target_test_final = []
for tar in final_target:
    target_test_final.append(1 if tar >= 0.5 else 0)
print("Shape of Kernel for Test Documents: {}\n".format(kernel_test.shape))
print("These are non-binary Target values {} before converting them \ninto binary numbers, 0's and 1's \n".format(final_target))
print("\tFinal Targets for Test Documents are {}; each value for each Document (sentence).\n".format(target_test_final))
print("\033[1m"+"\t\tIn our case, 0 = Weather/Climate | 1 = Finance/Business"+"\033[0m")
identity_label = ["Climate", "Finance","Climate", "Finance"]
for tense, label, identity in zip(sentences_test,target_test_final,identity_label):
print("\nOur Document (Sentence) is: \n{}. \n\tand\
its target is {} which is {} in our case".format(tense, label,identity))
# -
# ### Please reach out if anything is unclear
# ## PDF of this file is attached
# # END OF CODE
| Midterm/Gaussian_distribution_Normal_Equ_TF_IDF/MD_Notebook1_Wajahat_submission.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: env
# language: python
# name: env
# ---
import process_output
from PIL import Image, ImageEnhance, ImageFilter
import requests
from io import BytesIO
import imgkit
import json
# +
def get_unsplash_url(client_id, query, orientation):
root = 'https://api.unsplash.com/'
path = 'photos/random/?client_id={}&query={}&orientation={}'
search_url = root + path.format(client_id, query, orientation)
api_response = requests.get(search_url)
data = api_response.json()
api_response.close()
#print(json.dumps(data, indent=4, sort_keys=True))
return data['urls']['regular']
client_id = 'L-CxZwGQjlKToJ1xdSiBCnj1gAyUJ0nBLKYqaQOXOAg'
query = 'nature dark'
orientation = 'landscape'
image_url = get_unsplash_url(client_id, query, orientation)
# +
quote_text = process_output.get_quote()
image_response = requests.get(image_url)
img = Image.open(BytesIO(image_response.content))
image_response.close()
# resize down until either a desired width or height is achieved, then crop the other dimension
# to achieve a non-distorted version of the image with desired dimensions
def resize_crop(im, desired_width=800, desired_height=600):
width, height = im.size
if width/height > desired_width/desired_height:
im.thumbnail((width, desired_height))
else:
im.thumbnail((desired_width, height))
width, height = im.size
box = [0, 0, width, height] # left, upper, right, lower
if width > desired_width:
box[0] = width/2 - desired_width/2
box[2] = width/2 + desired_width/2
if height > desired_height:
box[1] = height/2 - desired_height/2
box[3] = height/2 + desired_height/2
im = im.crop(box=box)
return im
def reduce_color(im, desired_color=0.5):
converter = ImageEnhance.Color(im)
im = converter.enhance(desired_color)
return im
def gaussian_blur(im, radius=2):
im = im.filter(ImageFilter.GaussianBlur(radius=radius))
return im
img = resize_crop(img)
img = reduce_color(img)
#img = gaussian_blur(img)
img.save('backdrop.jpg')
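# The center-crop arithmetic inside resize_crop can be exercised on its own, without Pillow; this sketch reproduces just the box computation:

```python
# Same left/upper/right/lower box logic as resize_crop, on plain numbers
def center_crop_box(width, height, desired_width=800, desired_height=600):
    box = [0, 0, width, height]  # left, upper, right, lower
    if width > desired_width:
        box[0] = width / 2 - desired_width / 2
        box[2] = width / 2 + desired_width / 2
    if height > desired_height:
        box[1] = height / 2 - desired_height / 2
        box[3] = height / 2 + desired_height / 2
    return box

print(center_crop_box(1000, 600))  # [100.0, 0, 900.0, 600]
```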
# +
html_doc = None
with open('image_template.html', 'r') as f:
html_doc = f.read()
html_doc = html_doc.replace('dynamictext', quote_text)
#print(len(quote_text))
def get_font_size(text):
size = len(text)
if size < 40:
return '44'
if size < 75:
return '36'
return '30'
html_doc = html_doc.replace('dynamicfontsize', get_font_size(quote_text))
with open('image_out.html', 'w') as f:
f.write(html_doc)
# -
imgkit.from_file('image_out.html', 'image_out.jpg', options={'width' : 800,
'height' : 600,
'quality' : 100,
'encoding' : 'utf-8'
})
| .ipynb_checkpoints/prepare-image-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Vmap - write code for one sample point (datum), automatically batch it!
# This is what enables per sample gradient - example below
# +
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
key = random.PRNGKey(0)
# +
mat = random.normal(key, (150, 100))
batched_x = random.normal(key, (10, 100))
def apply_matrix(v):
return jnp.dot(mat, v)
# +
def naively_batched_apply_matrix(v_batched):
return jnp.stack([apply_matrix(v) for v in v_batched])
print('Naively batched')
# %timeit naively_batched_apply_matrix(batched_x).block_until_ready()
# +
@jit
def batched_apply_matrix(v_batched):
return jnp.dot(v_batched, mat.T)
print('Manually batched')
# %timeit batched_apply_matrix(batched_x).block_until_ready()
# +
@jit
def vmap_batched_apply_matrix(v_batched):
return vmap(apply_matrix)(v_batched)
print('Auto-vectorized with vmap')
# %timeit vmap_batched_apply_matrix(batched_x).block_until_ready()
# -
# # Deep learning per sample gradients
def predict(params, inputs):
for W, b in params:
outputs = jnp.dot(inputs, W) + b
inputs = jnp.tanh(outputs) # inputs to the next layer
return outputs
def loss(params, inputs, targets):
preds = predict(params, inputs)
return jnp.sum((preds - targets)**2)
grad_loss = jit(grad(loss)) # compiled gradient evaluation function
perex_grads = jit(vmap(grad_loss, in_axes=(None, 0, 0))) # fast per-example grads
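# A small end-to-end check of the per-example-gradient pattern above (the layer sizes here are arbitrary toy choices, not from the original): each leaf of the returned gradient pytree gains a leading batch axis.

```python
import jax.numpy as jnp
from jax import grad, jit, vmap, random

key = random.PRNGKey(42)
k1, k2, k3 = random.split(key, 3)
# tiny 3 -> 4 -> 2 network on a batch of 8 examples
params = [(random.normal(k1, (3, 4)), jnp.zeros(4)),
          (random.normal(k2, (4, 2)), jnp.zeros(2))]
inputs = random.normal(k3, (8, 3))
targets = jnp.ones((8, 2))

def predict(params, inputs):
    for W, b in params:
        outputs = jnp.dot(inputs, W) + b
        inputs = jnp.tanh(outputs)
    return outputs

def loss(params, inputs, targets):
    return jnp.sum((predict(params, inputs) - targets) ** 2)

grad_loss = jit(grad(loss))
perex_grads = jit(vmap(grad_loss, in_axes=(None, 0, 0)))
g = perex_grads(params, inputs, targets)
# every leaf mirrors params, with a leading axis of size 8
print(g[0][0].shape, g[1][1].shape)
```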
# https://jax.readthedocs.io/en/latest/notebooks/quickstart.html
#
# https://github.com/google/jax
| examples/jax/notebooks/Demos/vmap.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Imports
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
import seaborn as sns
from mpl_toolkits.axes_grid1 import Size, Divider
# # Read data
# +
folder = 'plot_data/'
filename = 'rec_line_cons_equ_bed_hreg2.nc'
with xr.open_dataset(folder + filename) as ds:
dataset = ds
true_bed_h = ds.total_true_bed_h.data
true_sfc_h = ds.true_surface_h.data
guessed_bed_h = ds.guessed_bed_h.data
guessed_sfc_h = ds.surface_h.data
total_distance = ds.coords['total_distance'].data
ice_mask = ds.ice_mask.data
ice_distance = total_distance[ice_mask]
first_guess_bed_h = ds.first_guessed_bed_h.data
first_guess_sfc_h = ds.first_guess_surface_h.data
# -
dataset
# # start plot for exploring data
# +
figsize=(20,14)
fig = plt.figure(figsize=figsize, facecolor='white')
ax = fig.subplots()
ax.plot(total_distance, true_bed_h, '.-', label='true bed')
ax.plot(total_distance, true_sfc_h, '.-', label='true sfc')
#ax.plot(ice_distance, first_guess_bed_h, label='first guess')
#ax.plot(total_distance, first_guess_sfc_h, label='first guess')
#for i in np.arange(0,5):
# ax.plot(ice_distance, guessed_bed_h[i], label=str(i) + '. Iteration')
# ax.plot(total_distance, guessed_sfc_h[i], label=str(i) + '. Iteration')
ax.plot(ice_distance, guessed_bed_h[1], '.-', label=str(len(guessed_bed_h)) + '. Iteration')
ax.plot(total_distance, guessed_sfc_h[1], '.-', label=str(len(guessed_bed_h)) + '. Iteration')
ax.legend(fontsize=25)
ax.set_xlim([5.5,6.5])
ax.set_ylim([2340, 2800])
# -
# show evolution:
# - first row: first guess, 1. Iteration, 2. Iteration , 3. Iteration
# - second row: 10. Iteration, 20. Iteration, 30. Iteration, 40. Iteration
# # Define colors
colors = sns.color_palette("colorblind")
colors
true_bed_color = list(colors[3]) + [1.]
second_bed_color = list(colors[0]) + [1.]
glacier_color = list(colors[9]) + [.5]
outline_color = [0., 0., 0., 1.]
axis_color = list(colors[7]) + [1.]
# # subfigure plot
def subplot(
ax,
true_bed_h,
true_sfc_h,
second_bed_h,
second_sfc_h,
label,
fontsize,
lw=2,
ms=20,
add_legend=False):
# define index which points should be used
index = np.arange(56, 66)
# plot true bed_h and surface_h
ax.plot(total_distance[index],
true_bed_h[index],
'.-',
lw=lw,
ms=ms,
c=true_bed_color,
label=r'true $b$')
ax.plot(total_distance[index],
true_sfc_h[index],
'.--',
lw=lw,
ms=ms,
c=true_bed_color,
label=r'$s^{end}_o$')
# plot second bed_h and surface_h
ax.plot(total_distance[index],
np.append(second_bed_h, true_bed_h[~ice_mask])[index],
'.-',
lw=lw,
ms=ms,
c=second_bed_color,
zorder=5,
label=r'guessed $b$')
ax.plot(total_distance[index],
second_sfc_h[index],
'.--',
lw=lw,
ms=ms,
c=second_bed_color,
label=r'$s^{end}_m$')
# add glacier polygon
x_use=total_distance[index]
x_polygon = np.concatenate((x_use, x_use[::-1]))
y_polygon = np.concatenate((np.append(second_bed_h, true_bed_h[~ice_mask])[index],
second_sfc_h[index][::-1]))
coord_polygon = np.concatenate((np.expand_dims(x_polygon, axis=1),np.expand_dims(y_polygon, axis=1)), axis=1)
ax.add_patch(Polygon(coord_polygon,
fc=glacier_color,
ec=None,#outline_color,
closed=False,
lw = 0.8,
zorder=1,
label=''))
# add labels for grid points
tick_labels = [r'$trm$', r'$-1$', r'$-2$']
point_indices = [63, 62, 61]
len_label_line = 110
text_y_distance = 2
extra_distance_between_labels = 15
    for i, (tick_label, point_index) in enumerate(zip(tick_labels, point_indices)):
# plot line
ax.plot([total_distance[point_index],
total_distance[point_index]],
[np.min([true_bed_h[point_index], second_bed_h[point_index]]),
true_bed_h[point_index] +
len_label_line +
i * extra_distance_between_labels],
'-',
c=axis_color,
lw=lw-1,
zorder=1)
# add tick label text
ax.text(total_distance[point_index],
true_bed_h[point_index] +
len_label_line +
text_y_distance +
i * extra_distance_between_labels,
tick_label,
fontsize=fontsize,
c=axis_color,
verticalalignment='bottom',
horizontalalignment='center')
# add text with description
ax.text(6.57, 2700,
label,
fontsize=fontsize,
verticalalignment='center',
horizontalalignment='right')
# add visual axis
x_origin = total_distance[index[0]]
z_origin = true_bed_h[index[-1]]
x_len = 0.12
z_len = 50
x_text_setoff = 0.015
z_text_setoff = 10
# add z axis
plt.annotate(text='',
xy=(x_origin, z_origin),
xytext=(x_origin, z_origin + z_len),
arrowprops=dict(arrowstyle='<-',
mutation_scale=25,
color=axis_color,
lw=1),
zorder=1
)
plt.text(x_origin ,z_origin + z_len + z_text_setoff,'z',
horizontalalignment='center',
verticalalignment='center',
fontsize=fontsize + 2,
c=axis_color)
# add x axis
plt.annotate(text='',
xy=(x_origin, z_origin),
xytext=(x_origin + x_len, z_origin),
arrowprops=dict(arrowstyle='<-',
mutation_scale=25,
color=axis_color,
lw=1),
zorder=1
)
plt.text(x_origin + x_len + x_text_setoff, z_origin, 'x',
horizontalalignment='center',
verticalalignment='center',
fontsize=fontsize + 2,
c=axis_color)
# set limits of x and y axis
ax.set_xlim([5.5,6.6])
ax.set_ylim([2340, 2720])
if add_legend:
ax.legend(fontsize=fontsize)
ax.set_xticks([])
ax.set_yticks([])
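# The glacier-polygon construction above (bed heights walked forward, surface heights walked back in reverse) can be checked in isolation with a tiny made-up profile:

```python
import numpy as np

# walk along the bed left-to-right, then back along the surface right-to-left
x = np.array([0.0, 1.0, 2.0])
bed = np.array([10.0, 9.0, 8.0])
sfc = np.array([12.0, 11.0, 10.5])
x_polygon = np.concatenate((x, x[::-1]))
y_polygon = np.concatenate((bed, sfc[::-1]))
coords = np.concatenate((np.expand_dims(x_polygon, axis=1),
                         np.expand_dims(y_polygon, axis=1)), axis=1)
print(coords.shape)  # (6, 2)
```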
# +
fontsize = 25
figsize=(6,5)
fig = plt.figure(figsize=figsize, facecolor='white')
ax = fig.subplots()
subplot(
ax=ax,
true_bed_h=true_bed_h,
true_sfc_h=true_sfc_h,
second_bed_h=first_guess_bed_h,
second_sfc_h=first_guess_sfc_h,
label='first guess',
fontsize=fontsize,
add_legend=False)
# -
# # legend plot
def add_legend(ax,
fontsize,
lw,
ms):
# plot true bed_h and surface_h
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=true_bed_color,
label=r'$b_{t}$')
ax.plot([],
[],
'.--',
lw=lw,
ms=ms,
c=true_bed_color,
label=r'$s^{e}_{o}$')
# plot second bed_h and surface_h
ax.plot([],
[],
'.-',
lw=lw,
ms=ms,
c=second_bed_color,
zorder=5,
label=r'$b$')
ax.plot([],
[],
'.--',
lw=lw,
ms=ms,
c=second_bed_color,
label=r'$s^{e}_{m}$')
ax.legend(loc='center', fontsize=fontsize)
ax.axis('off')
# # Create whole figure
# +
# define some parameters for ploting
lw=2
ms=20
fontsize = 25
fig = plt.figure(figsize=(1,1), facecolor='white')
# define grid
# define fixed size of subplot
subplot_width = 5
subplot_height = 5
subplot_separation_x = .1
subplot_separation_y = .1
# define height of legend
legend_height = 5
#define separation legend subplots
separation_y_legend_subplots = .1
# fixed size in inch
# along x axis x-index for locator
horiz = [Size.Fixed(subplot_width), # 0 1st column subplot
Size.Fixed(subplot_separation_x),
Size.Fixed(subplot_width), # 2 2nd column subplot
Size.Fixed(subplot_separation_x),
Size.Fixed(subplot_width), # 4 3rd column subplot
Size.Fixed(subplot_separation_x),
Size.Fixed(subplot_width), # 6 4th column subplot
]
# y-index for locator
vert = [Size.Fixed(subplot_height), # 0 2nd row subplot
Size.Fixed(subplot_separation_y),
Size.Fixed(subplot_height), # 2 1st row subplot
Size.Fixed(separation_y_legend_subplots),
Size.Fixed(legend_height) # 4 legend
]
rect = (0., 0., 1., 1.) # Position of the grid in the figure
# divide the axes rectangle into grid whose size is specified by horiz * vert
divider = Divider(fig, rect, horiz, vert, aspect=False)
# first guess
ax = fig.subplots()
subplot(
ax=ax,
true_bed_h=true_bed_h,
true_sfc_h=true_sfc_h,
second_bed_h=first_guess_bed_h,
second_sfc_h=first_guess_sfc_h,
label='(a) first guess',
fontsize=fontsize,
lw=lw,
ms=ms)
ax.set_axes_locator(divider.new_locator(nx=0, ny=2))
# add iteration 1, 2, 3
for i, prefix in zip(np.arange(1,4), ['(b) ', '(c) ', '(d) ']):
ax = fig.subplots()
subplot(
ax=ax,
true_bed_h=true_bed_h,
true_sfc_h=true_sfc_h,
second_bed_h=guessed_bed_h[i-1],
second_sfc_h=guessed_sfc_h[i-1],
label=prefix + str(i) + '. Iteration',
fontsize=fontsize,
lw=lw,
ms=ms)
ax.set_axes_locator(divider.new_locator(nx=i*2, ny=2))
# add iteration 10, 20, 30
for column, (i, prefix) in enumerate(zip(np.arange(10,31,10), ['(e) ', '(f) ', '(g) '])):
ax = fig.subplots()
subplot(
ax=ax,
true_bed_h=true_bed_h,
true_sfc_h=true_sfc_h,
second_bed_h=guessed_bed_h[i-1],
second_sfc_h=guessed_sfc_h[i-1],
label=prefix + str(i) + '. Iteration',
fontsize=fontsize,
lw=lw,
ms=ms)
ax.set_axes_locator(divider.new_locator(nx=(column+1)*2, ny=0))
# add legend
ax = fig.subplots()
add_legend(ax=ax,
fontsize=fontsize,
lw=lw,
ms=ms)
ax.set_axes_locator(divider.new_locator(nx=0, ny=0))
fig.savefig('instability_plot.pdf',format='pdf',bbox_inches='tight',dpi=300);
# -
| Fig.3.8_instability/Instability_plot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import bs4 as bs
from urllib.request import urlopen as ureq
from bs4 import BeautifulSoup as soup
import pandas as pd
import geopy
from geopy.geocoders import Nominatim
import re
# +
url_postnr = 'http://www.nr.dk/danmark.html'
# opening up connection, downloading the page of Danish postal codes
postnr = ureq(url_postnr)
postnr_html = postnr.read()
postnr.close()
# -
# html parsing of the postal-code page
postnr_soup = soup(postnr_html, "lxml")
# +
# finding all table rows and appending to a dataframe
postnr_list = [] #list containing all table rows
#loop finding and appending all row in the table to a list
postnr_table_rows = postnr_soup.find_all('tr')
for tr in postnr_table_rows:
td = tr.find_all('td')
row = [i.text for i in td]
postnr_list.append(row)
postnr_df = pd.DataFrame(postnr_list, columns = ['Postnummer', 'Bynavn', 'Gade', 'Firma', 'Land']) # creating dataframe with column names
postnr_df.set_index('Postnummer', inplace = True) #setting postal code as index
postnr_df.head(35)
# -
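# The row-extraction pattern above can be tried on a tiny inline table (made-up rows, using Python's built-in parser instead of lxml) rather than the live page:

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><td>1050</td><td>København K</td></tr>
  <tr><td>8000</td><td>Aarhus C</td></tr>
</table>
"""
rows = []
for tr in BeautifulSoup(html, 'html.parser').find_all('tr'):
    rows.append([td.text for td in tr.find_all('td')])
print(rows)  # [['1050', 'København K'], ['8000', 'Aarhus C']]
```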
| post nr til bydel.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Word Vectorisation Notebook
# Below is the code used for generating various embeddings used in the neural models.
#
# ### Imported libraries
# +
import os
import urllib.request
import numpy as np
import zipfile
import tensorflow as tf
import collections
import random
import h5py
from keras.models import Sequential,Model
from keras.optimizers import RMSprop
from keras.layers import Embedding,LSTM,Dense,Lambda,merge,Input
from keras.callbacks import TensorBoard,ModelCheckpoint,Callback
from keras import backend as K
# -
# ### Functions defined
# ```python
# maybe_download(filename)
# #Downloads a file if not present.
#
# read_data(filename)
# #Extract the first file enclosed in a zip file as a list of words.
#
# build_dataset(words, n_words)
# #Process raw inputs into a dataset.
#
# generate_batches(data, size, contextWidth, negativeSize)
# #Returns batches of input words with their contexts and a set of negative samples.
# ```
#
# +
def maybe_download(url, filename):
"""Download a file if not present."""
if not os.path.exists("./downloads/"+filename):
filename, _ = urllib.request.urlretrieve(url + filename, "./downloads/"+filename)
return filename
def read_data(filename):
"""Extract the first file enclosed in a zip file as a list of words."""
with zipfile.ZipFile("./downloads/"+filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
def build_dataset(words, n_words):
"""Process raw inputs into a dataset."""
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def generate_batches(data, size, contextWidth, negativeSize):
cHalfWidth = int(contextWidth/2)
words = []
contexts = []
negatives = []
index = random.sample(range(cHalfWidth,len(data)-cHalfWidth),size)
for z in index:
context = []
for m in range(-cHalfWidth,cHalfWidth+1):
if m == 0:
words.append([data[z]])
else:
context.append(data[z+m])
contexts.append(context)
negatives.append(random.sample(data,negativeSize))
return([np.array(words),np.array(contexts),np.array(negatives)],[np.array([1]*size),np.array([[0]*negativeSize]*size)])
# -
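# On a toy corpus, the vocabulary-building step of build_dataset behaves like this (a self-contained re-run of the same counting logic):

```python
import collections

words = "the cat sat on the mat the cat".split()
n_words = 4  # keep the 3 most common words plus UNK
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = {word: i for i, (word, _) in enumerate(count)}
data = [dictionary.get(w, 0) for w in words]  # 0 = index of UNK
count[0][1] = data.count(0)
print(dictionary, data)
```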
# ### Downloading Wikipedia text database
filename = maybe_download('http://mattmahoney.net/dc/', 'text8.zip')
vocabulary = read_data('text8.zip')
print('Number of words: ', len(vocabulary))
# ### Parameters for skipgram model
# +
vocabulary_size = 5000
data_index = 0
batch_size = 128
wordvec_dim = 32
skip_window = 3 # How many words to consider left and right.
num_skips = 4 # How many times to reuse an input to generate a label.
context_half = 3
context_size = context_half*2
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
neg_size = 5 # Number of negative examples to sample.
# -
# ### Creating a dictionary and reverse dictionary for word embedding
data, count, dictionary, reverse_dictionary = build_dataset(vocabulary,vocabulary_size)
del vocabulary # Hint to reduce memory.
# ### Saving a .tsv label file for viewing in TensorBoard
os.mkdir('./logs')
with open('./logs/word2vec_label.tsv', 'w') as fr:
for i in range(vocabulary_size):
fr.write(reverse_dictionary[i]+'\n')
# ### Generate training/validation batches
X,Y = generate_batches(data, 500000, context_size, neg_size)
vX, vY = generate_batches(data, 5000, context_size, neg_size)
# ### Definining neural model with Keras
# Graph of word2vec neural model used shown below.
#
#
# 
#
# +
word = Input(shape=(1,), name='inputWord')
context = Input(shape=(context_size,), name='inputContext')
negSamples = Input(shape=(neg_size,), name='inputNegatives')
word2vec = Embedding(input_dim=vocabulary_size,output_dim=wordvec_dim, embeddings_initializer='glorot_normal', name='word2vec')
vec_word = word2vec(word)
vec_context = word2vec(context)
vec_negSamples = word2vec(negSamples)
cbow = Lambda(lambda x: K.mean(x, axis=1), name='cbowAverage')(vec_context)
# note: merge(..., mode='dot') is the legacy Keras functional API;
# newer Keras versions use the Dot layer instead
word_context = merge([vec_word, cbow], mode='dot')
negative_context = merge([vec_negSamples, cbow], mode='dot', concat_axis=-1)
model = Model(input=[word,context,negSamples], output=[word_context,negative_context])
# -
model.compile(optimizer='rmsprop', loss='mse', metrics=['accuracy'])
model.summary()
#from keras.utils import plot_model
#plot_model(model, to_file='./images/word2vecmodel.png', show_shapes=True)
# ### Create logs for saving parameters and run training
# +
tensorboard = TensorBoard(log_dir='./logs/wordvec',
batch_size=500, histogram_freq=1, write_images=True, write_grads=False, write_graph=True, embeddings_freq=1)
model_checkpoint = ModelCheckpoint('./logs/wordvec_model.h5')
model.fit(X,Y,epochs=50,batch_size=500,callbacks=[model_checkpoint,tensorboard], validation_data=(vX,vY))
# -
| workbooks/word2vec.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:hvplot] *
# language: python
# name: conda-env-hvplot-py
# ---
import pandas as pd
import requests
import json
pd.set_option('display.max_rows', 1500)
url = 'https://nvdbapi-eksport-v2.atlas.vegvesen.no/vegobjekter/60'
params = { 'egenskap' : '(2175!=null)',
'inkluder' : 'egenskaper,lokasjon,metadata'}
r = requests.get( url, params=params)
with open( 'brudump.csv', 'w', encoding='latin') as f:
f.write( r.text)
bru = pd.read_csv( 'brudump.csv', sep=';', encoding='latin1')
# +
bru.dropna( subset=['Opprinnelig F-Nr'], inplace=True )
# With location included we get one row per road segment in the CSV dump, i.e. duplicate IDs.
bru.drop_duplicates( subset='vegobjektid', inplace=True )
duplikat = bru[ bru.duplicated( ['Opprinnelig F-Nr','Nummer'], keep=False ) ][[ 'vegobjektid', 'Navn', 'Opprinnelig F-Nr', 'Nummer', 'vegreferanse', 'startdato', 'sistmodifisert' ]].sort_values(['Opprinnelig F-Nr', 'Nummer' ])
duplikat
# -
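# On a toy frame, duplicated(..., keep=False) marks every member of a duplicated group, not just the repeats, which is why the listing above shows both copies of each pair:

```python
import pandas as pd

df = pd.DataFrame({'fnr': [1, 1, 2],
                   'nr': ['a', 'a', 'b'],
                   'vegobjektid': [10, 11, 12]})
dups = df[df.duplicated(['fnr', 'nr'], keep=False)]
print(dups['vegobjektid'].tolist())  # [10, 11]
```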
duplikat.to_excel( 'brutus_duplikater_NVDB.xlsx')
| brutus/sjekkBruId.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Handling hover and click map events
#
# [![open_in_colab][colab_badge]][colab_notebook_link]
# [![open_in_binder][binder_badge]][binder_notebook_link]
#
# [colab_badge]: https://colab.research.google.com/assets/colab-badge.svg
# [colab_notebook_link]: https://colab.research.google.com/github/UnfoldedInc/examples/blob/master/notebooks/08%20-%20Eventhandling.ipynb
# [binder_badge]: https://mybinder.org/badge_logo.svg
# [binder_notebook_link]: https://mybinder.org/v2/gh/UnfoldedInc/examples/master?urlpath=lab/tree/notebooks/08%20-%20Eventhandling.ipynb
#
# Using the [set_map_event_handlers](https://docs.unfolded.ai/map-sdk/api/set-map-event-handlers) function it is possible to define event callbacks for the `on_hover` and `on_click` events. These events can return the data from the layer points or polygons if the user clicks or hovers on them.
# ## Dependencies
#
# This notebook requires the following Python dependencies:
#
# - `unfolded.map-sdk`: The Unfolded Map SDK
#
# If running this notebook in Binder, these dependencies should already be installed. If running in Colab, the next cell will install these dependencies.
# If in Colab, install this notebook's required dependencies
import sys
if "google.colab" in sys.modules:
# !pip install 'unfolded.map_sdk>=0.6.3'
# ## Imports
from unfolded.map_sdk import UnfoldedMap
import ipywidgets as widgets
# ## Handling event callbacks
unfolded_map = UnfoldedMap(mapUUID='fb6aad80-eb4c-4f33-86eb-668772cc5fc4')
unfolded_map
# We define the `on_hover` callback function:
output = widgets.Output()
@output.capture(clear_output=True)
def on_hover_output(info):
print('Hover event')
print(info)
output
# We define the `on_click` callback function:
output = widgets.Output()
@output.capture(clear_output=True)
def on_click_output(info):
print('Click event')
print(info)
output
# Here we register the defined callback functions. These functions will be called once you hover or click on the points or on the empty part of the map for the corresponding function.
unfolded_map.set_map_event_handlers({
'on_hover': on_hover_output,
'on_click': on_click_output
})
| notebooks/08 - Eventhandling.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Now You Code 1: Number
#
# In this now you code we will learn to re-factor a program into a function. This is the most common way to write a function when you are a beginner. *Re-factoring* is the act of re-writing code without changing its functionality. We commonly do re-factoring to improve performance or readability of our code.
#
# The way you do this is rather simple. First you write a program to solve the problem, then you re-write that program as a function and finally test the function to make sure it works as expected.
#
# This helps train you to think abstractly about problems, but leverages what you understand currently about programming.
#
# ## Introducing the Write - Refactor - Test - Rewrite approach
#
# The best way to get good at writing functions, a skill you will need to master to become a respectable programmer, is to use the **Write - Refactor - Test - Rewrite** approach. Let's follow it.
#
# ### Step 1: we write the program
#
# Write a program that takes an input string and converts it to a float. If the string cannot be converted to a float, it prints the string "NaN", which means "Not a Number". We did this first part for you.
#
# #### Problem Analysis (This has been done for you)
#
# Inputs: Any value
#
# Outputs: whether that value is a number
#
# Algorithm:
#
# 1. input a value
# 2. try to convert the value to a number
# 3. if you can convert it, print the number
# 4. if you cannot, print 'NaN' for Not a Number.
## STEP 1 : Write the program
text = input("Enter a number: ")
try:
    number = float(text)
    print(number)
except ValueError:
    print('NaN')
# ### Step 2: we refactor it into a function
#
# Complete the `ToNumber` function. It should be similar to the program above, but it should not contain any `input()` or `print()` calls, as those are reserved for the main program. Instead, the function takes its input as an argument and returns a value as output. In this case the function takes `text` as input and returns `number` as output.
#
# +
# Step 2: refactor the program into a function
## Function: ToNumber
## Argument (input): text value
## Returns (output): float of text value or "NaN"
def ToNumber(text):
    # TODO Write code here
    try:
        number = float(text)
    except ValueError:
        number = "NaN"
    return number
# -
# ### Step 3: we test our function
#
# With the function complete, we need to test our function. The simplest way to do that is call the function with inputs we expect and verify the output. For example:
#
# ```
# WHEN text='10.5' We EXPECT ToNumber(text) to return 10.5 ACTUAL: 10.5
# WHEN text='threeve' We EXPECT ToNumber(text) to return 'NaN' ACTUAL: NaN
# ```
#
# We can do this with simple `print()` statements, where we simply say what we are testing, then call the function with the value.
#
# How many do we need? Enough to cover all the possibilities in output. We only need two tests here, one for when the number can be converted and one for when the number cannot.
print("WHEN text='10.5' We EXPECT ToNumber(text) to return 10.5 ACTUAL:", ToNumber('10.5'))
print("WHEN text='threeve' We EXPECT ToNumber(text) to return 'NaN' ACTUAL:", ToNumber('threeve'))
# ### Step 4: rewrite the program to use the function
#
# Finally re-write the original program to use the new `ToNumber` function. The program now works the same as STEP1 but it now calls our function!
# +
## Step 4: write the program from step 2 again, but this time use the function you defined two code cells up from here.
# -
# ## Step 5: Questions
#
# 1. Can you define a function with the same name more than once?
#
# Answer:
#
# 2. Can you call a function more than once?
#
# Answer:
#
# 3. What is the input to the `ToNumber` function? What is the output?
#
# Answer:
# ## Step 6: Reflection
#
# Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
#
# To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
#
# Keep your response to between 100 and 250 words.
#
# `--== Write Your Reflection Below Here ==--`
#
#
| content/lessons/05/Now-You-Code/NYC1-Number.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # freud.diffraction.DiffractionPattern
#
# The `freud.diffraction.DiffractionPattern` class computes a diffraction pattern, which is a 2D image of the [static structure factor](https://en.wikipedia.org/wiki/Structure_factor) $S(\vec{k})$ of a set of points.
import freud
import matplotlib.pyplot as plt
import numpy as np
import rowan
# First, we generate a sample system, a face-centered cubic crystal with some noise.
box, points = freud.data.UnitCell.fcc().generate_system(
num_replicas=10, sigma_noise=0.02
)
# Now we create a `DiffractionPattern` compute object.
dp = freud.diffraction.DiffractionPattern(grid_size=512, output_size=512)
# Next, we use the `compute` method and plot the result. We use a view orientation with the identity quaternion `[1, 0, 0, 0]` so the view is aligned down the z-axis.
fig, ax = plt.subplots(figsize=(4, 4), dpi=150)
dp.compute((box, points), view_orientation=[1, 0, 0, 0])
dp.plot(ax)
plt.show()
# We can also use a random quaternion for the view orientation to see what the diffraction looks like from another axis.
fig, ax = plt.subplots(figsize=(4, 4), dpi=150)
np.random.seed(0)
view_orientation = rowan.random.rand()
dp.compute((box, points), view_orientation=view_orientation)
print("Looking down the axis:", rowan.rotate(view_orientation, [0, 0, 1]))
dp.plot(ax)
plt.show()
# The `DiffractionPattern` object also provides $\vec{k}$ vectors in the original 3D space and the magnitudes of $k_x$ and $k_y$ in the 2D projection along the view axis.
print("Magnitudes of k_x and k_y along the plot axes:")
print(dp.k_values[:5], "...", dp.k_values[-5:])
print("3D k-vectors corresponding to each pixel of the diffraction image:")
print("Array shape:", dp.k_vectors.shape)
print("Center value: k =", dp.k_vectors[dp.output_size // 2, dp.output_size // 2, :])
print("Top-left value: k =", dp.k_vectors[0, 0, :])
# We can also measure the diffraction of a random system (note: this is an ideal gas, not a liquid-like system, because the particles have no volume exclusion or repulsion). Note that the peak at $\vec{k} = 0$ persists. The diffraction pattern returned by this class is normalized by dividing by $N^2$, so $S(\vec{k}=0) = 1$ after normalization.
box, points = freud.data.make_random_system(box_size=10, num_points=10000)
fig, ax = plt.subplots(figsize=(4, 4), dpi=150)
dp.compute((box, points))
dp.plot(ax)
plt.show()
| module_intros/diffraction.DiffractionPattern.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Suicide Overview
# Using Suicide Rate information we are trying to understand the factors which contribute to higher suicide rates
#
# ### Hypothesis
# Suicide rates are influenced by socio-economic factors
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
# %matplotlib inline
# Reading dataset
file = r'C:\Users\User\Desktop\master.csv'
df = pd.read_csv(file)  # the suicide-rates master.csv is comma-separated
df.head(5)
df.describe()
# Due to the high volume of null and duplicate values, the 'HDI for year' and 'country-year' columns are dropped
df = df.drop(['HDI for year', 'country-year'], axis = 1)
df.head(5)
# Plotting all factors to understand their correlation with suicides
sns.pairplot(df)
# Encoding the categorical variables
from sklearn.preprocessing import LabelEncoder
cat_cols = ['country', 'sex', 'age', 'generation']
df[cat_cols] = df[cat_cols].apply(LabelEncoder().fit_transform)
df.head(5)
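# As a reminder of what `LabelEncoder` does here: each distinct category is mapped to an integer code, assigned in sorted order of the labels. A minimal sketch on made-up values:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
codes = le.fit_transform(['male', 'female', 'male', 'female'])
print(list(le.classes_))  # ['female', 'male']
print(list(codes))        # [1, 0, 1, 0]
```

# Note that the integer codes carry no real ordering, which is one reason a tree-based model is used below rather than a linear one.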
# Tracking feature importance
X = df[['country', 'year', 'sex', 'age', ' gdp_for_year ($) ', 'gdp_per_capita ($)', 'generation']]
y = df['suicides_no']
# +
from sklearn.ensemble import RandomForestRegressor
from yellowbrick.features.importances import FeatureImportances
fig = plt.figure(figsize=(20,20))
ax = fig.add_subplot()
viz = FeatureImportances(RandomForestRegressor(), ax=ax)
viz.fit(X, y)
viz.poof()
# -
sns.relplot(x= 'gdp_per_capita ($)', y= 'suicides_no', data = df)
# As per the analysis results, GDP per capita has a strong correlation with suicide rates: as GDP per capita increases, the suicide rate tends to go down.
| ADS-Spring2019/Suicide.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Univariate Multistep Vector Output Stacked LSTM example
# https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/
# +
from numpy import array
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
# -
# Split a univariate sequence into input/output samples
def split_sequence(sequences, n_steps_in, n_steps_out):
X, y = list(), list()
for i in range(len(sequences)):
# Find the end of the pattern
end_ix = i + n_steps_in
out_end_ix = end_ix + n_steps_out
        # Check if we are beyond the sequence
        if out_end_ix > len(sequences):
break
# Gather input and output parts of the pattern
seq_x, seq_y = sequences[i:end_ix], sequences[end_ix:out_end_ix]
X.append(seq_x)
y.append(seq_y)
return array(X), array(y)
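# A quick sanity check of the splitting logic (a standalone copy of the function, so this cell runs on its own):

```python
from numpy import array

def split_sequence(sequence, n_steps_in, n_steps_out):
    X, y = list(), list()
    for i in range(len(sequence)):
        end_ix = i + n_steps_in
        out_end_ix = end_ix + n_steps_out
        # Stop once an output window would run past the end of the sequence
        if out_end_ix > len(sequence):
            break
        X.append(sequence[i:end_ix])
        y.append(sequence[end_ix:out_end_ix])
    return array(X), array(y)

X_demo, y_demo = split_sequence([10, 20, 30, 40, 50, 60, 70, 80, 90], 3, 2)
print(X_demo.shape, y_demo.shape)  # (5, 3) (5, 2)
print(X_demo[0], y_demo[0])        # [10 20 30] [40 50]
```
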
# Define input sequence
raw_seq = [10, 20, 30, 40, 50, 60, 70, 80, 90]
# Choose a number of time steps
n_steps_in = 3
n_steps_out = 2
# Convert into input/output
X, y = split_sequence(raw_seq, n_steps_in, n_steps_out)
# Reshape from [samples, timesteps] into [samples, timesteps, features]
n_features = 1
X = X.reshape((X.shape[0], X.shape[1], n_features))
# Define model
model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True, input_shape=(n_steps_in, n_features)))
model.add(LSTM(100, activation='relu'))
model.add(Dense(n_steps_out))
model.compile(optimizer='adam', loss='mse')
# Fit model
model.fit(X, y, epochs=200)
# Demonstrate prediction
x_input = array([70, 80, 90])
x_input = x_input.reshape((1, n_steps_in, n_features))
yhat = model.predict(x_input)
yhat
| TimeSeries/UnivariateMultistepStackedLSTMexample.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.013463, "end_time": "2022-03-06T13:47:53.451723", "exception": false, "start_time": "2022-03-06T13:47:53.438260", "status": "completed"} tags=[]
# This notebook is based on <NAME>'s notebook (https://www.kaggle.com/slawekbiel/positive-score-with-detectron-3-3-inference)
# + [markdown] papermill={"duration": 0.010674, "end_time": "2022-03-06T13:47:53.473720", "exception": false, "start_time": "2022-03-06T13:47:53.463046", "status": "completed"} tags=[]
# #### Version history
# * V1 - test the model_best_5.pth
# * V2 - test the model_best_4.pth
# * V3 - test the model_final_6.pth
# * V4 - test the model_final_5.pth
# + [markdown] papermill={"duration": 0.010663, "end_time": "2022-03-06T13:47:53.495210", "exception": false, "start_time": "2022-03-06T13:47:53.484547", "status": "completed"} tags=[]
# ## Inference and Submission
# After training the model, we can run inference with our trained models.
# + [markdown] papermill={"duration": 0.010759, "end_time": "2022-03-06T13:47:53.516789", "exception": false, "start_time": "2022-03-06T13:47:53.506030", "status": "completed"} tags=[]
# #### Install detectron 2
# + _kg_hide-input=true _kg_hide-output=true papermill={"duration": 200.652136, "end_time": "2022-03-06T13:51:14.179737", "exception": false, "start_time": "2022-03-06T13:47:53.527601", "status": "completed"} tags=[]
# !pip install ../input/detectron-05/whls/pycocotools-2.0.2/dist/pycocotools-2.0.2.tar --no-index --find-links ../input/detectron-05/whls
# !pip install ../input/detectron-05/whls/fvcore-0.1.5.post20211019/fvcore-0.1.5.post20211019 --no-index --find-links ../input/detectron-05/whls
# !pip install ../input/detectron-05/whls/antlr4-python3-runtime-4.8/antlr4-python3-runtime-4.8 --no-index --find-links ../input/detectron-05/whls
# !pip install ../input/detectron-05/whls/detectron2-0.5/detectron2 --no-index --find-links ../input/detectron-05/whls
# + papermill={"duration": 1.313287, "end_time": "2022-03-06T13:51:15.521242", "exception": false, "start_time": "2022-03-06T13:51:14.207955", "status": "completed"} tags=[]
import detectron2
import torch
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from PIL import Image
import cv2
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from fastcore.all import *
import os  # used below for os.path.join
# + [markdown] papermill={"duration": 0.027121, "end_time": "2022-03-06T13:51:15.575919", "exception": false, "start_time": "2022-03-06T13:51:15.548798", "status": "completed"} tags=[]
# #### Input dataset
# + papermill={"duration": 0.033997, "end_time": "2022-03-06T13:51:15.637360", "exception": false, "start_time": "2022-03-06T13:51:15.603363", "status": "completed"} tags=[]
dataDir=Path('../input/sartorius-cell-instance-segmentation')
# + papermill={"duration": 0.040963, "end_time": "2022-03-06T13:51:15.705823", "exception": false, "start_time": "2022-03-06T13:51:15.664860", "status": "completed"} tags=[]
# From https://www.kaggle.com/stainsby/fast-tested-rle
def rle_decode(mask_rle, shape=(520, 704)):
    '''
    mask_rle: run-length as string formatted (start length)
    shape: (height, width) of array to return
    Returns numpy array, 1 - mask, 0 - background
    '''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0]*shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape) # Needed to align to RLE direction
def rle_encode(img):
    '''
    img: numpy array, 1 - mask, 0 - background
    Returns run length as string formatted (start length)
    '''
pixels = img.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
def get_masks(fn, predictor):
im = cv2.imread(str(fn))
pred = predictor(im)
pred_class = torch.mode(pred['instances'].pred_classes)[0]
take = pred['instances'].scores >= THRESHOLDS[pred_class]
pred_masks = pred['instances'].pred_masks[take]
pred_masks = pred_masks.cpu().numpy()
res = []
used = np.zeros(im.shape[:2], dtype=int)
for mask in pred_masks:
mask = mask * (1-used)
if mask.sum() >= MIN_PIXELS[pred_class]: # skip predictions with small area
used += mask
res.append(rle_encode(mask))
return res
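# The encode/decode helpers above can be sanity-checked with a roundtrip on a tiny made-up mask (standalone copies of both, behaviorally identical to the ones defined above):

```python
import numpy as np

def rle_encode(img):
    # Pad with zeros so run boundaries at the edges are detected
    pixels = np.concatenate([[0], img.flatten(), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return ' '.join(str(x) for x in runs)

def rle_decode(mask_rle, shape=(520, 704)):
    s = mask_rle.split()
    starts = np.asarray(s[0::2], dtype=int) - 1
    lengths = np.asarray(s[1::2], dtype=int)
    img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for lo, hi in zip(starts, starts + lengths):
        img[lo:hi] = 1
    return img.reshape(shape)

mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 1:3] = 1  # a single 2-pixel run in row-major order
enc = rle_encode(mask)
print(enc)  # '6 2' -> run starts at (1-indexed) pixel 6, length 2
assert (rle_decode(enc, (4, 4)) == mask).all()
```
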
# + papermill={"duration": 0.039823, "end_time": "2022-03-06T13:51:15.773059", "exception": false, "start_time": "2022-03-06T13:51:15.733236", "status": "completed"} tags=[]
ids, masks=[],[]
test_names = (dataDir/'test').ls()
# + [markdown] papermill={"duration": 0.027075, "end_time": "2022-03-06T13:51:15.827469", "exception": false, "start_time": "2022-03-06T13:51:15.800394", "status": "completed"} tags=[]
# #### Initiate a Predictor from the trained models
# + papermill={"duration": 6.534313, "end_time": "2022-03-06T13:51:22.389224", "exception": false, "start_time": "2022-03-06T13:51:15.854911", "status": "completed"} tags=[]
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.INPUT.MASK_FORMAT='bitmask'
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.MODEL.WEIGHTS = os.path.join('../input/gyy-sartoriusmodels', "model_final_4.pth")
cfg.TEST.DETECTIONS_PER_IMAGE = 1000
predictor = DefaultPredictor(cfg)
THRESHOLDS = [.19, .39, .57] # [.15, .35, .55]
MIN_PIXELS = [75, 154, 757] # [75, 150, 75]
# + [markdown] papermill={"duration": 0.029562, "end_time": "2022-03-06T13:51:22.449306", "exception": false, "start_time": "2022-03-06T13:51:22.419744", "status": "completed"} tags=[]
# #### Look whether the outputs on a sample test file are correct or not
# + papermill={"duration": 5.758764, "end_time": "2022-03-06T13:51:28.238380", "exception": false, "start_time": "2022-03-06T13:51:22.479616", "status": "completed"} tags=[]
i=1
encoded_masks = get_masks(test_names[i], predictor)
# + papermill={"duration": 12.074865, "end_time": "2022-03-06T13:51:40.342819", "exception": false, "start_time": "2022-03-06T13:51:28.267954", "status": "completed"} tags=[]
_, axs = plt.subplots(1,2, figsize=(40,15))
axs[1].imshow(cv2.imread(str(test_names[i])))
for enc in encoded_masks:
dec = rle_decode(enc)
axs[0].imshow(np.ma.masked_where(dec==0, dec))
# + [markdown] papermill={"duration": 0.041957, "end_time": "2022-03-06T13:51:40.426329", "exception": false, "start_time": "2022-03-06T13:51:40.384372", "status": "completed"} tags=[]
# #### Generate masks for all the test files and create a submission
# + papermill={"duration": 1.028213, "end_time": "2022-03-06T13:51:41.496408", "exception": false, "start_time": "2022-03-06T13:51:40.468195", "status": "completed"} tags=[]
for fn in test_names:
encoded_masks = get_masks(fn, predictor)
for enc in encoded_masks:
ids.append(fn.stem)
masks.append(enc)
# + [markdown] papermill={"duration": 0.041793, "end_time": "2022-03-06T13:51:41.580230", "exception": false, "start_time": "2022-03-06T13:51:41.538437", "status": "completed"} tags=[]
# #### Create the submission.csv
# + papermill={"duration": 0.071433, "end_time": "2022-03-06T13:51:41.693479", "exception": false, "start_time": "2022-03-06T13:51:41.622046", "status": "completed"} tags=[]
pd.DataFrame({'id':ids, 'predicted':masks}).to_csv('submission.csv', index=False)
pd.read_csv('submission.csv').head()
| Inference/inference-and-submission-29db6f.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# language: python
# name: python3
# ---
# ### AI project "Fake news detection" with raw data
# Importation
# +
import numpy as np
import pandas as pd
import re
Data = pd.read_excel("C:/Users/user/Documents/Studies/ENSIAS/2A/AI/Projet IA/Data-FakeRealCOVID.xlsx")
# -
# Delete the links
Data["Cleaned_Tweets"] = Data["tweet"].apply(lambda s: ' '.join(re.sub(r"(\w+://\S+)", " ", s).split()))
# Delete Ponctuation
Data['Cleaned_Tweets'] = Data['Cleaned_Tweets'].apply(lambda s: ' '.join(re.sub(r"[.,!?:;\-=@#_]", " ", s).split()))
import nltk
from nltk import corpus as cps
# Delete Emojies :
def deEmojify(inputString):
return inputString.encode('ascii', 'ignore').decode('ascii')
Data["Cleaned_Tweets"] = Data["Cleaned_Tweets"].apply(lambda s: deEmojify(s))
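# Note that the ASCII round-trip drops every non-ASCII character, not just emoji — accented letters disappear too:

```python
def deEmojify(input_string):
    # Encode to ASCII, silently dropping anything outside the ASCII range
    return input_string.encode('ascii', 'ignore').decode('ascii')

print(deEmojify("covid chloroquine 💊 café"))  # 'covid chloroquine  caf'
```
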
Data[['tweet','Cleaned_Tweets']].iloc[1]
nltk.download("stopwords")
sw = set(nltk.corpus.stopwords.words('english'))
print(sw)
def rem_en(input_txt):
words = input_txt.lower().split()
noise_free_words = [word for word in words if word not in sw]
noise_free_text = " ".join(noise_free_words)
return noise_free_text
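# For example, against a tiny illustrative stop-word set (the real `sw` above comes from NLTK), the function keeps only the informative words:

```python
sw_demo = {'the', 'is', 'a', 'of', 'and'}

def rem_en_demo(input_txt):
    # Same logic as rem_en above, but using the tiny demo stop-word set
    words = input_txt.lower().split()
    return " ".join(word for word in words if word not in sw_demo)

print(rem_en_demo("The vaccine is a hoax"))  # 'vaccine hoax'
```
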
Data["Cleaned_Tweets"] = Data["Cleaned_Tweets"].apply(lambda s: rem_en(s))
# Tokenization
from nltk.tokenize import RegexpTokenizer
tokeniser = nltk.tokenize.RegexpTokenizer(r'\w+')
Data["Cleaned_Tweets"] = Data["Cleaned_Tweets"].apply(lambda x: tokeniser.tokenize(x))
Data[["tweet","Cleaned_Tweets"]]
| Fake_news_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# =================================================
# SVM-Anova: SVM with univariate feature selection
# =================================================
#
# This example shows how to perform univariate feature selection before running a
# SVC (support vector classifier) to improve the classification scores.
#
#
# +
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets, feature_selection
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
# #############################################################################
# Import some data to play with
digits = datasets.load_digits()
y = digits.target
# Throw away data, to be in the curse of dimension settings
y = y[:200]
X = digits.data[:200]
n_samples = len(y)
X = X.reshape((n_samples, -1))
# add 200 non-informative features
X = np.hstack((X, 2 * np.random.random((n_samples, 200))))
# #############################################################################
# Create a feature-selection transform and an instance of SVM that we
# combine together to have a full-blown estimator
transform = feature_selection.SelectPercentile(feature_selection.f_classif)
clf = Pipeline([('anova', transform), ('svc', svm.SVC(C=1.0))])
# #############################################################################
# Plot the cross-validation score as a function of percentile of features
score_means = list()
score_stds = list()
percentiles = (1, 3, 6, 10, 15, 20, 30, 40, 60, 80, 100)
for percentile in percentiles:
clf.set_params(anova__percentile=percentile)
# Compute cross-validation score using 1 CPU
this_scores = cross_val_score(clf, X, y, n_jobs=1)
score_means.append(this_scores.mean())
score_stds.append(this_scores.std())
plt.errorbar(percentiles, score_means, np.array(score_stds))
plt.title(
'Performance of the SVM-Anova varying the percentile of features selected')
plt.xlabel('Percentile')
plt.ylabel('Prediction rate')
plt.axis('tight')
plt.show()
| lab03/svm/plot_svm_anova.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import signal
from contextlib import contextmanager
@contextmanager
def timeout(time):
# Register a function to raise a TimeoutError on the signal.
signal.signal(signal.SIGALRM, raise_timeout)
# Schedule the signal to be sent after ``time``.
signal.alarm(time)
try:
yield
except TimeoutError:
pass
finally:
# Unregister the signal so it won't be triggered
# if the timeout is not reached.
signal.signal(signal.SIGALRM, signal.SIG_IGN)
def raise_timeout(signum, frame):
raise TimeoutError
def my_func():
# Add a timeout block.
with timeout(1):
print('entering block')
import time
time.sleep(10)
print('This should never get printed because the line before timed out')
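# A quick demonstration that the alarm really interrupts a long sleep (Unix only — SIGALRM does not exist on Windows; this is a self-contained copy of the context manager, with the extra step of cancelling any pending alarm on exit):

```python
import signal
import time
from contextlib import contextmanager

@contextmanager
def timeout(seconds):
    # Same mechanism as above: raise TimeoutError via SIGALRM and swallow it
    def _raise(signum, frame):
        raise TimeoutError
    signal.signal(signal.SIGALRM, _raise)
    signal.alarm(seconds)
    try:
        yield
    except TimeoutError:
        pass
    finally:
        signal.alarm(0)
        signal.signal(signal.SIGALRM, signal.SIG_IGN)

start = time.time()
with timeout(1):
    time.sleep(5)  # gets interrupted after roughly one second
elapsed = time.time() - start
print(f"slept for {elapsed:.1f}s")
```
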
# +
import requests
from datetime import datetime
import google.cloud.storage as storage
import io
from pydub import AudioSegment
#r = requests.get("https://18853.live.streamtheworld.com/BLURADIO_SC", stream=True)
def download_file(url):
local_filename = url.split('/')[-1]
# NOTE the stream=True parameter below
with requests.get(url, stream=True) as r:
r.raise_for_status()
for chunk in r.iter_content(chunk_size=219200):
if chunk: # filter out keep-alive new chunks
c = io.BytesIO(chunk)
dateTimeObj = datetime.now()
timestampStr = dateTimeObj.strftime("Blue_%d-%b-%Y %H:%M:%S.%f")
dateStr = dateTimeObj.strftime("%d-%b-%Y")
                name = dateStr + "/" + "{}.flac".format(timestampStr)
song = AudioSegment.from_mp3(c)
ss = song.export(format="flac", parameters=["-ac", "1"])
upload_blob("radioscrapping", ss, name)
# f.flush()
return local_filename
def upload_blob(bucket_name, my_file, destination_blob_name):
"""Uploads a file to the bucket."""
storage_client = storage.Client()
bucket = storage_client.get_bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
blob.upload_from_file(my_file)
# -
try:
    with timeout(30):
        download_file("https://18853.live.streamtheworld.com/BLURADIO_SC")
except Exception as e:
    # TimeoutError is swallowed by the context manager; anything else (e.g. a
    # network error) lands here
    print("Stream stopped:", e)
{
"url": "https://18853.live.streamtheworld.com/BLURADIO_SC",
"c_size" : 219200,
"timeo":90,
"name_folder" : "blue"
}
| jupyterlab/radio_scrapping/curl_with_python.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Topics
#
# 1. Introduction
# 2. Analyzing an Interview
# 3. Quiz: Analyzing an Interview
# 4. Free Throw Probability
# 5. Query a SQL database
# 6. Maximum Difference in an Integer Array
# 7. Design a Spam Classifier
# 8. Jimmy's Analysis of the Interview
# 9. Next Steps
# ## 1. Introduction
#
# #### Data Analyst Mock Interview
#
# The mock interview will show the kind of questions you would encounter in a Data Analyst interview.
#
# In addition to watching the mock interview, you'll also do the following:
# - Critique each response that the candidate gives
# - Hear feedback from the interviewer
#
# This exercise will help you prepare well for your own interviews.
#
# ## 2. Analyzing Technical Answers
# It's time to play the role of the interviewer. You will observe a mixture of behavioral and technical questions. Try to "think like the employer." Make sure to compare your analysis with our analysis at the end.
# ## 3. Quiz:
#
# https://www.youtube.com/watch?v=cXluuqCVg18
#
# https://www.youtube.com/watch?v=wkWDrSBBtz0
# ## 4. Free Throw Probability
#
# https://www.youtube.com/watch?v=Zyq0FQ0XO3o
#
# https://www.youtube.com/watch?v=pDELcPTP2BI
# ## 5. Query a SQL Database
#
# https://www.youtube.com/watch?v=UVSFLWdAKl4
# ## 6. Maximum Difference in an Integer Array
#
# https://www.youtube.com/watch?v=i3RTW83wI1Q
#
# https://www.youtube.com/watch?v=R9AtVBq2Z5E
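# A one-pass sketch of this question, assuming it is the classic "maximum of a[j] - a[i] with j > i" (the function name and test values here are illustrative, not from the video):

```python
def max_difference(nums):
    # Track the smallest value seen so far; the best answer at each step
    # is the current value minus that running minimum (one pass, O(n)).
    best = float('-inf')
    lowest = nums[0]
    for x in nums[1:]:
        best = max(best, x - lowest)
        lowest = min(lowest, x)
    return best

print(max_difference([7, 1, 5, 3, 6, 4]))  # 5  (6 - 1)
```
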
# ## 7. Design a Spam Classifier
#
# - Bias for companies is a good point of discussion
# - Naive Bayes (a good one, but the problem is that the events in this problem might not be independent)
# - Logistic Regression
# - Support Vector Machines
# - slower to train on larger dataset
#
# https://www.youtube.com/watch?v=qRv1wrtgsmM
#
# ##### Feedback
# - How to convert a problem so that the machine can learn from it.
# - It is something that needs practice. You need to have that wit to come up with the right features.
#
#
# https://www.youtube.com/watch?v=Uy89Ff49pRc
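# As a minimal illustration of the Naive Bayes route mentioned above — bag-of-words counts feeding `MultinomialNB`. The tiny dataset is made up purely for demonstration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy data: 1 = spam, 0 = not spam
texts = ["win a free prize now", "meeting at noon tomorrow",
         "free money click here", "lunch with the team"]
labels = [1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)
preds = clf.predict(["free prize money", "team meeting at noon"])
print(preds)  # [1 0]
```

# In a real interview, the feature-engineering discussion (what signals to extract from an email) matters at least as much as the model choice.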
# ## 8. Jimmy's Analysis of the Interview
#
# - ask clarifying questions
# - stay connected with the interviewer during the process of getting to the answer
# - Once you have agreed on the solution with the interviewer, you can start working on it. <br>
# (This way you are also explaining what you are doing to the interviewer, which shows that you can explain your work to others)
# - Be familiar with the machine learning algorithms that you mention in an interview.
#
#
# https://www.youtube.com/watch?v=wg535YU4jFw
# ## Next Steps
#
# In order to achieve mastery in interviewing, do the following:
#
# - Visit the Career Resource Center for additional resources and practice questions. <br>
# https://career-resource-center.udacity.com/interviews
# - Practice coding on a whiteboard or online text editor instead of using an IDE and compiler.
# - Practice interview questions with another person, using our interview checklist.<br>
# https://docs.google.com/document/d/<KEY>
#
# By adopting the interviewing practices from this lesson and practicing as much as possible, you will make the best impression in an actual interview.
#
| D-Data-Science-Interview-Preparation/2-Data-Analysis-Interview-Practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="6wu6Icac_WQy" colab_type="code" colab={}
# # !pip install --upgrade tables
# # !pip install eli5
# # !pip install xgboost
# # !pip install hyperopt
# + id="QO0-jJgs_oUL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="62fd4c09-ca5d-4eaf-cd31-b571a264e623" executionInfo={"status": "ok", "timestamp": 1583483965145, "user_tz": -60, "elapsed": 825, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08410695017926699522"}}
# cd /content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car
# + id="Vzm8Xy2O_8sp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} outputId="2e7ff783-bedc-4981-bbf6-1becd0ea15e7" executionInfo={"status": "ok", "timestamp": 1583483969262, "user_tz": -60, "elapsed": 2323, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08410695017926699522"}}
# ls
# + [markdown] id="axmhfb3jABsQ" colab_type="text"
# Imports
# + id="i__KRuK4_9Th" colab_type="code" colab={}
import pandas as pd
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
import xgboost as xgb
from sklearn.metrics import mean_squared_error as mea
from sklearn.model_selection import cross_val_score
from hyperopt import hp, fmin, tpe, STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
# + [markdown] id="zwUtY-98ArsG" colab_type="text"
# Load the data
# + id="yTSp3RdCAbEt" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="0036a02e-32d3-4f6a-c733-7c3f12193f9a" executionInfo={"status": "ok", "timestamp": 1583484192462, "user_tz": -60, "elapsed": 5646, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08410695017926699522"}}
df = pd.read_hdf("data/car.h5")
df.shape
# + [markdown] id="kRxr0w9bA2D9" colab_type="text"
# # Feature engineering
# + id="MnXRhtSPAxrp" colab_type="code" colab={}
SUFFIX_CAT = "__cat"
# + id="P_ycKezHA9mp" colab_type="code" colab={}
for feat in df.columns:
if isinstance( df[feat][0], list ): continue
factorized_values = df[feat].factorize()[0]
if SUFFIX_CAT in feat:
df[feat] = factorized_values
else:
df[feat + SUFFIX_CAT] = factorized_values
# + id="XulRkTcXBCpZ" colab_type="code" colab={}
def run_model(model, feats):
X = df[feats].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
# + id="kArvKldQBJJ6" colab_type="code" colab={}
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else x)
df["param_moc"] = df["param_moc"].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]))
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(str(x).split('cm')[0].replace(' ','')))
# + id="LBz_6g1HDl28" colab_type="code" colab={}
xgb_params = {
"max_depth": 5,
"n_estimators": 50,
"learning_rate": 0.1,
"seed":0
}
# + id="yjy4OvzxBRWB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 84} outputId="0fb88555-8e5d-4f74-cf50-a4ce53457870" executionInfo={"status": "ok", "timestamp": 1583484959242, "user_tz": -60, "elapsed": 14564, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08410695017926699522"}}
feats = [
    "param_napęd__cat", "param_rok-produkcji", "param_stan__cat", "param_skrzynia-biegów__cat",
    "param_faktura-vat__cat", "param_moc", "param_marka-pojazdu__cat", "feature_kamera-cofania__cat",
    "param_typ__cat", "param_pojemność-skokowa", "seller_name__cat", "feature_wspomaganie-kierownicy__cat",
    "param_model-pojazdu__cat", "param_wersja__cat", "param_kod-silnika__cat", "feature_system-start-stop__cat",
    "feature_asystent-pasa-ruchu__cat", "feature_czujniki-parkowania-przednie__cat",
    "feature_łopatki-zmiany-biegów__cat", "feature_regulowane-zawieszenie__cat",
]
run_model(xgb.XGBRegressor(**xgb_params), feats)
# + [markdown] id="-EejGaoGD5EX" colab_type="text"
# # Hyperopt
# + id="P_VsmaXZDsDD" colab_type="code" colab={}
def obj_func(params):
print("Training with params: ")
print(params)
    mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
    return {"loss": np.abs(mean_mae), "status": STATUS_OK}
# space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
# + id="p6t8JfFXI5j-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 910} outputId="e3ce68bf-7830-4a52-8acb-a90b6448f59f" executionInfo={"status": "ok", "timestamp": 1583488451761, "user_tz": -60, "elapsed": 851190, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "08410695017926699522"}}
#@title Run Hyperopt
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
# + id="Yd5yzzF4JMsj" colab_type="code" colab={}
| day5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.5 64-bit
# name: python3
# ---
# <h1>
# Face Detection - Face Recognition:
# </h1>
# Note:
# <span style="color:Yellow">
# Since Windows no longer supports locating DLLs under the environment-variable paths, we need to point to the CUDA dlib directory manually:
# </span>
#
# <span style="color:green">
# os.add_dll_directory(os.path.join(os.environ['CUDA_PATH'], 'bin'))
# </span>
# <br/>
# <h4>
# EXTRACT FACE ENCODINGS FROM THE DATABASE AND SAVE THEM
# </h4>
# <hr>
import os
# Register the CUDA DLL directory before importing dlib / face_recognition
os.add_dll_directory(os.path.join(os.environ['CUDA_PATH'], 'bin'))
from imutils import paths
import face_recognition
import pickle
import cv2
import time
import dlib
import numpy as np
import multiprocessing as mp
# +
imagePaths = list(paths.list_images('../../helper/trainImages'));
knownEncodings = [];
knownNames = [];
for (i, imagePath) in enumerate(imagePaths):
    # Extract the person's name from the image path (second-to-last path component)
name = imagePath.split(os.path.sep)[-2];
    # Load the image file
image = cv2.imread(imagePath);
    # Convert color channels:
    # dlib expects RGB,
    # OpenCV loads BGR
image = np.array(image)
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB);
    # Detect faces; the HOG model (commented out) is the faster alternative to CNN
#boxes = face_recognition.face_locations(rgb, model="hog");
boxes = face_recognition.face_locations(rgb, model="cnn");
    # Compute the face encodings
encodings = face_recognition.face_encodings(face_image=rgb, known_face_locations=boxes, model="large", num_jitters=1);
    # Store each encoding together with its name
for encoding in encodings:
knownEncodings.append(encoding);
knownNames.append(name);
#end
#end
# Finally, save
data = {"encodings": knownEncodings, "names": knownNames};
# Persist the data with pickle
f = open("face_enc", "wb");
f.write(pickle.dumps(data));
f.close();
# -
# <br/>
# <h4>
# LOAD THE SAVED FACE ENCODINGS AND RECOGNIZE FACES FROM THE CAMERA FEED
# </h4>
# <hr>
# + tags=[]
def landMarks(landmark, fr):
print(landmark)
for j in landmark:
for i in landmark[j]:
x,y=i
x*=2;
y*=2;
print(x,y)
fr = cv2.circle(fr, (x,y), 5, (255, 0, 0), 1)
return fr;
def fancyDraw(img, x, y, w, h, l=30, t=5, rt= 1):
x1, y1 = x + w, y + h
# Top Left x,y
cv2.line(img, (x, y), (x + l, y), (255, 0, 255), t)
cv2.line(img, (x, y), (x, y+l), (255, 0, 255), t)
# Top Right x1,y
cv2.line(img, (x1, y), (x1 - l, y), (255, 0, 255), t)
cv2.line(img, (x1, y), (x1, y+l), (255, 0, 255), t)
# Bottom Left x,y1
cv2.line(img, (x, y1), (x + l, y1), (255, 0, 255), t)
cv2.line(img, (x, y1), (x, y1 - l), (255, 0, 255), t)
# Bottom Right x1,y1
cv2.line(img, (x1, y1), (x1 - l, y1), (255, 0, 255), t)
cv2.line(img, (x1, y1), (x1, y1 - l), (255, 0, 255), t)
return img
data = pickle.loads(open('face_enc', "rb").read());
face_locations = [];
face_encodings = [];
face_names = [];
islemYap = True;
pool = mp.Pool(processes=4)
vc = cv2.VideoCapture(0);
#vc = cv2.VideoCapture("https://abc:123@192.168.1.103:8080/video");
#vc = cv2.VideoCapture("https://abc:123@192.168.43.1:8080/video");
vc.set(cv2.CAP_PROP_FRAME_WIDTH, 1280);
vc.set(cv2.CAP_PROP_FRAME_HEIGHT, 720);
#vc.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'));
#vc.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('A', 'V', 'C', '1'));
vc.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('H', '2', '6', '5')); # H.265 provides better compression
vc.set(cv2.CAP_PROP_FPS, 60);
while True:
(ret, frame) = vc.read();
fView = frame.view()
    # OpenCV captures BGR; face_recognition expects RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB);
    small_frame = cv2.resize(rgb, (0, 0), fx=0.5, fy=0.5);
    # Face location detection
    # Models used by "face_locations":
# 1-) dlib_face_recognition_resnet_model_v1.dat,
# 2-) mmod_human_face_detector.dat,
# 3-) shape_predictor_5_face_landmarks.dat,
# 4-) shape_predictor_68_face_landmarks.dat
face_locations = face_recognition.face_locations(small_frame, number_of_times_to_upsample=1, model="cnn"); #tuple-> top,right,bottom,left
face_landmarks = face_recognition.api.face_landmarks(small_frame, face_locations=face_locations, model='large');
tic = time.time();
face_encodings = face_recognition.face_encodings(face_image=small_frame, known_face_locations=face_locations, model="large", num_jitters=2);
toc = time.time();
print(toc-tic)
face_names = [];
for face_encoding in face_encodings:
matches = face_recognition.compare_faces(data["encodings"], face_encoding)
        name = "Unknown"
if True in matches:
first_match_index = matches.index(True)
name = data["names"][first_match_index]
#end
#end
face_names.append(name)
#end
parallel = False
    # If enabled, process the landmarks of the same frame in parallel
if parallel == True:
arg = []
if face_landmarks!=[]:
for k in range(len(face_landmarks)):
arg.append((np.array(face_landmarks[k]), fView));
            try:
                frames = pool.starmap(landMarks, arg)
                #frame=frames[0]
            except Exception as exp:
                print(exp)
else:
if face_landmarks!=[]:
for k in range(len(face_landmarks)):
                # TODO: this face_landmarks loop should also be parallelized
for j in face_landmarks[k]:
for i in face_landmarks[k][j]:
x,y=i
x*=2;
y*=2;
frame = cv2.circle(frame, (x,y), 5, (255, 255, 0), 1)
    # Display the results
for (top, right, bottom, left), name in zip(face_locations, face_names):
top *= 2
right *= 2
bottom *= 2
left *= 2
        # Draw a box around the face
cv2.rectangle(frame, (left, top), (right, bottom), (255, 0, 255), 2)
        # Create the name label
#frame = fancyDraw(frame, x, y, w, h, l=30, t=5, rt= 1
cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (255, 255, 255), cv2.FILLED)
font = cv2.FONT_HERSHEY_DUPLEX
cv2.putText(frame, name, (left + 5, bottom - 5), font, 0.5, (0, 0, 0), 1)
#end
    # Show the resulting frame on screen
cv2.imshow('Video', frame)
    # Press 'q' to quit
if cv2.waitKey(1) & 0xFF == ord('q'):
break
#end
#end
#Release the camera
vc.release();
cv2.destroyAllWindows();
# -
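The `compare_faces` call in the loop above thresholds the Euclidean distance between 128-dimensional encodings (the library's default tolerance is 0.6). A dependency-free sketch of that matching logic, using made-up encodings:

```python
import numpy as np

def compare_faces(known, candidate, tolerance=0.6):
    # face_recognition.compare_faces: True where the Euclidean
    # distance to a known encoding is within the tolerance
    return [float(np.linalg.norm(k - candidate)) <= tolerance for k in known]

known = [np.zeros(128), np.ones(128)]
candidate = np.zeros(128) + 0.01
print(compare_faces(known, candidate))  # [True, False]
```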
cv2.destroyAllWindows()
vc.release();
| Face/face_recognition/Recognition/face_recognition_CNN(MMOD).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Setup
# +
# Python 3 compatability
from __future__ import division, print_function
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
import math
from numpy import linalg
import scipy
from scipy import stats
# plotting
import matplotlib
from matplotlib import pyplot as plt
# fits data
from astropy.io import fits
# inline plotting
# %matplotlib inline
# -
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'axes.titlepad': '15.0'})
rcParams.update({'axes.labelpad': '15.0'})
rcParams.update({'font.size': 30})
# # Star with Position Fixed and Free
# Load and process data.
# extract data
hdul = fits.open('data/noise_exp.fits')
header = hdul[0].header
f = header['TRUEFLUX']
ferr = np.sqrt(4 * np.pi * header['PSFWIDTH']**2 * header['NOISE']**2)
xerr = np.sqrt(8 * np.pi * header['PSFWIDTH']**4 *
header['NOISE']**2 / header['TRUEFLUX']**2)
data = hdul[1].data
nruns = len(data)
flux, flux_fixed, err = data['Flux'], data['F_FixPos'], data['FluxErr']
x, y = data['X'], data['Y']
# define relevant quantities
dx = x / xerr # normalized deviation (position)
snr = f / ferr # true SNR
df_var = (flux - f) / ferr # normalized deviation (ML flux)
df_fix = (flux_fixed - f) / ferr # normalized deviation (flux at true position)
# Plot flux offset as a function of position offset.
# construct smoothed (binned) KDE
from scipy.ndimage.filters import gaussian_filter
sig, smooth, grid = 5, 0.6, 5e-3
n, bx, by = np.histogram2d(dx, df_fix, np.arange(-sig, sig + grid, grid))
ns = gaussian_filter(n, smooth / grid)
# +
# normalize to 1 in each row
ns /= np.nanmax(ns, axis=0)
# compute quantiles
quantiles = [0.025, 0.16, 0.5, 0.84, 0.975]
qs = []
for q in quantiles:
qs.append(by[np.argmin((ns.cumsum(axis=0) - q * ns.sum(axis=0))**2, axis=0)])
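The quantile extraction above picks, for each column, the bin where the cumulative density is closest to the target fraction. A one-dimensional sanity check of the same idea on synthetic Gaussian draws:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=200_000)

counts, edges = np.histogram(samples, bins=np.arange(-5, 5.01, 0.01))
cdf = counts.cumsum() / counts.sum()

# bin edge where the cumulative density is closest to the 50% quantile
median_est = edges[np.argmin((cdf - 0.5) ** 2)]
assert abs(median_est) < 0.05  # the true median of N(0, 1) is 0
```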
# +
# plot conditional density
plt.figure(figsize=(24, 16))
plt.imshow(ns, extent=[-sig, sig, -sig, sig], aspect='auto',
origin='lower', cmap='viridis')
# plot quantiles
plt.plot([-sig, sig], [0., 0.], lw=5, color='black', ls='--')
for i, q in enumerate(qs):
q_poly = np.polyfit((bx[1:] + bx[:-1]) / 2, q, deg=10) # polynomial smoothing
q_interp = np.poly1d(q_poly)((bx[1:] + bx[:-1]) / 2) # interpolate onto grid
plt.plot((bx[1:] + bx[:-1]) / 2, q_interp, color='red',
lw=5, alpha=0.7)
# prettify
plt.text(1.5, -2.35, '2.5%', color='red',
horizontalalignment='center', verticalalignment='center')
plt.text(1.5, -1.3, '16%', color='red',
horizontalalignment='center', verticalalignment='center')
plt.text(1.5, 0.2, '50%', color='red',
horizontalalignment='center', verticalalignment='center')
plt.text(1.5, 1.2, '84%', color='red',
horizontalalignment='center', verticalalignment='center')
plt.text(1.5, 2.3, '97.5%', color='red',
horizontalalignment='center', verticalalignment='center')
plt.text(-1.7, -3.4, 'Noise peak far\nfrom true position',
horizontalalignment='center', verticalalignment='center',
color='white', fontsize=36)
plt.arrow(-1.15, -3.85, -1., 0., head_width=0.15, head_length=0.2,
facecolor='white', edgecolor='white', linewidth=5)
plt.text(2.0, -3.4, 'Noise peak close\nto true position',
horizontalalignment='center', verticalalignment='center',
color='white', fontsize=36)
plt.arrow(1.45, -3.85, 1., 0., head_width=0.15, head_length=0.2,
facecolor='white', edgecolor='white', linewidth=5)
plt.text(-1.7, 3.5, 'Flux estimate\nhigher elsewhere',
horizontalalignment='center', verticalalignment='center',
color='yellow', fontsize=36)
plt.arrow(-1.15, 3.05, -1., 0., head_width=0.15, head_length=0.2,
facecolor='yellow', edgecolor='yellow', linewidth=5)
plt.text(2.0, 3.5, 'Flux estimate high\nat true position',
horizontalalignment='center', verticalalignment='center',
color='yellow', fontsize=36)
plt.arrow(1.45, 3.05, 1., 0., head_width=0.15, head_length=0.2,
facecolor='yellow', edgecolor='yellow', linewidth=5)
plt.xlabel(r'Normalized Flux Offset at True Position')
plt.ylabel(r'Normalized Position Offset of ML Flux')
plt.xlim([-(sig-2), (sig-2)])
plt.ylim([-(sig-1), (sig-1)])
plt.colorbar(label='Conditional Density')
plt.tight_layout()
# save figure
plt.savefig('plots/star_varpos.png', bbox_inches='tight')
# -
| plot_star_varpos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.6 64-bit (''pyhmcode'': conda)'
# name: python3
# ---
# # PyHMx
#
# Demo of the alternative interface (non `f90wrap`). This is currently called `pyhmx`.
# +
import numpy as np
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.colorbar
import camb
import pyhmx
# -
def colorbar(colormap, ax, vmin=None, vmax=None):
cmap = plt.get_cmap(colormap)
cb_ax = matplotlib.colorbar.make_axes(ax)
norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax)
cb = matplotlib.colorbar.ColorbarBase(cb_ax[0], cmap=cmap,
norm=norm, **cb_ax[1])
return cb, lambda x, norm=norm: cmap(norm(x))
# ## Compare the HMCode implementations in HMx and in CAMB
#
# Set cosmology and halo model parameters.
# +
hmx = pyhmx.HMx()
h = 0.7
omc = 0.25
omb = 0.048
mnu = 0.12
w = -1.0
wa = 0.0
ns = 0.97
As = 2.1e-9
Theat = 10**7.8
halo_model_mode = pyhmx.constants.HMCode2016
A = 3.13
eta0 = 0.603
fields = np.array([pyhmx.constants.field_dmonly])
# -
# Run CAMB to generate the linear and non-linear matter power spectra.
# +
# Get linear power spectrum
p = camb.CAMBparams(WantTransfer=True,
NonLinearModel=camb.nonlinear.Halofit(halofit_version="mead",
HMCode_A_baryon=A, HMCode_eta_baryon=eta0))
p.set_cosmology(H0=h*100, omch2=omc*h**2, ombh2=omb*h**2, mnu=mnu)
p.set_dark_energy(w=w)
p.set_initial_power(camb.InitialPowerLaw(As=As, ns=ns))
z_lin = np.linspace(0, 3, 128, endpoint=True)
p.set_matter_power(redshifts=z_lin, kmax=20.0, nonlinear=False)
r = camb.get_results(p)
sigma8 = r.get_sigma8()[-1]
k_lin, z_lin, pofk_lin_camb = r.get_matter_power_spectrum(minkh=1e-3, maxkh=20.0, npoints=128)
omv = r.omega_de + r.get_Omega("photon") + r.get_Omega("neutrino")
omm = p.omegam
# -
# Now run HMx to get the non-linear matter power spectrum (using its HMCode implementation).
# +
cosmology = {"Omega_m" : omm,
"Omega_b" : omb,
"Omega_v" : omv,
"h" : h,
"n_s" : ns,
"sigma_8" : sigma8,
"m_nu" : mnu}
halo_model = {"eta0" : eta0,
"As" : A}
Pk_HMx_dmonly = hmx.run_HMCode(cosmology=cosmology,
halo_model=halo_model,
k=k_lin,
z=z_lin,
pk_lin=pofk_lin_camb)
p.set_matter_power(redshifts=z_lin, kmax=max(k_lin), nonlinear=True)
r = camb.get_results(p)
Pk_nl_CAMB_interpolator = r.get_matter_power_interpolator()
pofk_nonlin_camb = Pk_nl_CAMB_interpolator.P(z_lin, k_lin, grid=True)
# -
# Finally, plot both the non-linear, HMCode power spectra, from CAMB and HMx.
# +
fig, ax = plt.subplots(2, 1, sharex=True)
fig.subplots_adjust(hspace=0, right=0.95)
cb, cmap = colorbar("magma", ax, vmin=z_lin[0], vmax=z_lin[-1])
cb.set_label("z")
for i in range(len(z_lin)):
ax[0].loglog(k_lin, pofk_lin_camb[i], ls=":", c=cmap(z_lin[i]), label="Linear" if i == 0 else None)
ax[0].loglog(k_lin, pofk_nonlin_camb[i], ls="--", c=cmap(z_lin[i]), label="HMCode CAMB" if i == 0 else None)
ax[0].loglog(k_lin, Pk_HMx_dmonly[i], ls="-", c=cmap(z_lin[i]), label="HMCode HMx" if i == 0 else None)
ax[1].semilogx(k_lin, Pk_HMx_dmonly[i]/pofk_nonlin_camb[i]-1, c=cmap(z_lin[i]))
ax[0].legend(frameon=False)
ax[0].set_ylabel("$P(k)$ [Mpc$^3$ $h^{-3}$]")
ax[1].set_ylabel("Frac. diff. HMCode")
ax[1].set_xlabel("$k$ [$h$ Mpc$^{-1}$]")
ax[0].set_title("HMCode vs HMx")
# fig.savefig("plots/HMCode_test_CAMB_vs_HMx.png", dpi=300)
# -
# ## Matter and pressure power spectra from HMx
#
# HMx is much slower than HMCode, so we only use 8 redshifts here.
# +
z_lin = np.linspace(0, 2, 8, endpoint=True)
p.set_matter_power(redshifts=z_lin, kmax=20.0, nonlinear=False)
r = camb.get_results(p)
k_lin, z_lin, pofk_lin_camb = r.get_matter_power_spectrum(minkh=1e-3, maxkh=20.0, npoints=128)
# +
log_Theat = np.linspace(7.6, 8.0, 3)
Pk_HMx_matter = {}
for T in log_Theat:
print(f"Running HMx with log Theat={T:.1f}")
halo_model={"Theat" : 10**T}
Pk_HMx_matter[T] = hmx.run_HMx(cosmology=cosmology, halo_model=halo_model,
fields=[pyhmx.constants.field_matter, pyhmx.constants.field_gas],
mode=pyhmx.constants.HMx2020_matter_pressure_with_temperature_scaling,
k=k_lin,
z=z_lin,
pk_lin=pofk_lin_camb)
# +
fig, ax = plt.subplots(2, 1, sharex=True)
fig.subplots_adjust(hspace=0, right=0.95)
cb, cmap = colorbar("plasma", ax, vmin=min(log_Theat), vmax=max(log_Theat))
cb.set_label("log T_heat")
ax[0].loglog(k_lin, Pk_HMx_dmonly[0], c="k", ls="--", label="HMCode")
for T in log_Theat:
ax[0].loglog(k_lin, Pk_HMx_matter[T][0,0,0], c=cmap(T))
ax[1].semilogx(k_lin, Pk_HMx_matter[T][0,0,0]/Pk_HMx_dmonly[0], c=cmap(T))
ax[0].legend(frameon=False)
ax[0].set_ylabel("$P(k)$ [Mpc$^3$ $h^{-3}$]")
ax[1].set_ylabel("Frac. diff. HMCode")
ax[1].set_xlabel("$k$ [$h$ Mpc$^{-1}$]")
ax[0].set_title("HMCode vs HMx")
# fig.savefig("plots/HMCode_vs_HMx.png", dpi=300)
| notebooks/pyhmx_demo.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import json
from ast import literal_eval
from datetime import datetime
SOURCE_FILENAME = 'beer_50000.json'
DATA_PATH = os.path.join(os.getcwd(), 'data')
# +
lines = []
beer_data = []
with open(os.path.join(DATA_PATH, SOURCE_FILENAME)) as infile:
lines = infile.readlines()
for line in lines:
beer_data.append(literal_eval(line))
# +
def pretty_json(my_dict):
return json.dumps(
my_dict,
sort_keys=True,
indent=4
)
print pretty_json(beer_data[-1])
# +
sample_beer_id = '20539'
results = [b for b in beer_data if b.get('beer/beerId') == sample_beer_id]
print 'Total reviews for beer/beerId %s: %d' % (sample_beer_id, len(results))
min_time = min([t.get('review/timeUnix') for t in results])
max_time = max([t.get('review/timeUnix') for t in results])
print 'First review date for beer/beerId %s: %s' % (sample_beer_id, datetime.fromtimestamp(int(min_time)))
print 'Last review date for beer/beerId %s: %s' % (sample_beer_id, datetime.fromtimestamp(int(max_time)))
# -
def groupby_key(data, key_str):
key_map = {}
for datum in data:
key = datum.get(key_str)
key_map[key] = key_map.setdefault(key, 0) + 1
return key_map
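`groupby_key` is a hand-rolled frequency count; the standard library's `collections.Counter` expresses the same thing. A sketch with made-up records in the shape of the beer data:

```python
from collections import Counter

reviews = [{"beer/beerId": "1"}, {"beer/beerId": "2"}, {"beer/beerId": "1"}]
counts = Counter(r.get("beer/beerId") for r in reviews)
print(counts.most_common(1))  # [('1', 2)]
```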
# +
print 'Total reviews:\t%s' % "{:,}".format(len(beer_data))
beers_grouped = groupby_key(beer_data, 'beer/beerId')
print 'Unique beers:\t%s' % "{:,}".format(len(beers_grouped.keys()))
brewers_grouped = groupby_key(beer_data, 'beer/brewerId')
print 'Unique brewers:\t%s' % "{:,}".format(len(brewers_grouped.keys()))
print
users_grouped = groupby_key(beer_data, 'user/profileName')
print 'Unique users:\t%s' % "{:,}".format(len(users_grouped.keys()))
print 'Top 10 reviewers'
sorted_users = sorted(users_grouped.items(), cmp=lambda u1, u2: cmp(u1[1], u2[1]), reverse=True)
for i in range(10):
print '\t#%2d: %-20s%d' % (i+1, sorted_users[i][0], sorted_users[i][1])
one_review_user_count = sum([1 for u in sorted_users if u[1] == 1])
print '1 review users:\t%s\t%0.2f%%' % ("{:,}".format(one_review_user_count), float(one_review_user_count) / len(users_grouped.keys()) * 100)
print
print 'Avg. rating:\t%0.2f' % ( sum([float(r.get('review/overall')) for r in beer_data]) / len(beer_data) )
print 'Rating distribution:'
reviews_grouped = groupby_key(beer_data, 'review/overall')
for score in sorted([score for score in reviews_grouped.keys()]):
count = reviews_grouped[score]
print '\t%s - %-8s %0.2f%%' % (score, "{:,}".format(count), float(count) / len(beer_data) * 100)
# -
| BeerDatasetReview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:miniconda3-s2s]
# language: python
# name: conda-env-miniconda3-s2s-py
# ---
# # 0.03 Generate Anomalies S2S
#
# ---
#
# Uses the climatology generated in `0.02_generate_climatology_S2S` to create anomalies from the S2S forecasting system.
# +
# %load_ext lab_black
import xarray as xr
import glob
from dask.distributed import Client
# -
client = Client("tcp://10.12.205.11:42722")
clim = xr.open_zarr(
"/glade/scratch/rbrady/abby_S2S/CESM1.S2S.tas_2m.climatology.zarr/",
consolidated=True,
)
raw = xr.open_zarr(
"/glade/scratch/rbrady/abby_S2S/CESM1.S2S.tas_2m.raw.zarr/", consolidated=True
)
# Going to just create ensemble mean anomalies for now. Having a little trouble with chunking strategies to get anomalies generated for each ensemble member. **NOTE**: If you want to have individual ensemble member anomalies for e.g. probabilistic metrics, just do it for each member separately (e.g. `raw.isel(member=0)` and so on) then save them out separately and concatenate them later.
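The anomaly step below groups initializations by day of year and subtracts the matching climatology. The same logic sketched in plain pandas on tiny synthetic data (the values are made up):

```python
import pandas as pd

ts = pd.Series(
    [1.0, 2.0, 3.0, 4.0],
    index=pd.to_datetime(["2000-01-01", "2000-01-02", "2001-01-01", "2001-01-02"]),
)
clim = ts.groupby(ts.index.dayofyear).mean()         # day-of-year climatology
anom = ts - clim.reindex(ts.index.dayofyear).values  # subtract the matching climatology
print(anom.tolist())  # [-1.0, -1.0, 1.0, 1.0]
```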
ensmean = raw.mean("member")
# Matching the chunking strategy for `clim` to make the computation more efficient.
ensmean = ensmean.chunk({"lead": 1, "lat": -1, "lon": 180, "init": "auto"}).persist()
anom = ensmean.groupby("init.dayofyear") - clim
# Generally have to rechunk after an operation like this to make sure chunk sizes are uniform. Also doing a groupby command like that returns a result with really small chunks. Keep in mind with `zarr` the power is that they get loaded in chunk-aware. So I make the inits one large chunk, knowing we'll operate on the full init dimension with `climpred` for operations.
anom = anom.chunk({"lat": "auto", "lon": "auto", "init": -1, "lead": 1}).persist()
# %time anom.to_zarr("/glade/scratch/rbrady/S2S/CESM1.S2S.tas_2m.anom.zarr/", consolidated=True)
# Looks like they are in anomaly format!
test = xr.open_zarr(
"/glade/scratch/rbrady/S2S/CESM1.S2S.tas_2m.anom.zarr/", consolidated=True
)
test.TAS.isel(init=10, lead=10).plot()
| 0.03_generate_anomalies_S2S.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Libraries for preprocessing
# +
import pandas as pd
import numpy as np
import matplotlib.dates as md
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from pylab import rcParams
import statsmodels.api as sm
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from mpl_toolkits.mplot3d import Axes3D
# %matplotlib inline
# -
# ### reading the data and converting object to timestamp
df = pd.read_csv("../data/data.csv", error_bad_lines=False)
df["timestamp"] = pd.to_datetime(df["timestamp"], infer_datetime_format=True)
df.head()
df
# ### 1st day and last day of the dataset
df['timestamp'].min(), df['timestamp'].max()
# +
df_temp = df
df_temp['timestamp'] = pd.to_datetime(df_temp['timestamp'])
df_temp = df_temp.set_index("timestamp")
resamp = df_temp.resample('D').mean()
rcParams['figure.figsize'] = 18, 8
decomposition = sm.tsa.seasonal_decompose(resamp, model='additive')
fig = decomposition.plot()
plt.show()
# -
# ### sort data according to time and convert time into seconds/minutes
df = df.sort_values('timestamp')
df.dtypes
df['timestamp'] = pd.to_datetime(df['timestamp'])
df['date_time_int'] = df.timestamp.astype(np.int64)
# +
data = df[['value']]
scaler = StandardScaler()
np_scaled = scaler.fit_transform(data)
data = pd.DataFrame(np_scaled)
# train oneclassSVM
outliers_fraction=0.01
model = OneClassSVM(nu=outliers_fraction, kernel="rbf", gamma=0.01)
model.fit(data)
df['anomaly_svm'] = pd.Series(model.predict(data))
fig, ax = plt.subplots(figsize=(25,6))
a = df.loc[df['anomaly_svm'] == -1, ['date_time_int', 'value']] #anomaly
ax.plot(df['date_time_int'], df['value'], color='teal', label ='Normal')
ax.scatter(a['date_time_int'],a['value'], color='red', label = 'Anomaly')
plt.legend()
plt.show();
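`nu` in `OneClassSVM` is an upper bound on the fraction of training points treated as outliers (and a lower bound on the fraction of support vectors), so `outliers_fraction=0.01` asks the model to flag roughly 1% of points. A small synthetic check of that behavior (parameters here are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))

pred = OneClassSVM(nu=0.05, kernel="rbf", gamma=0.1).fit_predict(X)
frac_out = (pred == -1).mean()
# nu bounds the training-error fraction, so roughly ~5% get flagged
assert 0.0 < frac_out < 0.2
```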
# +
a = df.loc[df['anomaly_svm'] == 1, 'value']
b = df.loc[df['anomaly_svm'] == -1, 'value']
fig, axs = plt.subplots(figsize=(10,6))
axs.hist([a,b], bins=32, stacked=True, color=['blue', 'red'])
plt.show();
# -
df[df['anomaly_svm']==-1]
# +
fig, ax = plt.subplots(figsize=(25,6))
a_no = df.loc[df['timestamp'] > '2014-07-10 00:04:00']
a = df.loc[df['anomaly_svm'] == -1, ['timestamp', 'value']] #anomaly
# a = df.loc[df['timestamp'] > '2014-07-11 00:04:00']
ax.plot(a_no['timestamp'], a_no['value'], color='blue', label = 'Normal')
ax.scatter(a['timestamp'],a['value'], color='red', label = 'Anomaly')
plt.legend()
plt.show();
# -
| nbs/One-Class-SVM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Cleanup
import boto3
sagemaker = boto3.client('sagemaker')
# +
# Deleting Endpoints
# +
def list_all_endpoints(next_token = None):
result = []
resp = {}
if next_token:
resp = sagemaker.list_endpoints(NextToken=next_token)
else:
resp = sagemaker.list_endpoints()
resources = resp["Endpoints"]
for resource in resources:
result.append(resource)
next_token = resp.get("NextToken")
if next_token:
return resources + list_all_endpoints(next_token)
else:
return resources
def delete_all_endpoints():
for resource in list_all_endpoints():
resource_name = resource["EndpointName"]
print("Deleting endpoint {}".format(resource_name))
# WARN: Are you sure you want do delete endpoints?
# sagemaker.delete_endpoint(EndpointName=resource_name)
delete_all_endpoints()
# -
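Both helpers in this notebook follow the same recursive `NextToken` pagination pattern. A hypothetical offline stub (the paged API here is entirely made up) showing the pattern in isolation:

```python
# Fake two-page API response table: token -> (items, next_token)
PAGES = {None: (["a", "b"], "t1"), "t1": (["c"], None)}

def list_page(token=None):
    items, nxt = PAGES[token]
    return {"Items": items, "NextToken": nxt}

def list_all(token=None):
    # Recurse while the response carries a NextToken, like list_all_endpoints
    resp = list_page(token)
    items = resp["Items"]
    nxt = resp.get("NextToken")
    return items + list_all(nxt) if nxt else items

print(list_all())  # ['a', 'b', 'c']
```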
# # Deleting models
#
# +
def get_models(next_token = None):
result = []
models_resp = {}
if next_token:
models_resp = sagemaker.list_models(NextToken=next_token)
else:
models_resp = sagemaker.list_models()
models = models_resp["Models"]
for model in models:
result.append(model)
next_token = models_resp.get("NextToken")
if next_token:
        return models + get_models(next_token)
else:
return models
def delete_all_models():
for model in get_models():
model_name = model["ModelName"]
print("Deleting model {}".format(model_name))
# WARN: Are you sure you want do delete models?
# sagemaker.delete_model(ModelName=model_name)
delete_all_models()
# -
| mt-cleanup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# 
#
# <h1 align='center'>Exploring Python api for Statistics Canada New Data Model (NDM)</h1>
#
# <h4 align='center'><NAME> $\mid$ October 29 2018</h4>
#
# In this notebook we explore functionality of the Python API for Statistics Canada developed by <NAME> https://github.com/ianepreston
#
# <h2 align='center'>Abstract</h2>
#
# In this section we explore the Python library stats_can. The package can be installed via https://anaconda.org/ian.e.preston/stats_can
# <h2 align='center'>Methods Available</h2>
#
# In this section we explore available methods of stats_can.
import stats_can
import pandas as pd
methods = dir(stats_can)
for item in methods:
print(item)
# <h2 align='center'>Getting Updated Series Lists Method</h2>
#
# Let us first explore the method get_changed_series_list. It is possible to request the list of series that were updated at 8:30 AM EST on a given release day, up until midnight that same day.
#
# Calling stats_can.get_changed_series_list() will return a list whose entries are dictionaries. From this list we can construct a table using pandas dataframes.
changed_series = stats_can.get_changed_series_list()
changed_series_df = pd.DataFrame.from_dict(changed_series)
short_series_list = changed_series_df.head()
short_series_list
# <h2 align='center'>Downloading Tables Method</h2>
#
# We can use the productID column on our dataframe to download tables. We will only do the first five.
for item in short_series_list["productId"]:
print(stats_can.download_tables(str(item)))
# <h2 align='center'>Get Series Information from Vector ID</h2>
#
# We can use the vectorID column on our dataframe to get further information. We will only do the first five.
pd.DataFrame.from_dict(stats_can.get_series_info_from_vector(short_series_list["vectorId"]))
# <h2 align='center'>Get Tables from Vector ID</h2>
#
# We can use the vectorID column on our dataframe to get tables.
for item in short_series_list["vectorId"]:
print(stats_can.get_tables_for_vectors(str(item)))
print("\n")
# <h2 align='center'>Vector to DataFrame Method</h2>
#
# We can use the vectorID column to get a dataframe
stats_can.vectors_to_df(short_series_list["vectorId"])
# <h2 align='center'>Methods that return errors (at times)</h2>
#
# <h3 align='center'>Get changed cube list method</h3>
#
# When attempting the method get_changed_cube_list(), found two cases:
#
# #### Case 1) Return error
#
# #### Case 2) Does not return error
#
#
# <h4 align='center'>Case 1: get_changed_cube_list returns an error</h4>
#
# We trace the error and find that it has to do with the url it is calling. It appears that if we call at a time where the url https://www150.statcan.gc.ca/t1/wds/rest/getChangedCubeList/2018-10-30 has not been updated, the value found on the website is
#
# {"message":"The input date is a future release date."}.
#
# ---------------------------------------------------------------------------
#
#
# ##### HTTPError Traceback (most recent call last)
#
# ##### <ipython-input-6-56447b1e7e3c> in < module>()
# ----> 2 changed_tables = stats_can.get_changed_cube_list()
#
#
#
# ##### ~/stats_can/stats_can/scwds.py in get_changed_cube_list(date)
# 166 url = SC_URL + 'getChangedCubeList' + '/' + str(date)
# 167 result = requests.get(url)
# ---> 168 result = check_status(result)
# 169 return result['object']
#
#
#
# ##### ~/stats_can/stats_can/scwds.py in check_status(results)
# 40 JSON from an API call parsed as a dictionary
# 41 """
# ---> 42 results.raise_for_status()
# 43 results = results.json()
#
#
#
# ##### /opt/conda/lib/python3.6/site-packages/requests/models.py in raise_for_status(self)
# 934 if http_error_msg:
# ---> 935 raise HTTPError(http_error_msg, response=self)
# 936
# 937 def close(self):
#
#
# ##### HTTPError: 404 Client Error: Not Found for url: https://www150.statcan.gc.ca/t1/wds/rest/getChangedCubeList/2018-10-30
# <h4 align='center'>Case 2: get_changed_cube_list does not return an error</h4>
#
# In the event this method does not return error, upon calling we will obtain an array whose entries are dictionaries, which we can later organize in the form of a pandas dataframe.
changed_tables = stats_can.get_changed_cube_list()
changed_tables_df = pd.DataFrame.from_dict(changed_tables)
changed_tables_df.head()
# <h3 align='center'>Get cube metadata method</h3>
#
# Provided the url has been updated, we can then use the productID column entries to obtain metadata. We will take the first entry in the dataframe we just created "32100116". In this section we will print only those metadata values that are "short" for visual purposes.
metadata_32100116= stats_can.get_cube_metadata("32100116")
metadata_entries = metadata_32100116[0]
keys_names = [item for item in metadata_entries.keys()]
for i in range(len(keys_names)-3):
print(str(keys_names[i]) + ":\t"+ str(metadata_entries[keys_names[i]]))
# <h3 align='center'>H5-related Methods</h3>
#
# At present I found that some h5-related Methods return different errors
stats_can.tables_to_h5('33100036')
stats_can.table_from_h5('33100036')
stats_can.h5_included_keys()
stats_can.metadata_from_h5('33100036')
# <h3 align='center'>SC and SCWDS related Methods</h3>
#
# At present I found that some both sc and scwds are non-callable modules.
stats_can.sc()
stats_can.scwds()
# <h2 align='center'>Summary</h2>
#
# We found the modules to get series lists, get cubes and get metadata are powerful modules that work extremely well along with pandas dataframes.
#
# At this time it seems like there is further testing to be done on modules related to h5, sc, scwds and zip.
#
# It is also interesting to note that the cube methods work only when the data has been updated for the latest date.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Metabolon data to KEGG gene merging
#
# ## Inputs:
#
# - xlsx file from Metabolon, tab to use
#
# Notes:
#
# Biochemical Name* Indicates compounds that have not been officially confirmed based on a standard, but we are confident in its identity.
# Biochemical Name** Indicates a compound for which a standard is not available, but we are reasonably confident in its identity or the information provided.
#
# Biochemical Name (#) or [#] Indicates a compound that is a structural isomer of another compound in the Metabolon spectral library.
# For example, a steroid that may be sulfated at one of several positions that are indistinguishable by the mass spectrometry data or a diacylglycerol for which more than one stereospecific molecule exists.
#
#
#
# ## Input data saved
#
# - tsv files of sample metadata, compound/peak metadata, and abundance data
#
# ## Reference DBs:
#
# - Web REST accesses current KEGG compound, reaction, and ortholog datasets, plus compound-to-reaction and reaction-to-KEGG-ortholog link data
# - outputs raw and merged KEGG reference data used
#
# ## Data handling
#
# - filter out very common reaction compounds (NAD(P)/H, A(D/T)P, PO4)
# - remove any unidentified compounds, those without a KEGG compound ID (drug IDs not used), or those without a ko
# - use both in the case of duplicate differing KEGG IDs
#
# ## Outputs
#
# - merged compound-to-KEGG metadata, and a simple chemical ID to ko table
#
#
# -
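# As a small illustration of the naming conventions in the notes above, the `*`, `**`, and `(#)`/`[#]` markers can be stripped from a biochemical name with a regex. `strip_metabolon_markers` is a hypothetical helper for exploration; it is not used in the processing below, and the marker format is assumed from the notes.

```python
import re

def strip_metabolon_markers(name):
    # remove one trailing */** confidence marker or (#)/[#] isomer marker (assumed format)
    return re.sub(r'(\*{1,2}|\s*[\(\[]\d+[\)\]])\s*$', '', name).strip()

print(strip_metabolon_markers('glutamate**'))         # glutamate
print(strip_metabolon_markers('diacylglycerol (1)'))  # diacylglycerol
```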
# ## Import raw metabolon data
#libraries needed
import pandas as pd
import numpy as np
# + tags=["parameters"]
#change settings here
#load tab from file (all data; a multiindex mangles nulls)
infile='../test.xlsx'
infiletab='OrigScale'
cols_metadata=13
rows_metadata=27
# -
cols_metadata=int(cols_metadata)
rows_metadata=int(rows_metadata)
#read file
rawOrigScale=pd.read_excel(infile, sheet_name=infiletab, header=None)
# + slideshow={"slide_type": "slide"}
#get sample meta data
sample_meta=rawOrigScale.iloc[:rows_metadata,cols_metadata:].T
#sample metadata column names
sample_meta_columns=rawOrigScale.iloc[:rows_metadata,cols_metadata-1].to_list()
#row and col metadata are mangled into one cell, split it to correct them
sample_meta_columns=sample_meta_columns[:-1]+[sample_meta_columns[-1].lstrip().split()[0]]
#add in column names
sample_meta.columns=sample_meta_columns
#drop redundant col and index with uniqID
assert (sample_meta.iloc[:,-1]==sample_meta['GROUP_DESCRIPTION']).all()
sample_meta = sample_meta.iloc[:,:-1]
sample_meta = sample_meta.set_index(sample_meta['CLIENT_IDENTIFIER'], verify_integrity=True)
sample_meta.to_csv('_'.join([infile,infiletab,'sampleMETADATA'])+'.tsv')
# + slideshow={"slide_type": "slide"}
#pull out metadata for compounds/peaks
chem_meta=rawOrigScale.iloc[rows_metadata:,:cols_metadata]
#compounds/peaks metadata column names
chem_meta_columns=rawOrigScale.iloc[rows_metadata-1,:cols_metadata].to_list()
#row and col metadata are mangled into one cell, split it to correct them
chem_meta_columns=chem_meta_columns[:-1]+[chem_meta_columns[-1].lstrip().split()[1]]
#add in column names
chem_meta.columns=chem_meta_columns
chem_meta = chem_meta.set_index(chem_meta['BIOCHEMICAL'], verify_integrity=True)
chem_meta.to_csv('_'.join([infile,infiletab,'metabolitesMETADATA'])+'.tsv')
# + slideshow={"slide_type": "slide"}
#get data
rawdata=rawOrigScale.iloc[rows_metadata:,cols_metadata:].astype(np.float64)
metabolomics=rawdata.set_axis(chem_meta.index, axis=0).set_axis(sample_meta.index, axis=1)
metabolomics.to_csv('_'.join([infile,infiletab,'abundance'])+'.tsv')
metabolomics
# +
#how many do we have compounds for?
#should be 958+4 (at end) lines, then 1270-(958+4) without IDs
#how many do we have DB IDs for?
print('number of peaks\t\t%s\n' % len(chem_meta))
for DB in 'CHEMICAL_ID COMP_ID PUBCHEM CAS KEGG HMDB'.split():
#print( len(chem_meta[DB])-sum(chem_meta[DB].isna())
print(chem_meta[DB].describe())
print()
# + slideshow={"slide_type": "slide"}
print("percent of peaks with the following IDs")
nulls=chem_meta['CHEMICAL_ID PUBCHEM CAS KEGG HMDB'.split()].isnull().sum()
(100.0*(len(chem_meta[DB])-nulls)/len(chem_meta[DB])).round(1).sort_values(ascending=False)
# -
chem_meta['SUPER_PATHWAY'].unique()
chem_meta['SUB_PATHWAY'].unique()
chem_meta.head()
#searching for strings
s='methylamine'
chem_meta['BIOCHEMICAL'][chem_meta['BIOCHEMICAL'].str.match(r'.*'+s+'.*', case=False)]
chem_meta[chem_meta.BIOCHEMICAL.str.match(r'.*'+s+'.*', case=False)].T
# + [markdown] slideshow={"slide_type": "slide"}
# ## Retrieve all KEGG reactions
#
# See: https://www.kegg.jp/kegg/rest/keggapi.html
#
# Kinds of links available for compounds:
#
# https://www.genome.jp/dbget-bin/get_linkdb?targettype=all&keywords=cpd%3AC00025&targetformat=html&targetdb=alldb
#
#
# -
# ### get all KEGG IDS as tsv files
#
# + language="bash"
#
# mkdir -p kegg
#
# curl http://rest.kegg.jp/info/compound | tee kegg/compound.info
# wget -P kegg http://rest.kegg.jp/list/compound
# curl http://rest.kegg.jp/info/reaction | tee kegg/reaction.info
# wget -P kegg http://rest.kegg.jp/list/reaction
# curl http://rest.kegg.jp/info/ko | tee kegg/ko.info
# wget -P kegg http://rest.kegg.jp/list/ko
#
# wget http://rest.kegg.jp/link/compound/reaction -O kegg/compound-reaction
# wget http://rest.kegg.jp/link/reaction/ko -O kegg/reaction-ko
#
# -
# !wc -l kegg/*
# ### Example information available per entry:
#
# http://rest.kegg.jp/get/cpd:C00025
#
# ENTRY C00025 Compound
# ...
# REACTION R00021 R00093 R00114 R00239 R00241 R00243 R00245 R00248
# R00250 R00251 R00253 R00254 R00256 R00257 R00258 R00259
# ...
# ...
#
# http://rest.kegg.jp/get/rn:R00519
#
# ENTRY R00519 Reaction
# ...
# ORTHOLOGY K00122 formate dehydrogenase [EC:1.17.1.9]
# K00123 formate dehydrogenase major subunit [EC:1.17.1.9]
# K00124 formate dehydrogenase iron-sulfur subunit
# K00126 formate dehydrogenase subunit delta [EC:1.17.1.9]
# K00127 formate dehydrogenase subunit gamma
# K22515 formate dehydrogenase beta subunit [EC:1.17.1.9]
# ...
# ...
#
#
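# The `link` endpoints used below return two tab-separated columns of prefixed IDs (pairs like `rn:R00021` / `cpd:C00025`). The prefix handling done by the helper functions in the next cell can be sketched on a literal two-row sample (format assumed from the KEGG REST docs):

```python
import pandas as pd
from io import StringIO

# two sample rows in the tab-separated format returned by rest.kegg.jp/link/...
raw = 'rn:R00021\tcpd:C00025\nrn:R00093\tcpd:C00025\n'
df = pd.read_csv(StringIO(raw), sep='\t', header=None)

# the prefix before ':' becomes the column name; values keep only the bare ID
df.columns = [col.str.split(':').str[0].iloc[0] for _, col in df.items()]
df = df.apply(lambda s: s.str.split(':').str[1])
print(df)
```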
# +
#functions to get and parse KEGG
def getkid(ser):
h=ser.str.split(':').str[0].unique()
assert len(h)==1
return h[0]
def kidprefix_to_colname(df):
df.columns=df.apply(getkid)
return df.apply(lambda x: x.str.split(':').str[1])
def KEGG_retrieve(dbname):
url='http://rest.kegg.jp/list/'+dbname
print('downloading %s' % url)
df=pd.read_csv(url, sep='\t', header=None)
#get kid out of first element, put in column headers
listtkid=getkid(df[0])
df.columns=[listtkid, listtkid+'_description']
#remove kid from first col
df[listtkid]=df[listtkid].str.split(':').str[1]
df.columns.name=dbname
return df
def KEGG_link(db1,db2):
url='http://rest.kegg.jp/link/'+db1+'/'+db2
print('downloading %s' % url)
df=pd.read_csv(url, sep='\t', header=None)
df.columns.name=db1+'_'+db2
return kidprefix_to_colname(df)
# -
# Use REST interface to get the same file as above.
# +
# %%time
c=KEGG_retrieve('compound')
r=KEGG_retrieve('reaction')
k=KEGG_retrieve('ko')
cr=KEGG_link('compound','reaction')
rk=KEGG_link('reaction','ko')
# -
kegg_crk = pd.merge(cr,rk, how='outer', indicator='merged_reactionlinks')
kegg_crk.head()
kegg_crk.describe()
kegg_crk.merged_reactionlinks.value_counts()
# +
#add in desc
for d in c, r, k:
mergeind='merged_'+d.columns.name
kegg_crk = pd.merge(kegg_crk,d, how='outer', indicator=mergeind)
print(kegg_crk[mergeind].value_counts())
kegg_crk.head()
# -
kegg_crk.describe()
kegg_crk=kegg_crk.drop(kegg_crk.columns[kegg_crk.columns.str.match(r'merged_.*')], axis=1)
kegg_crk.to_csv('kegg/merged_crk.tsv', sep='\t')
# ## pull out unique compounds from metabolon and link to ko to explore the data
# +
#pull out unique CID for KEGG lookup
metabolonKEGG=pd.Series(chem_meta['KEGG'].dropna().unique())
#occasionally there is a drug / compound cross-ref; we want the compound
def getCID(KEGGstr):
ids=[c for c in KEGGstr.split(',') if c[0]=='C']
if len(ids)==1: return ids[0]
else: return KEGGstr
#drop dupes again in case the drug ones were dupes
metabolonKEGG=metabolonKEGG.apply(getCID).dropna().unique()
metabolonKEGG=pd.DataFrame(metabolonKEGG, columns=['cpd'])
# -
metabolon_crk=pd.merge(metabolonKEGG, kegg_crk, how='left', indicator='merged')
print(len(metabolon_crk))
metabolon_crk.describe()
# ### common cpd are not useful to trace into rxn and genes
# + slideshow={"slide_type": "slide"}
for desc in metabolon_crk.columns[metabolon_crk.columns.str.match(r'.*_description')]:
print('\n'+desc)
print(metabolon_crk[desc].value_counts(ascending=False).head(20))
# -
#common cpd are not useful to trace into rxn and genes
metabolon_crk['cpd'].value_counts(ascending=False).head(20)
# + slideshow={"slide_type": "slide"}
#drop ATP, ADP, NAD, NADH, phosphate, etc.
c.head(17)
# -
fromcommon=metabolon_crk[metabolon_crk['cpd'].isin(c.head(17)['cpd'])]
print('WARNING: these very common cofactors are found in the dataset')
fromcommon['cpd_description'].value_counts()
# ### remerge
# +
metabolon_crk=pd.merge(metabolonKEGG, kegg_crk, how='left', indicator='merged')
print('WARNING: dropping H2O, NAD(P)/H, ATP/ADP, and PO4')
print('was %s lines' % len(metabolon_crk))
droplist=set('''C00001
C00002
C00003
C00004
C00005
C00006
C00008
C00009'''.split())
#merge just like above, but remove the common cofactors from metabolon list first, so as not to pull in junk
metabolon_crk=pd.merge(metabolonKEGG[~metabolonKEGG['cpd'].isin(droplist)], kegg_crk, how='left', indicator='merged')
#metabolon_crk=metabolon_crk[~metabolon_crk.cpd.isin(droplist)]
print('now %s lines' % len(metabolon_crk))
metabolon_crk.describe()
# -
#drop those without any ko genes linked to these compounds
metabolon_crk=metabolon_crk.drop('merged', axis=1).dropna(subset=['ko']).drop_duplicates()
metabolon_crk
#get the unique orthologs involved
metabolon_ko = metabolon_crk[['ko','ko_description']].dropna(subset=['ko']).drop_duplicates()
metabolon_ko
metabolon_ko.describe()
# # Merge Metabolon with KEGG ortholog data
#there are duplicate KEGG IDs in the metabolon table; we let these duplicate the data, as at least some seem different
tmp=chem_meta[~chem_meta['KEGG'].isin(droplist)].dropna(subset=['KEGG']).drop_duplicates()
tmp[tmp.duplicated(subset=['KEGG'], keep=False)]
# +
#clean for merge
print('WARNING: dropping H2O, NAD(P)/H, ATP/ADP, and PO4')
droplist=set('''C00001
C00002
C00003
C00004
C00005
C00006
C00008
C00009'''.split())
chem_meta_clean=chem_meta[~chem_meta['KEGG'].isin(droplist)].dropna(subset=['KEGG']).drop_duplicates()
kegg_crk_clean=kegg_crk[~kegg_crk['cpd'].isin(droplist)].dropna(subset=['cpd']).drop_duplicates()
chem_meta_clean_ko=chem_meta_clean.merge(kegg_crk_clean, how='left', indicator='merged', left_on='KEGG', right_on='cpd')
chem_meta_clean_ko
# -
chem_meta_clean_ko.describe()
chem_meta_clean_ko[chem_meta_clean_ko['merged']=='left_only']
chem_meta_clean_ko=chem_meta_clean_ko.drop('merged', axis=1).dropna(subset=['CHEMICAL_ID','KEGG','ko'])
chem_meta_clean_ko.to_csv('_'.join([infile,infiletab,'metabolitesMETADATA_filtered_KEGGmerge'])+'.tsv')
chem_meta_clean_ko.describe()
#get simple CHEMICAL_ID COMP_ID to ko table
chem_ko=chem_meta_clean_ko.copy(deep=True)
chem_ko=chem_ko[['CHEMICAL_ID','KEGG','ko']].dropna(how='any').drop_duplicates()
chem_ko
# + slideshow={"slide_type": "slide"}
chem_ko.to_csv('_'.join([infile,infiletab,'metabolitesCHEMICALID_filtered_KEGG_ko'])+'.tsv')
chem_ko.describe()
# + [markdown] slideshow={"slide_type": "slide"}
# ## quick test comparison to ko found in metagenomics Novogene annotations
# -
# !rsync <EMAIL>:/lab_data/ryan_lab/jamesrh/ERyan_Novogene_contract_H202SC19111892_working/result/05.FunctionAnnotation/KEGG/GeneNums/Unigenes.absolute.ko.xls ./
# !cat Unigenes.absolute.ko.xls | cut -f 1-4 > tmp.Unigenes.absolute.ko.xls
MG_KEGG=pd.read_csv('tmp.Unigenes.absolute.ko.xls', sep='\t')
MG_KEGG
test = chem_meta_clean_ko.merge(MG_KEGG, how='outer', left_on='ko', right_on='KO_ID', indicator='merged')
print(len(test))
test.describe()
test.merged.value_counts().to_frame()
# +
for v in test.merged.value_counts().to_frame().iterrows():
    print("\n%s total %s\n" % (v[0], v[1].iloc[0]))
print(test[test['merged']==v[0]]['SUPER_PATHWAY'].value_counts(dropna=False))
c.cpd.value_counts()
# -
test2=test[test['merged']=='both'][['SUPER_PATHWAY','SUB_PATHWAY','rn_description','BIOCHEMICAL']]
test2=test2.groupby(['SUPER_PATHWAY','SUB_PATHWAY']).agg({"rn_description": pd.Series.nunique, "BIOCHEMICAL": pd.Series.nunique})
test2
# + slideshow={"slide_type": "slide"}
#only rarely are there multiple chem per rxn
test2[test2['rn_description']/test2['BIOCHEMICAL']<1]
# + slideshow={"slide_type": "slide"}
#usually, there are more rxn then chemicals
test2[test2['rn_description']/test2['BIOCHEMICAL']>=1]
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.6.13 64-bit (''pytorch2'': conda)'
# language: python
# name: python3
# ---
import torch as tn
from torchvision import datasets, transforms
import torchtt as tntt
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np
import datetime
device_name = 'cuda:0'
data_dir_test = 'seg_test/'
data_dir_train = 'seg_train/'
N_shape = [10,15]
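# The TT layers defined below operate on a factorized view of each image: with `N_shape = [10, 15]`, the 150x150 RGB images are reshaped to mode sizes `[3] + N_shape + N_shape`. A quick check of the shape arithmetic (numpy is used here purely for illustration):

```python
import numpy as np

N_shape = [10, 15]
side = N_shape[0] * N_shape[1]   # 150, the Resize/CenterCrop target used below
img = np.zeros((3, side, side))

# same factorization as tn.reshape(inputs, [-1, 3] + 2 * N_shape) in the training loop
tt_view = img.reshape([3] + N_shape + N_shape)
print(tt_view.shape)  # (3, 10, 15, 10, 15)
```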
# +
transform_train = transforms.Compose([transforms.Resize(N_shape[0]*N_shape[1]), transforms.CenterCrop(N_shape[0]*N_shape[1]), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_train = datasets.ImageFolder(data_dir_train, transform=transform_train)
dataloader_train = tn.utils.data.DataLoader(dataset_train, batch_size=32, shuffle=True, pin_memory = True, num_workers = 16)
transform_test = transforms.Compose([transforms.Resize(N_shape[0]*N_shape[1]), transforms.CenterCrop(N_shape[0]*N_shape[1]), transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
dataset_test = datasets.ImageFolder(data_dir_test, transform=transform_test)
dataloader_test = tn.utils.data.DataLoader(dataset_test, batch_size=32, shuffle=True, pin_memory = True, num_workers = 16)
# inputs_train = list(dataloader_train)[0][0].to(device_name)
# labels_train = list(dataloader_train)[0][1].to(device_name)
#
# inputs_test = list(dataloader_test)[0][0].to(device_name)
# labels_test = list(dataloader_test)[0][1].to(device_name)
# +
class BasicTT(nn.Module):
def __init__(self):
super().__init__()
p = 0.33
self.ttl1 = tntt.nn.LinearLayerTT([3]+N_shape+N_shape, [32]+N_shape+N_shape, [1,9,3,3,2,1], initializer = 'He')
self.dropout1 = nn.Dropout(p)
self.ttl2 = tntt.nn.LinearLayerTT([32]+N_shape+N_shape, [32,8,8,8,8], [1,8,4,3,2,1], initializer = 'He')
self.dropout2 = nn.Dropout(p)
self.ttl3 = tntt.nn.LinearLayerTT([32,8,8,8,8], [8,4,4,4,4], [1,4,4,4,4,1], initializer = 'He')
self.dropout3 = nn.Dropout(p)
self.ttl4 = tntt.nn.LinearLayerTT([8,4,4,4,4], [4,4,4,4,4], [1,3,3,3,3,1], initializer = 'He')
self.dropout4 = nn.Dropout(p/2)
self.linear = nn.Linear(4**5, 6, dtype = tn.float32)
self.logsoftmax = nn.LogSoftmax(1)
def forward(self, x):
x = self.ttl1(x)
x = tn.relu(x)
x = self.dropout1(x)
x = self.ttl2(x)
x = tn.relu(x)
x = self.dropout2(x)
x = self.ttl3(x)
x = tn.relu(x)
x = self.dropout3(x)
x = self.ttl4(x)
x = tn.relu(x)
x = self.dropout4(x)
x = x.view(-1,4**5)
x = self.linear(x)
return self.logsoftmax(x)
class BasicTTdeep(nn.Module):
def __init__(self):
super().__init__()
p = 0.5
self.ttl1 = tntt.nn.LinearLayerTT([3]+N_shape+N_shape, [32]+N_shape+N_shape, [1,9,3,3,2,1], initializer = 'He')
self.dropout1 = nn.Dropout(p/2)
self.ttl2 = tntt.nn.LinearLayerTT([32]+N_shape+N_shape, [32,8,8,8,8], [1,8,4,3,2,1], initializer = 'He')
self.dropout2 = nn.Dropout(p/2)
self.ttl3 = tntt.nn.LinearLayerTT([32,8,8,8,8], [8,4,4,4,4], [1,4,4,4,4,1], initializer = 'He')
self.dropout3 = nn.Dropout(p)
self.ttl4 = tntt.nn.LinearLayerTT([8,4,4,4,4], [4,4,4,4,4], [1,2,2,2,2,1], initializer = 'He')
self.dropout4 = nn.Dropout(p)
self.ttl5 = tntt.nn.LinearLayerTT([4,4,4,4,4], [4,4,4,4,4], [1,2,2,2,2,1], initializer = 'He')
self.dropout5 = nn.Dropout(p)
self.ttl6 = tntt.nn.LinearLayerTT([4,4,4,4,4], [4,4,4,4,4], [1,2,2,2,2,1], initializer = 'He')
self.dropout6 = nn.Dropout(p)
self.ttl7 = tntt.nn.LinearLayerTT([4,4,4,4,4], [4,4,4,4,4], [1,2,2,2,2,1], initializer = 'He')
self.dropout7 = nn.Dropout(p)
self.ttl8 = tntt.nn.LinearLayerTT([4,4,4,4,4], [3,3,3,3,3], [1,3,3,3,3,1], initializer = 'He')
self.dropout8 = nn.Dropout(p)
self.linear = nn.Linear(3**5, 6, dtype = tn.float32)
self.logsoftmax = nn.LogSoftmax(1)
def forward(self, x):
x = self.ttl1(x)
x = tn.relu(x)
x = self.dropout1(x)
x = self.ttl2(x)
x = tn.relu(x)
x = self.dropout2(x)
x = self.ttl3(x)
x = tn.relu(x)
x = self.dropout3(x)
x = self.ttl4(x)
x = tn.relu(x)
x = self.dropout4(x)
x = self.ttl5(x)
x = tn.relu(x)
x = self.dropout5(x)
x = self.ttl6(x)
x = tn.relu(x)
x = self.dropout6(x)
x = self.ttl7(x)
x = tn.relu(x)
x = self.dropout7(x)
x = self.ttl8(x)
x = tn.relu(x)
x = self.dropout8(x)
x = x.view(-1,3**5)
x = self.linear(x)
return self.logsoftmax(x)
# +
model = BasicTT()
model.to(device_name)
print('Number of parameters', sum(tn.numel(p) for p in model.parameters()))
#optimizer = tn.optim.SGD(model.parameters(), lr=0.001, momentum=0.1)
optimizer = tn.optim.Adam(model.parameters(), lr=0.005)
scheduler = tn.optim.lr_scheduler.StepLR(optimizer, step_size=50,gamma=0.5)
loss_function = tn.nn.CrossEntropyLoss()
# +
def do_epoch(i):
loss_total = 0.0
n_total = 0
n_correct = 0
#tme = datetime.datetime.now()
for k, data in enumerate(dataloader_train):
#tme = datetime.datetime.now() - tme
#print('t0',tme)
#tme = datetime.datetime.now()
inputs, labels = data[0].to(device_name), data[1].to(device_name)
#tme = datetime.datetime.now() - tme
#print('t1',tme)
#tme = datetime.datetime.now()
inputs = tn.reshape(inputs,[-1,3]+2*N_shape)
#tme = datetime.datetime.now() - tme
#print('t2',tme)
#tme = datetime.datetime.now()
optimizer.zero_grad()
# Make predictions for this batch
outputs = model(inputs)
# Compute the loss and its gradients
loss = loss_function(outputs, labels)
# regularization
#l2_lambda = 0.005
#l2_norm = sum(p.pow(2.0).sum() for p in model.parameters())
loss = loss#+l2_lambda*l2_norm
loss.backward()
# Adjust learning weights
optimizer.step()
#tme = datetime.datetime.now() - tme
#print('t3',tme)
n_correct += tn.sum(tn.max(outputs,1)[1] == labels).cpu()
n_total+=inputs.shape[0]
loss_total += loss.item()
# print('\t\tbatch %d error %e'%(k+1,loss))
#tme = datetime.datetime.now()
return loss_total/len(dataloader_train), n_correct/n_total
def test_loss():
loss_total = 0
for data in dataloader_test:
inputs, labels = data[0].to(device_name), data[1].to(device_name)
inputs = tn.reshape(inputs,[-1,3]+2*N_shape)
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss_total += loss.item()
return loss_total/len(dataloader_test)
def test_data():
n_total = 0
n_correct = 0
loss_total = 0
for data in dataloader_test:
inputs, labels = data[0].to(device_name), data[1].to(device_name)
inputs = tn.reshape(inputs,[-1,3]+2*N_shape)
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss_total += loss.item()
n_correct += tn.sum(tn.max(outputs,1)[1] == labels)
n_total+=inputs.shape[0]
return loss_total/len(dataloader_test), n_correct/n_total
def train_accuracy():
n_total = 0
n_correct = 0
for data in dataloader_train:
inputs, labels = data[0].to(device_name), data[1].to(device_name)
inputs = tn.reshape(inputs,[-1,3]+2*N_shape)
outputs = model(inputs)
n_correct += tn.sum(tn.max(outputs,1)[1] == labels)
n_total+=inputs.shape[0]
return n_correct/n_total
# +
n_epochs = 100
history_test_accuracy = []
history_test_loss = []
history_train_accuracy = []
history_train_loss = []
for epoch in range(n_epochs):
print('Epoch %d/%d'%(epoch+1,n_epochs))
time_epoch = datetime.datetime.now()
model.train(True)
train_loss, train_acc = do_epoch(epoch)
model.train(False)
test_loss, test_acc = test_data()
scheduler.step()
time_epoch = datetime.datetime.now() - time_epoch
print('\tTraining loss %e training accuracy %5.4f test loss %e test accuracy %5.4f'%(train_loss,train_acc,test_loss,test_acc))
print('\tTime for the epoch',time_epoch)
history_test_accuracy.append(test_acc)
history_test_loss.append(test_loss)
history_train_accuracy.append(train_acc)
history_train_loss.append(train_loss)
# +
plt.figure()
plt.plot(np.arange(len(history_train_accuracy))+1,np.array(history_train_accuracy))
plt.plot(np.arange(len(history_test_accuracy))+1,np.array(history_test_accuracy))
plt.legend(['training','test'])
plt.figure()
plt.plot(np.arange(len(history_train_loss))+1,np.array(history_train_loss))
plt.plot(np.arange(len(history_test_loss))+1,np.array(history_test_loss))
plt.legend(['training','test'])
# -
max(history_test_accuracy)
# +
#plt.figure()
#fix, axs = plt.subplots(4,8)
#print(axs[1][1])
#for i in range(4):
# for j in range(8):
# axs[i][j].imshow(np.transpose(batch_training[0][i,:,:,:].numpy(),[1,2,0]))
# axs[i][j].set_title('Predicted: '+classes[predicted[i]]+', truth: '+classes[batch_training[1][i]])
classes = {0: 'buildings', 1 : 'forest', 2 : 'glacier', 3 : 'mountain', 4 : 'sea', 5 : 'street'}
batch_training = next(iter(dataloader_train))
output = model(tn.reshape(batch_training[0],[-1,3]+N_shape+N_shape).to(device_name)).cpu().detach()
predicted = tn.max(output,1)[1]
for i in range(batch_training[0].shape[0]):
plt.figure()
plt.imshow(np.transpose(batch_training[0][i,:,:,:].numpy(),[1,2,0]))
plt.title('Predicted: '+classes[predicted.numpy()[i]]+' p=' +str(np.exp(output[i,predicted.numpy()[i]].numpy()))+ ', truth: '+classes[batch_training[1].numpy()[i]])
# +
classes = {0: 'buildings', 1 : 'forest', 2 : 'glacier', 3 : 'mountain', 4 : 'sea', 5 : 'street'}
batch_training = next(iter(dataloader_test))
output = model(tn.reshape(batch_training[0],[-1,3]+N_shape+N_shape).to(device_name)).cpu().detach()
predicted = tn.max(output,1)[1]
for i in range(batch_training[0].shape[0]):
plt.figure()
plt.imshow(np.transpose(batch_training[0][i,:,:,:].numpy(),[1,2,0]))
plt.title('Predicted: '+classes[predicted.numpy()[i]]+' p=' +str(np.exp(output[i,predicted.numpy()[i]].numpy()))+ ', truth: '+classes[batch_training[1].numpy()[i]])
# + tags=[]
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py3.6Conda5.1GPU
# language: python
# name: py3.6gpu
# ---
import torch
torch.cuda.empty_cache()
import numpy as np
matrixA = np.array([[11, 3, 10, 3], [20, 1, 0, 1]])
matrixB = np.array([[12, 1, 10], [7, 4, 0], [4, 5, 2], [5, 2, 10]])
matrixA.dot(matrixB)
# +
import glob
import json
import re
import pickle
import pandas as pd
import spacy
import string
import numpy as np
import itertools
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process.kernels import RBF
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.nn.functional as F
import torchtext.vocab as vocab
nlp = spacy.load('en')
# Set the random seed manually for reproducibility.
torch.manual_seed(1234)
# +
def save(nameFile, toSave):
pickle_out = open(nameFile+".pickle", "wb")
pickle.dump(toSave, pickle_out)
pickle_out.close()
def load(nameFile):
pickle_in = open(nameFile+".pickle", "rb")
return pickle.load(pickle_in)
def main_iter_files():
print('Import data')
output_path = '/people/maurice/ownCloud/outputGentle/'
wordsTimeds = []
for file in sorted(glob.glob(output_path + '*')):
print(file)
if 'wordsTimed' in file and 'pickle' not in file:
wordsTimed = pd.read_csv(file) # load(file)
print(wordsTimed.head())
wordsTimeds.append(wordsTimed)
# wordsTimedGby = wordsTimed.groupby('idSentence')
'''sentenceTimed = wordsTimedGby.apply(lambda x: x.count())
sentenceTimed[1] = sentenceTimed.astype(np.float)/len(g)
print sentenceTimed'''
return wordsTimeds
# -
wts = main_iter_files()
string.punctuation
class sentenceTimed(object):
    def __init__(self, wt):
        self.reset(wt, 0)
    def reset(self, wt, i):
        self.speaker = wt.iloc[i].speaker
        self.sentence_courante = ''
        if i > 0:
            self.sentence_courante += wt.iloc[i].word
    def modif_per_word(self, wt, i):
        self.sentence_courante += ' ' + wt.iloc[i].word
    def modif_per_sentence(self, wt, df, i):
        self.add_sentence_informations_to_dataframe(df)
        self.reset(wt, i)
    def add_sentence_informations_to_dataframe(self, df):
        df.loc[len(df)] = [self.speaker, self.sentence_courante]
# print(df)
# Lent
punctuation_end_sentence = ['!', '.', '?']
sentencesTimeds = []
print(len(wts))
for i, wt in enumerate(wts):
print(i)
sentencesTimed = pd.DataFrame(columns=['speaker', 'sentence_courante'])
st = sentenceTimed(wt)
for word in range(len(wt)):
if word == 0:
st.modif_per_word(wt, word)
elif wt.iloc[word].word[0].isupper() and wt.iloc[word - 1].word in punctuation_end_sentence:
st.modif_per_sentence(wt, sentencesTimed, word)
else:
st.modif_per_word(wt, word)
st.add_sentence_informations_to_dataframe(sentencesTimed)
sentencesTimeds.append(sentencesTimed)
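# The segmentation rule applied above (start a new sentence when a capitalized word follows `!`, `.`, or `?`) can be checked on a toy word list:

```python
punctuation_end_sentence = ['!', '.', '?']
words = ['hello', 'world', '.', 'How', 'are', 'you', '?', 'Fine']

sentences, current = [], [words[0]]
for prev, word in zip(words, words[1:]):
    if word[0].isupper() and prev in punctuation_end_sentence:
        sentences.append(' '.join(current))
        current = [word]
    else:
        current.append(word)
sentences.append(' '.join(current))
print(sentences)  # ['hello world .', 'How are you ?', 'Fine']
```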
sentencesTimeds[0]
sentencesTimeds[0].iloc[1]
sentencesTimeds[0].iloc[1].speaker
sentencesTimeds[0].iloc[1].sentence_courante
data = pd.concat(sentencesTimeds)
data
data.speaker.values
data.sentence_courante.values
X = [s.lower().split() for s in data.sentence_courante.values]
Y = [s.lower() for s in data.speaker.values]
newY = np.zeros(len(Y)-1)
for i in range(1, len(Y)):
if Y[i] == Y[i-1]:
newY[i-1] = 0
else:
newY[i-1] = 1
Y = Variable(torch.from_numpy(newY))
X = [s.lower().split() for s in data.sentence_courante.values]
Y = [s.lower() for s in data.speaker.values]
X = [s.lower().split() for s in data.sentence_courante.values]
Y = [s.lower() for s in data.speaker.values]
newY = []
for i in range(1, len(Y)):
if Y[i] == Y[i-1]:
newY.append([0])
else:
newY.append([1])
Y = Variable(torch.FloatTensor(newY))
print(X[0:2])
print(Y)
print(Y.shape)
# +
#X_train, X_dt, Y_train, Y_dt = train_test_split(X, Y, test_size=0.2, shuffle=False)
print(len(X), len(Y))
def create_Y(Y):
newY = []
for i in range(1, len(Y)):
if Y[i] == Y[i-1]:
newY.append([0])
else:
newY.append([1])
Y_new = Variable(torch.FloatTensor(newY))
return Y_new
threshold_train_dev = int(len(X)*0.8)
threshold_dev_test = threshold_train_dev + int(len(X)*0.1)
X_train = X[:threshold_train_dev]
Y_train = create_Y(Y[:threshold_train_dev])
X_dev = X[threshold_train_dev:threshold_dev_test]
Y_dev = create_Y(Y[threshold_train_dev:threshold_dev_test])
X_test = X[threshold_dev_test:]
Y_test = create_Y(Y[threshold_dev_test:])
print(len(X_train), len(Y_train), len(X_dev), len(Y_dev), len(X_test), len(Y_test))
# +
#embed = nn.Embedding(num_embeddings, embedding_dim)
# pretrained_weight is a numpy matrix of shape (num_embeddings, embedding_dim)
#embed.weight.data.copy_(torch.from_numpy(pretrained_weight))
#we = vocab.GloVe(name='6B', dim=100)
we = vocab.FastText(language='en')
'''
pretrained_aliases = {
"charngram.100d": partial(CharNGram),
"fasttext.en.300d": partial(FastText, language="en"),
"fasttext.simple.300d": partial(FastText, language="simple"),
"glove.42B.300d": partial(GloVe, name="42B", dim="300"),
"glove.840B.300d": partial(GloVe, name="840B", dim="300"),
"glove.twitter.27B.25d": partial(GloVe, name="twitter.27B", dim="25"),
"glove.twitter.27B.50d": partial(GloVe, name="twitter.27B", dim="50"),
"glove.twitter.27B.100d": partial(GloVe, name="twitter.27B", dim="100"),
"glove.twitter.27B.200d": partial(GloVe, name="twitter.27B", dim="200"),
"glove.6B.50d": partial(GloVe, name="6B", dim="50"),
"glove.6B.100d": partial(GloVe, name="6B", dim="100"),
"glove.6B.200d": partial(GloVe, name="6B", dim="200"),
"glove.6B.300d": partial(GloVe, name="6B", dim="300")
}
'''
def get_word_vector(word):
return we.vectors[we.stoi[word]]
def closest(vec, n=2):
"""
Find the closest words for a given vector
"""
all_dists = [(w, torch.dist(vec, get_word_vector(w))) for w in we.itos]
return sorted(all_dists, key=lambda t: t[1])[:n]
def print_tuples(tuples):
for tuple in tuples:
print('(%.4f) %s' % (tuple[1], tuple[0]))
# In the form w1 : w2 :: w3 : ?
def analogy(w1, w2, w3, n=5, filter_given=True):
print('\n[%s : %s :: %s : ?]' % (w1, w2, w3))
# w2 - w1 + w3 = w4
closest_words = closest(get_word_vector(w2) - get_word_vector(w1) + get_word_vector(w3))
# Optionally filter out given words
if filter_given:
closest_words = [t for t in closest_words if t[0] not in [w1, w2, w3]]
print_tuples(closest_words[:n])
# -
# Slow
print(we.dim,'\n')
print_tuples(closest(get_word_vector('google')))
analogy('king', 'man', 'queen')
print(type(we.stoi),'\n')
a = 0
for k, v in we.stoi.items():
if a < 10:
print(k,v)
a += 1
else:
break
print('iazkcnzejjqsdchj' in we.stoi)
#print(get_word_vector('iazkcnzejjqsdchj'))
print('iazkcnzejjqsdchj' in we.stoi)
nunf = set()
for s in X:
    for w in s:
if w not in we.stoi:
nunf.add(w)
print(len(nunf), nunf)
# +
'''weX = []
for i,s in enumerate(X):
weX.append([])
for w in s.split():
if w in we.stoi:
weX[i].append(get_word_vector(w))'''
to_del = set()
for s in X:
    for w in s:
        if w not in we.stoi:
            to_del.add(w)
X = [[w for w in s if w not in to_del] for s in X]
# -
print(X[0:2])
words_set = set()
for s in X:
words_set = words_set.union(set(s))
print(len(words_set))
words_set
we_idx = [we.stoi[w] for w in list(words_set)]
we_idx.insert(0, 0) #for padding we need to initialize one row of vector weights; row 0 lines up with the '<PAD>' index
# +
# map sentences to vocab
idw = 1
vocab_X = {'<PAD>':0}
for w in list(words_set):
vocab_X[w] = idw
idw += 1
# fancy nested list comprehension
X_num = [[vocab_X[word] for word in sentence] for sentence in X]
print(len(vocab_X))
print(X[0:2])
# +
# get the length of each sentence
X_lengths = [len(sentence) for sentence in X_num]
print(X_lengths[0:2])
# create an empty matrix with padding tokens
padding_idx = vocab_X['<PAD>']
longest_sent = max(X_lengths)
print(longest_sent)
batch_size = len(X_num)
padded_X = np.ones((batch_size, longest_sent)) * padding_idx
# copy over the actual sequences
for i, x_len in enumerate(X_lengths):
sequence = X_num[i]
padded_X[i, 0:x_len] = sequence[:x_len]
# padded_X looks like:
print(padded_X[0:2][:])
# -
inp = Variable(torch.LongTensor([[3,4,6], [1,3,5]]))
#inp = Variable(torch.LongTensor([3,4]))
print(inp)
embed(inp)
list(range(5,2,-1))
# +
#https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e
def get_last_modulo(nb, mod):
if nb%mod == 0:
return nb
else:
return int(nb/mod) * mod
### First model: sum of the embeddings per sentence
taille_embedding = len(get_word_vector(X[0][0]))
taille_context = 3
idx_set_words = dict(zip(list(words_set), range(1,len(words_set)+1)))
idx_set_words['<PAD>'] = 0
padding_idx = idx_set_words['<PAD>']
vectorized_seqs = [[idx_set_words[w] for w in s]for s in X]
#vectorized_seqs = padded_X
print(vectorized_seqs[0:2])
embed = nn.Embedding(num_embeddings=len(words_set)+1, embedding_dim=taille_embedding, padding_idx=padding_idx)
embed.weight.data.copy_(we.vectors[we_idx])
embed.weight.requires_grad = False
# get the length of each seq in your batch
seq_lengths = torch.LongTensor(list(map(len, vectorized_seqs)))
print('length', seq_lengths)
# dump padding everywhere, and place seqs on the left.
# NOTE: you only need a tensor as big as your longest sequence
seq_tensor = Variable(torch.zeros((len(vectorized_seqs), seq_lengths.max()))).long()
for idx, (seq, seqlen) in enumerate(zip(vectorized_seqs, seq_lengths)):
seq_tensor[idx, :seqlen] = torch.LongTensor(seq)
print(seq_tensor[0:2])
print('nb sentences', len(vectorized_seqs))
# utils.rnn lets you give (B,L,D) tensors where B is the batch size, L is the max length, if you use batch_first=True
# Otherwise, give (L,B,D) tensors
seq_tensor = seq_tensor.transpose(0,1) # (B,L,D) -> (L,B,D)
# embed your sequences
seq_tensor = embed(seq_tensor)
# sum over L, all words per sentence
seq_tensor_sumed = torch.sum(seq_tensor, dim=0) #len(vectorized_seqs), taille_embedding
print('after sum', seq_tensor_sumed.shape)
seq_tensor_sumed = seq_tensor_sumed.view(len(vectorized_seqs), 1, taille_embedding)
#seq_tensor_sumed = seq_tensor_sumed.view(seq_tensor_sumed.shape[0], 1, taille_embedding)
#seq_tensor_sumed = seq_tensor_sumed.view(int(get_last_modulo(seq_tensor_sumed.shape[0], taille_context)/taille_context), taille_context, seq_tensor_sumed.shape[1])
print(seq_tensor_sumed.shape)
print('sum', seq_tensor_sumed)
bidirectional = False
input_size = taille_embedding
hidden_size = taille_embedding
num_layers = 3
nb_sentences = len(vectorized_seqs)
lstm_previous = torch.nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=False, bidirectional=bidirectional)
lstm_future = torch.nn.LSTM(input_size=input_size, hidden_size=hidden_size, num_layers=num_layers, batch_first=False, bidirectional=bidirectional)
tanh = nn.Tanh()
softmax = nn.Softmax()
#get mini-batch
for i in range(nb_sentences - (2*taille_context + 1)): #TODO split train/test/val
indices_previous = torch.tensor(list(range(i,i+taille_context+1)))
indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)))
input_previous_features = torch.index_select(seq_tensor_sumed, 0, indices_previous)
input_future_features = torch.index_select(seq_tensor_sumed, 0, indices_future)
seq_tensor_output_previous, (ht_previous, ct_previous) = lstm_previous(input_previous_features)
seq_tensor_output_future, (ht_future, ct_future) = lstm_future(input_future_features)
target = Y[i + taille_context]
seq_len = input_previous_features.shape[0]
batch = input_previous_features.shape[1]
#print(input_previous_features.shape) # 4 1 300
num_directions = 1
if bidirectional:
num_directions = 2
seq_tensor_output_sum = torch.cat((seq_tensor_output_previous.view(seq_len, batch, num_directions, hidden_size)[-1], seq_tensor_output_future.view(seq_len, batch, num_directions, hidden_size)[-1]), -1)
print(seq_tensor_output_sum[-1].shape, seq_tensor_output_previous.shape, seq_tensor_output_previous.view(seq_len, batch, num_directions, hidden_size).shape, seq_tensor_output_previous[-1].shape, seq_tensor_output_future[-1].shape, seq_tensor_output_sum.shape, target)
#torch.Size([1, 600]) torch.Size([4, 1, 300]) torch.Size([4, 1, 1, 300]) torch.Size([1, 300]) torch.Size([1, 300]) torch.Size([1, 1, 600]) tensor(1., dtype=torch.float64)
#TODO Attention mechanism
seq_tensor_output_sum = seq_tensor_output_sum.view(1,2*num_directions*taille_embedding) #2 is because we concatenate previous and future embeddings
print(seq_tensor_output_sum.shape)
W = torch.rand(1, num_directions)
print(W.shape, seq_tensor_output_sum.shape, W.mm(seq_tensor_output_sum).shape)
print(tanh(W.mm(seq_tensor_output_sum)).shape)
u = torch.rand(1, 2*taille_embedding)
print(softmax(u.mm(tanh(W.mm(seq_tensor_output_sum)).t())))
break
# throw them through your LSTM (remember to give batch_first=True here if you packed with it)
#lstm = torch.nn.LSTM(input_size=seq_tensor_sumed.shape[2], hidden_size=seq_tensor_sumed.shape[2], num_layers=1, batch_first=False, bidirectional=False)
#print(lstm)
#seq_tensor_output, (ht, ct) = lstm(seq_tensor_sumed)
#print('output', seq_tensor_output)
# Or if you just want the final hidden state?
#print(ht[-1])
# -
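The index arithmetic in the mini-batch loop above is easy to get wrong: the previous window ends at the target sentence, and the future window is read backwards toward it. A minimal sketch (`context_windows` is an illustrative helper, not notebook code):

```python
def context_windows(i, taille_context):
    """Previous/future sentence indices around a target, matching the loop above."""
    previous = list(range(i, i + taille_context + 1))            # ends at the target
    future = list(range(i + 2 * taille_context + 1,
                        i + taille_context, -1))                 # read backwards toward it
    target = i + taille_context
    return previous, future, target

prev, fut, tgt = context_windows(0, 3)
print(prev, fut, tgt)  # [0, 1, 2, 3] [7, 6, 5, 4] 3
```

The last valid `i` is therefore `nb_sentences - (2*taille_context + 2)`, which is why the loop bound must subtract `2*taille_context + 1`.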
class HierarchicalBiLSTM_on_sentence_embedding(nn.Module):
def __init__(self, embedding_dim, hidden_dim, targset_size, num_layers = 3, bidirectional = False):
super(HierarchicalBiLSTM_on_sentence_embedding, self).__init__()
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.bidirectional = bidirectional
self.num_directions = 1
if self.bidirectional:
self.num_directions = 2
self.num_layers = num_layers
# The LSTM takes word embeddings as inputs, and outputs hidden states
# with dimensionality hidden_dim.
self.lstm_previous = torch.nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=False, bidirectional=bidirectional)
self.lstm_future = torch.nn.LSTM(input_size=embedding_dim, hidden_size=hidden_dim, num_layers=num_layers, batch_first=False, bidirectional=bidirectional)
# The linear layer that maps from hidden state space to tag space
self.hidden2tag = nn.Linear(2*self.num_directions*hidden_dim, targset_size) # 2 because we concatenate both LSTM outputs
self.hidden = self.init_hidden()
def init_hidden(self):
# Before we've done anything, we dont have any hidden state.
# Refer to the Pytorch documentation to see exactly
# why they have this dimensionality.
# The axes semantics are (num_layers, minibatch_size, hidden_dim)
return (torch.zeros(self.num_layers, 1, self.hidden_dim),
torch.zeros(self.num_layers, 1, self.hidden_dim))
def forward(self, input_previous_sentences, input_future_sentences):
seq_tensor_output_previous, self.hidden = self.lstm_previous(input_previous_sentences, self.hidden)
seq_tensor_output_future, self.hidden = self.lstm_future(input_future_sentences, self.hidden)
seq_len = input_previous_sentences.shape[0]
batch = input_previous_sentences.shape[1]
seq_tensor_output_sum = torch.cat((seq_tensor_output_previous.view(seq_len, batch, self.num_directions, self.hidden_dim)[-1,:,:], seq_tensor_output_future.view(seq_len, batch, self.num_directions, self.hidden_dim)[-1,:,:]), -1)
#TODO Attention mechanism
seq_tensor_output_sum = seq_tensor_output_sum.view(batch,2*self.num_directions*self.hidden_dim) #2 is because we concatenate previous and future embeddings
#lstm_out, self.hidden = self.lstm(embeds.view(len(sentence), 1, -1), self.hidden)
tag_space = self.hidden2tag(seq_tensor_output_sum)
tag_space = tag_space[0]
prediction = torch.sigmoid(tag_space)
return prediction
# +
#https://gist.github.com/Tushar-N/dfca335e370a2bc3bc79876e6270099e
import torch.optim as optim
### First model: sum of word embeddings per sentence
taille_embedding = len(get_word_vector(X[0][0]))
taille_context = 3
bidirectional = False
num_layers = 3
nb_epoch = 5
targset_size = 1
idx_set_words = dict(zip(list(words_set), range(1,len(words_set)+1)))
idx_set_words['<PAD>'] = 0
padding_idx = idx_set_words['<PAD>']
vectorized_seqs = [[idx_set_words[w] for w in s]for s in X]
#vectorized_seqs = padded_X
print(vectorized_seqs[0:2])
we_idx = [0] # for padding we need to initialize one row of vector weights
we_idx += [we.stoi[w] for w in list(words_set)]
embed = nn.Embedding(num_embeddings=len(words_set)+1, embedding_dim=taille_embedding, padding_idx=padding_idx)
embed.weight.data.copy_(we.vectors[we_idx])
embed.weight.requires_grad = False
# get the length of each seq in your batch
seq_lengths = torch.LongTensor(list(map(len, vectorized_seqs)))
print('length', seq_lengths)
# dump padding everywhere, and place seqs on the left.
# NOTE: you only need a tensor as big as your longest sequence
seq_tensor = Variable(torch.zeros((len(vectorized_seqs), seq_lengths.max()))).long()
for idx, (seq, seqlen) in enumerate(zip(vectorized_seqs, seq_lengths)):
seq_tensor[idx, :seqlen] = torch.LongTensor(seq)
print(seq_tensor[0:2])
print('nb sentences', len(vectorized_seqs))
# utils.rnn lets you give (B,L,D) tensors where B is the batch size, L is the max length, if you use batch_first=True
# Otherwise, give (L,B,D) tensors
seq_tensor = seq_tensor.transpose(0,1) # (B,L,D) -> (L,B,D)
print('size seq_tensor before', seq_tensor.shape)
# embed your sequences
seq_tensor = embed(seq_tensor)
print('size seq_tensor embed', seq_tensor.shape)
# sum over L, all words per sentence
seq_tensor_sumed = torch.sum(seq_tensor, dim=0) #len(vectorized_seqs), taille_embedding
print('after sum', seq_tensor_sumed.shape)
seq_tensor_sumed = seq_tensor_sumed.view(len(vectorized_seqs), 1, taille_embedding)
#seq_tensor_sumed = seq_tensor_sumed.view(seq_tensor_sumed.shape[0], 1, taille_embedding)
#seq_tensor_sumed = seq_tensor_sumed.view(int(get_last_modulo(seq_tensor_sumed.shape[0], taille_context)/taille_context), taille_context, seq_tensor_sumed.shape[1])
print(seq_tensor_sumed.shape)
print('sum', seq_tensor_sumed)
bidirectional = False
input_size = taille_embedding
hidden_size = taille_embedding
num_layers = 3
nb_sentences = len(vectorized_seqs)
model = HierarchicalBiLSTM_on_sentence_embedding(taille_embedding, taille_embedding, targset_size, num_layers, bidirectional)
loss_function = nn.BCELoss() # alternative: nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
print('start of training')
for epoch in range(nb_epoch):
#get mini-batch
#Data loader
for i in range(nb_sentences - (2*taille_context + 1)): #TODO split train/test/val
# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
model.zero_grad()
# Also, we need to clear out the hidden state of the LSTM,
# detaching it from its history on the last instance.
model.hidden = model.init_hidden()
# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
indices_previous = torch.tensor(list(range(i,i+taille_context+1)))
indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)))
input_previous_features = torch.index_select(seq_tensor_sumed, 0, indices_previous)
input_future_features = torch.index_select(seq_tensor_sumed, 0, indices_future)
# Step 3. Run our forward pass.
prediction = model(input_previous_features, input_future_features) # forward returns the sigmoid prediction for the target sentence
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = loss_function(prediction, Y[i+taille_context]) #targets)
loss.backward()
optimizer.step()
#break
#break
print('end of training')
# See what the scores are after training
with torch.no_grad():
i = 0
indices_previous = torch.tensor(list(range(i,i+taille_context+1)))
indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)))
input_previous_features = torch.index_select(seq_tensor_sumed, 0, indices_previous)
input_future_features = torch.index_select(seq_tensor_sumed, 0, indices_future)
prediction = model(input_previous_features, input_future_features)
# Compare the model's prediction for the target sentence with its reference label.
print(prediction, Y[i+taille_context])
# -
with torch.no_grad():
i = 0
indices_previous = torch.tensor(list(range(i,i+taille_context+1)))
indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)))
input_previous_features = torch.index_select(seq_tensor_sumed, 0, indices_previous)
input_future_features = torch.index_select(seq_tensor_sumed, 0, indices_future)
prediction = model(input_previous_features, input_future_features)
print(prediction, Y[i+taille_context])
X[0][:3]
set(X[0][:3])
v = torch.Tensor([[[1,2,3,4],[5,6,7,8],[9,10,11,12]],[[13,14,15,16],[17,18,19,20],[21,22,23,24]]]) #(1,2,3,4)
v = v.unsqueeze(0)
print(v.view(1,-1,4).shape[-1])
print(v)
print(v.view(1,-1,4))
print(torch.randperm(v.size()[3]))
m=torch.Tensor([[1,2],[4,5]])
m.shape
m[0,:]
seq_tensor.shape
#indices_previous = torch.tensor(list(range(i,i+taille_context+1)))
#indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)))
print(indices_previous, indices_future, torch.tensor(list(range(i,i+2*taille_context+2))))
sentences_emb = torch.index_select(seq_tensor, 1, torch.tensor(list(range(i,i+2*taille_context+2))))
sentences_emb.shape
rnn = nn.LSTM(sentences_emb.shape[2], sentences_emb.shape[2], num_layers=3)
h0 = torch.randn(3, sentences_emb.shape[1], sentences_emb.shape[2])
c0 = torch.randn(3, sentences_emb.shape[1], sentences_emb.shape[2])
output, (hn, cn) = rnn(sentences_emb, (h0, c0))
output.shape
output[-1,:,:].unsqueeze(1).shape
output[-1,:,:].unsqueeze(1)[int(sentences_emb.shape[1]/2):,:,:].shape
seq_tensor_sumed.shape
input_previous_features.shape
print(seq_tensor_output_previous.shape, seq_tensor_output_future.shape)
similarity = torch.cat((seq_tensor_output_previous[-1,:,:], seq_tensor_output_future[0,:,:]), 1)
print(similarity.shape)
attn = nn.Linear(similarity.shape[1],10)
tanh = nn.Tanh()
similarity_ = tanh(attn(similarity)).transpose(0,1)
print(similarity_.shape)
#seq_tensor_output_previous[-1,:,:].matmul(seq_tensor_output_future[0,:,:])
#alpha = nn.Linear(similarity_,1)
u = torch.empty(1, similarity_.shape[0]).uniform_(0, 1)
alpha = u.matmul(similarity_)
print(alpha.shape)
print(alpha)
seq_tensor_output_previous[-1,:,:].repeat(seq_tensor_output_future.shape[0],1,1)
print(seq_tensor_output_previous.shape, seq_tensor_output_future.shape)
similarity = torch.cat((seq_tensor_output_previous[-1,:,:].repeat(seq_tensor_output_future.shape[0],1,1), seq_tensor_output_future[:,:,:]), 2)
print(similarity.shape)
attn = nn.Linear(similarity.shape[2],10)
tanh = nn.Tanh()
similarity_ = tanh(attn(similarity))
print(similarity_.shape)
#seq_tensor_output_previous[-1,:,:].matmul(seq_tensor_output_future[0,:,:])
alpha = nn.Linear(similarity_.shape[2],1)
alpha_ = alpha(similarity_)
print(alpha_.shape)
print(alpha_)
softmax = nn.Softmax(dim=0)
alpha_ = softmax(alpha_).transpose(0,1).transpose(1,2)#.unsqueeze(0)
print(alpha_.shape)
print(alpha_)
print(seq_tensor_output_future.transpose(0,1).shape)
m_p = torch.matmul(alpha_, seq_tensor_output_future.transpose(0,1)).transpose(0,1) #(32,1,4) * (32,4,4096)
print(m_p.shape)
print(m_p)
seq_tensor_output_sum = torch.cat((seq_tensor_output_previous[-1,:,:], seq_tensor_output_future[-1,:,:], m_p, m_p), -1)
print(seq_tensor_output_sum.shape)
tensor1 = torch.ones(10, 3, 4)
tensor2 = torch.randn(4)
print(tensor2)
torch.matmul(tensor1, tensor2)#.size()
# +
import csv
import glob
punctuations_end_sentence = ['.', '?', '!']
X = []
Y = []
for f in sorted(glob.glob('/vol/work2/galmant/transcripts/*')):
with open(f, newline='') as csvfile:
reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
X_ = []
Y_ = []
for row in reader:
#print(row[1], ' '.join(row[2:]))
#print(row[2:])
sentence = row[2]
old_word = row[2]
for word in row[3:]:
#print(word, sentence, old_word)
#print(word, any(punctuation in old_word for punctuation in punctuations_end_sentence), word[0].isupper())
if any(punctuation in old_word for punctuation in punctuations_end_sentence) and word and word[0].isupper():
X_.append(sentence)
Y_.append(row[1])
sentence = word
else:
sentence += ' '+word
old_word = word
X_.append(sentence)
Y_.append(row[1])
X.append(X_)
Y.append(Y_)
break
print(X[0])
print(Y[0])
# -
string.punctuation
for x,y in zip(X,Y):
print(x,y)
# +
print(torch.__version__)
embed = nn.Embedding(len(words_set), len(get_word_vector(X[0][0])))
embed.weight.data.copy_(we.vectors[we_idx])
embed.weight.requires_grad = False
idx_set_words = dict(zip(list(words_set), range(len(words_set))))
vectorized_seqs = [[idx_set_words[w] for w in s]for s in X]
print(vectorized_seqs[0:2])
# get the length of each seq in your batch
seq_lengths = torch.LongTensor(list(map(len, vectorized_seqs)))
print(seq_lengths)
# dump padding everywhere, and place seqs on the left.
# NOTE: you only need a tensor as big as your longest sequence
seq_tensor = Variable(torch.zeros((len(vectorized_seqs), seq_lengths.max()))).long()
for idx, (seq, seqlen) in enumerate(zip(vectorized_seqs, seq_lengths)):
seq_tensor[idx, :seqlen] = torch.LongTensor(seq)
# SORT YOUR TENSORS BY LENGTH!
seq_lengths, perm_idx = seq_lengths.sort(0, descending=True)
seq_tensor = seq_tensor[perm_idx]
# utils.rnn lets you give (B,L,D) tensors where B is the batch size, L is the max length, if you use batch_first=True
# Otherwise, give (L,B,D) tensors
seq_tensor = seq_tensor.transpose(0,1) # (B,L,D) -> (L,B,D)
print(seq_tensor)
# embed your sequences
seq_tensor = embed(seq_tensor)
print(seq_tensor)
# pack them up nicely
packed_input = pack_padded_sequence(seq_tensor, seq_lengths.cpu().numpy())
print(packed_input)
# throw them through your LSTM (remember to give batch_first=True here if you packed with it)
packed_output, (ht, ct) = lstm(packed_input)
print(packed_output)
# unpack your output if required
output, _ = pad_packed_sequence(packed_output)
print(output)
# Or if you just want the final hidden state?
print(ht[-1])
# +
def flatten(l):
return list(itertools.chain.from_iterable(l))
seqs = ['ghatmasala','nicela','c-pakodas']
# make <pad> idx 0
vocab = ['<pad>'] + sorted(list(set(flatten(seqs))))
print(vocab)
vectorized_seqs = [[vocab.index(tok) for tok in seq]for seq in seqs]
# get the length of each seq in your batch
seq_lengths = map(len, vectorized_seqs)
print(list(seq_lengths))
# -
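`vocab.index(tok)` in the cell above scans the whole list on every call; for larger vocabularies the usual trick (used elsewhere in this notebook via `idx_set_words`) is a dict built once. A minimal sketch on the same toy sequences, with `tok2idx` as an illustrative name:

```python
seqs = ['ghatmasala', 'nicela', 'c-pakodas']

def flatten(l):
    return [x for sub in l for x in sub]

# make <pad> idx 0, then build the token -> index map once: O(1) per lookup
vocab = ['<pad>'] + sorted(set(flatten(seqs)))
tok2idx = {tok: i for i, tok in enumerate(vocab)}
vectorized = [[tok2idx[tok] for tok in seq] for seq in seqs]
assert vectorized == [[vocab.index(t) for t in seq] for seq in seqs]  # same result as list.index
print([len(v) for v in vectorized])  # [10, 6, 9]
```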
class EncoderRNN(nn.Module):
def __init__(self, input_size, hidden_size, n_layers=1):
super(EncoderRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.n_layers = n_layers
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size, n_layers)
def forward(self, word_inputs, hidden):
# Note: we run this all at once (over the whole input sequence)
seq_len = len(word_inputs)
embedded = self.embedding(word_inputs).view(seq_len, 1, -1)
output, hidden = self.gru(embedded, hidden)
return output, hidden
def init_hidden(self):
hidden = Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
if USE_CUDA: hidden = hidden.cuda()
return hidden
# +
#https://www.kdnuggets.com/2018/06/taming-lstms-variable-sized-mini-batches-pytorch.html
#https://github.com/EdGENetworks/attention-networks-for-classification/blob/master/model.py
#https://github.com/EdGENetworks/attention-networks-for-classification/blob/master/attention_model_validation_experiments.ipynb
#https://explosion.ai/blog/deep-learning-formula-nlp
#https://github.com/koustuvsinha/hred-py/blob/master/hred_pytorch.py
# +
import os
os.chdir('HierarchicalRNN/')
os.getcwd()
# -
import data as dt
import model as md
import pickle
import numpy as np
import torch
# +
import glob
import json
import re
import pickle
import pandas as pd
import spacy
import string
import numpy as np
import itertools
import csv
from joblib import Parallel, delayed
from sklearn.model_selection import train_test_split
import torchtext.vocab as vocab
nlp = spacy.load('en')
def save(nameFile, toSave):
pickle_out = open(nameFile+".pickle", "wb")
pickle.dump(toSave, pickle_out)
pickle_out.close()
def load(nameFile):
pickle_in = open(nameFile+".pickle", "rb")
return pickle.load(pickle_in)
def get_word_vector(word):
return we.vectors[we.stoi[word]]
def closest(vec, n=2):#10):
"""
Find the closest words for a given vector
"""
all_dists = [(w, torch.dist(vec, get_word_vector(w))) for w in we.itos]
return sorted(all_dists, key=lambda t: t[1])[:n]
def print_tuples(tuples):
for tuple in tuples:
print('(%.4f) %s' % (tuple[1], tuple[0]))
# In the form w1 : w2 :: w3 : ?
def analogy(w1, w2, w3, n=5, filter_given=True):
print('\n[%s : %s :: %s : ?]' % (w1, w2, w3))
# w2 - w1 + w3 = w4
closest_words = closest(get_word_vector(w2) - get_word_vector(w1) + get_word_vector(w3))
# Optionally filter out given words
if filter_given:
closest_words = [t for t in closest_words if t[0] not in [w1, w2, w3]]
print_tuples(closest_words[:n])
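The `analogy` helper implements the classic `w2 - w1 + w3` vector arithmetic. The idea can be checked on toy 2-d vectors where the analogy holds by construction (all names and values below are illustrative, not from the FastText vectors):

```python
# toy embeddings where "king - man + woman = queen" exactly
emb = {'man': (1.0, 0.0), 'woman': (1.0, 1.0),
       'king': (5.0, 0.0), 'queen': (5.0, 1.0)}

def add_sub(w1, w2, w3):
    # w2 - w1 + w3, the same arithmetic as in analogy() above
    (a, b), (c, d), (e, f) = emb[w1], emb[w2], emb[w3]
    return (c - a + e, d - b + f)

def closest_word(vec):
    # nearest neighbour by squared Euclidean distance, like closest() above
    return min(emb, key=lambda w: sum((x - y) ** 2 for x, y in zip(emb[w], vec)))

print(closest_word(add_sub('man', 'king', 'woman')))  # queen
```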
def load_data(path_transcripts='/vol/work2/galmant/transcripts/', type_sentence_embedding='lstm'):
punctuations_end_sentence = ['.', '?', '!']
we = None
if type_sentence_embedding == 'lstm':
we = vocab.FastText(language='en')
'''
pretrained_aliases = {
"charngram.100d": partial(CharNGram),
"fasttext.en.300d": partial(FastText, language="en"),
"fasttext.simple.300d": partial(FastText, language="simple"),
"glove.42B.300d": partial(GloVe, name="42B", dim="300"),
"glove.840B.300d": partial(GloVe, name="840B", dim="300"),
"glove.twitter.27B.25d": partial(GloVe, name="twitter.27B", dim="25"),
"glove.twitter.27B.50d": partial(GloVe, name="twitter.27B", dim="50"),
"glove.twitter.27B.100d": partial(GloVe, name="twitter.27B", dim="100"),
"glove.twitter.27B.200d": partial(GloVe, name="twitter.27B", dim="200"),
"glove.6B.50d": partial(GloVe, name="6B", dim="50"),
"glove.6B.100d": partial(GloVe, name="6B", dim="100"),
"glove.6B.200d": partial(GloVe, name="6B", dim="200"),
"glove.6B.300d": partial(GloVe, name="6B", dim="300")
}
'''
X_all = []
Y_all = []
words_set = set()
for f in sorted(glob.glob(path_transcripts+'*')):
with open(f, newline='') as csvfile:
reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
X_ = []
Y_ = []
for row in reader:
sentence = row[2]
old_word = row[2]
for word in row[3:]:
if any(punctuation in old_word for punctuation in punctuations_end_sentence) and word and word[0].isupper():
X_.append(sentence)
Y_.append(row[1])
sentence = word
else:
sentence += ' '+word
old_word = word
if sentence and row[1]:
X_.append(sentence)
Y_.append(row[1])
Y = [s.lower() for s in Y_]
if type_sentence_embedding == 'lstm':
X = [s.lower().split() for s in X_]
#Y = [s.lower() for s in Y_]
to_del = []
for s in X:
for w in s:
if w not in we.stoi:
to_del.append(w)
X = [[w for w in s if w not in to_del] for s in X]
for words_per_sentence in X:
words_set = words_set.union(set(words_per_sentence))
else:
X = X_
Y = Y_
if len(X)>0 and len(Y)>0:
X_all.append(X)
Y_all.append(Y)
assert len(X) == len(Y)
threshold_train_dev = int(len(X_all)*0.8)
threshold_dev_test = threshold_train_dev + int(len(X_all)*0.1)
X_train = X_all[:threshold_train_dev]
Y_train = Y_all[:threshold_train_dev]
X_dev = X_all[threshold_train_dev:threshold_dev_test]
Y_dev = Y_all[threshold_train_dev:threshold_dev_test]
X_test = X_all[threshold_dev_test:]
Y_test = Y_all[threshold_dev_test:]
return X_train, Y_train, X_dev, Y_dev, X_test, Y_test, words_set, we
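`load_data` splits the episode list 80/10/10 by position. The threshold arithmetic can be sketched in isolation (`split_80_10_10` and the toy list are illustrative):

```python
def split_80_10_10(items):
    # same threshold computation as in load_data above
    t1 = int(len(items) * 0.8)
    t2 = t1 + int(len(items) * 0.1)
    return items[:t1], items[t1:t2], items[t2:]

train, dev, test = split_80_10_10(list(range(20)))
print(len(train), len(dev), len(test))  # 16 2 2
```

Note that `int()` truncation can leave the dev slice empty for very small collections (e.g. 9 items give splits of 7/0/2), which is worth guarding against.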
# +
device = 'cuda:0'
is_trained = False
type_sentence_embedding='infersent'
X_train, Y_train, X_dev, Y_dev, X_test, Y_test, words_set, we = load_data(type_sentence_embedding=type_sentence_embedding)
hidden_size = 300
batch_size=32
if type_sentence_embedding == 'lstm':
taille_embedding = len(we.vectors[we.stoi[X_train[0][0][0]]])
else:
taille_embedding = 4096
# +
import torch
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
import torch.nn as nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
import torch.nn.functional as F
import torchtext.vocab as vocab
import numpy as np
import matplotlib.pyplot as plt
import nltk
import math
import itertools
from random import shuffle
from models import InferSent
# Set the random seed manually for reproducibility.
torch.manual_seed(1234)
V = 2
MODEL_PATH = '/vol/work3/maurice/encoder/infersent%s.pickle' % V
params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
'pool_type': 'max', 'dpout_model': 0.0, 'version': V}
infersent = InferSent(params_model)
infersent.load_state_dict(torch.load(MODEL_PATH))
W2V_PATH = '/vol/work3/maurice/dataset/fastText/crawl-300d-2M-subword.vec'
infersent.set_w2v_path(W2V_PATH)
# +
from random import sample
def split_by_context(X, Y, taille_context, batch_size, device='cpu'): #(8..,1,4096)
list_tensors_X = []
list_tensors_Y = []
for i in range(X.shape[0] - (2*taille_context + 1)): #NOT 1 NOW 0 FOR THE NUMBER OF SENTENCES
list_tensors_X.append(torch.index_select(X, 0, torch.tensor(list(range(i,i+2*(taille_context+1))), device=device))) #(8,1,4096)
list_tensors_Y.append(torch.index_select(Y, 0, torch.tensor([taille_context], device=device))) #(1)
tensor_split_X = torch.stack(list_tensors_X).transpose(0,1).view(2*(taille_context+1),-1,X.shape[-1]) #(n-8,8,1,4096) -> (8,n,1,4096) -> (8,n,4096)
tensor_split_Y = torch.stack(list_tensors_Y).view(1,-1)#.squeeze(0) #.transpose(0,1) #(n,1) -> (1,n)
minis_batch_X = {}
minis_batch_Y = {}
nb_batches = int(tensor_split_X.shape[1]/batch_size)
for i in range(nb_batches):
minis_batch_X[i] = tensor_split_X[:,i*batch_size:(i+1)*batch_size,:]
minis_batch_Y[i] = tensor_split_Y[:,i*batch_size:(i+1)*batch_size]
minis_batch_X[nb_batches] = tensor_split_X[:,nb_batches*batch_size:,:] #n/32 tensors of (8,32,4096)
minis_batch_Y[nb_batches] = tensor_split_Y[:,nb_batches*batch_size:] #n/32 tensors of (1,32)
shuffle_ids = sample(list(range(nb_batches+1)), k=nb_batches+1)
return [minis_batch_X[i] for i in shuffle_ids], [minis_batch_Y[i] for i in shuffle_ids]
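The batching at the end of `split_by_context` (fixed-size chunks plus one remainder batch, then a shuffled batch order) can be sketched on a plain list; `chunk` is an illustrative helper, not notebook code:

```python
import random

def chunk(items, batch_size, seed=0):
    # fixed-size chunks; the last one holds the remainder
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    random.Random(seed).shuffle(batches)  # shuffle batch order, not items within a batch
    return batches

batches = chunk(list(range(10)), 4)
print(sorted(len(b) for b in batches))  # [2, 4, 4]
```

Shuffling whole batches (rather than individual examples) keeps each batch's sentences contiguous, which the sliding context windows require.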
# +
X_all=X_train
Y_all=Y_train
taille_context=3
bidirectional=False
num_layers=1
nb_epoch=5
targset_size=1
idx_set_words = None
embed = None
dim = 0
model = md.HierarchicalBiLSTM_on_sentence_embedding(taille_embedding, hidden_size, targset_size, num_layers, bidirectional=bidirectional, device=device, type_sentence_embedding=type_sentence_embedding)
model = model.to(device)
criterion = nn.BCEWithLogitsLoss(pos_weight=None) # alternatives tried: a pos_weight tensor, nn.BCELoss, nn.NLLLoss
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, nesterov=True)
#scheduler = StepLR(optimizer, step_size=math.ceil(nb_epoch/5), gamma=0.2)
# Concatenate all the data into lists of pytorch tensors and create the mini-batches (8,32,4096) or (109,8*32,300)
season_episode = 0
pourcentages_majority_class = []
inputs_embeddings = []
outputs_refs = []
for X_,Y_ in zip(X_all,Y_all):
if season_episode == 2:
break
print('file',season_episode,'on',len(X_all))
infersent.build_vocab(X_, tokenize=True)
sentences_embeddings = infersent.encode(X_, tokenize=True) # sentence embeddings for the episode, shape (B,D)
sentences_embeddings = torch.from_numpy(sentences_embeddings).unsqueeze(1) # (B,D) -> (L,B,D)
sentences_embeddings = sentences_embeddings.to(device)
#sentences_embeddings: (8..,1,4096) or (109,8..,300); 8.. -> number of sentences, 8 -> context size, 109 -> max number of words per sentence, 300 or 4096 -> embedding size
Y, pourcentage_majority_class = md.create_Y(Y_, device) #(8..)
#TODO CURRENTLY ONLY FOR INFERSENT
inputs_embeddings_, outputs_refs_ = split_by_context(sentences_embeddings, Y, taille_context, batch_size, device=device) #(n/32,8,32,4096) and (n/32,1,32)
inputs_embeddings += inputs_embeddings_
outputs_refs += outputs_refs_
pourcentages_majority_class.append(pourcentage_majority_class)
season_episode += 1
print('mean percentage of majority class', sum(pourcentages_majority_class)/len(pourcentages_majority_class))
# +
import pickle
with open('inputs_embeddings.pickle', 'wb') as handle:
pickle.dump(inputs_embeddings, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('outputs_refs.pickle', 'wb') as handle:
pickle.dump(outputs_refs, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('inputs_embeddings.pickle', 'rb') as handle:
b = pickle.load(handle)
# +
import pathlib
device = 'cuda:0'
is_trained = False
pre_load_data = False
type_sentence_embedding='infersent'
subset_data = 'big_bang_theory_:'
path_save = '/vol/work3/maurice/HierarchicalRNN/'
'''
Data will be stored like this:
path_save/type_sentence_embedding/subset_data/models/pytorch_model_epoch_0.pth.tar
path_save/type_sentence_embedding/subset_data/pytorch_best_model_epoch_0.pth.tar
path_save/type_sentence_embedding/subset_data/pre_trained_features/train/inputs_embeddings_0.pickle
path_save/type_sentence_embedding/subset_data/pre_trained_features/train/outputs_refs_0.pickle
path_save/type_sentence_embedding/subset_data/pre_trained_features/dev/inputs_embeddings_0.pickle
path_save/type_sentence_embedding/subset_data/pre_trained_features/dev/outputs_refs_0.pickle
path_save/type_sentence_embedding/subset_data/pre_trained_features/test/inputs_embeddings_0.pickle
path_save/type_sentence_embedding/subset_data/pre_trained_features/test/outputs_refs_0.pickle
Where type_sentence_embedding is one of : lstm, infersent, ...
and subset_data is one of: big_bang_theory_:, big_bang_theory_season_1:3, big_bang_theory_:_game_of_thrones_::
'''
pathlib.Path(path_save+type_sentence_embedding+'/'+subset_data+'/models/').mkdir(parents=True, exist_ok=True)
pathlib.Path(path_save+type_sentence_embedding+'/'+subset_data+'/pre_trained_features/train/').mkdir(parents=True, exist_ok=True)
pathlib.Path(path_save+type_sentence_embedding+'/'+subset_data+'/pre_trained_features/dev/').mkdir(parents=True, exist_ok=True)
pathlib.Path(path_save+type_sentence_embedding+'/'+subset_data+'/pre_trained_features/test/').mkdir(parents=True, exist_ok=True)
# -
# ls /vol/work3/maurice/HierarchicalRNN/infersent/big_bang_theory_\:/models
sentences_emb=inputs_embeddings[0]
input_previous_sentences = sentences_emb[:int(sentences_emb.shape[0]/2),:,:]#(B,L,D) -> (4,32,4096) -> (L,B,D)
input_future_sentences = sentences_emb[int(sentences_emb.shape[0]/2):,:,:]#(B,L,D) -> (4,32,4096) -> (L,B,D)
print(input_previous_sentences.shape)
# Launch training
print('start of training')
losses = []
ids_iter=list(range(len(inputs_embeddings)))
for epoch in range(nb_epoch):
print('epoch',epoch,'on',nb_epoch)
shuffle(ids_iter)
losses_ = []
#scheduler.step(epoch)
#for id_, it_ in enumerate(iter_):
for id_, it_ in enumerate(ids_iter):
print(id_,'on',len(ids_iter),'epoch',epoch,'on',nb_epoch)
sentences_emb = inputs_embeddings[it_] #(8,32,4096)
ref = outputs_refs[it_] #(1,32)
# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
#print(i, nb_sentences, taille_context)
#model.zero_grad()
# zero the parameter gradients
optimizer.zero_grad()
# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
# Also, we need to clear out the hidden state of the LSTM,
# detaching it from its history on the last instance.
if type_sentence_embedding == 'lstm':
model.hidden_sentences = model.init_hidden(batch_size=sentences_emb.shape[1])
model.hidden = model.init_hidden(batch_size=sentences_emb.shape[1])
# Step 3. Run our forward pass.
prediction = model(sentences_emb) #(32,1) #(1,32,4096) or (109,8*32,300) ?
prediction = torch.squeeze(prediction, 1)
ref = torch.squeeze(ref, 0)
#print(prediction.shape, ref.shape)
# Step 4. Compute the loss, gradients, and update the parameters by
# calling optimizer.step()
loss = criterion(prediction, ref) #targets)
losses_.append(loss.item())
loss.backward()
optimizer.step()
#break
print(sum(losses_)/len(losses_))
losses.append(losses_)
#model.get_prediction(X_, Y_, idx_set_words, embed, model, taille_context=taille_context, device=device)
#break
torch.save(model.state_dict(), '/people/maurice/HierarchicalRNN/last_model.pth.tar')
#break
print('end of training')
a = torch.ones(4,32,300)
b = torch.ones(32,4,1)
aa = a.transpose(0,1).transpose(1,2)
print(a.shape, aa.shape, b.shape)
m = torch.matmul(aa, b)
print(m.shape) #torch.Size([32, 300, 1])
mm = m.transpose(0,1).transpose(0,2)
print(mm.shape) #torch.Size([1, 32, 300])
m.transpose(0,1).transpose(0,2).shape #(300,32,1) -> (32,300,1) -> torch.Size([1, 300, 32])
#(1,32,300)
print(torch.unsqueeze(a[-1,:,:],0).shape)
sum_ = torch.cat((torch.unsqueeze(a[-1,:,:],0), torch.unsqueeze(a[-1,:,:],0), mm, mm), -1)
print('SUM', sum_.shape)
import model
idx_set_words, embed, model_trained, losses = model.launch_train(X_train, Y_train, words_set, we, taille_embedding, batch_size=32, hidden_size=hidden_size, taille_context=3, bidirectional=False, num_layers=1, nb_epoch=100, targset_size=1, device=device, is_trained=is_trained, type_sentence_embedding=type_sentence_embedding)
if is_trained:
model_trained.load_state_dict(torch.load('/people/maurice/HierarchicalRNN/last_model.pth.tar'))
else:
np.save('losses.npy', np.asarray(losses))
model.get_predictions(X_train, Y_train, idx_set_words, embed, model_trained, batch_size=32, taille_context=3, device=device, is_eval=True, type_sentence_embedding=type_sentence_embedding)
a = [[1,2,3], [4,5,6]]
import pickle
import numpy as np
b = np.asarray(a)
np.save('b.npy', b)
a = range(10161 - (2*3 + 1))
print(len(a), a[-1])
b.flatten()
# +
def get_prediction(X, Y, idx_set_words, embed, model, taille_embedding, taille_context=3, device='cpu', is_eval=False):
if is_eval:
model.eval()
    vectorized_seqs = [[idx_set_words[w] for w in s] for s in X]
words_embeddings = create_X(X, vectorized_seqs, device)
sentences_embeddings = sentence_embeddings_by_sum(words_embeddings, embed, vectorized_seqs, taille_embedding)
Y = create_Y(Y, device)
nb_sentences = len(vectorized_seqs)
#print('nb_sentences', nb_sentences)
# See what the scores are after training
with torch.no_grad():
error_global = 0
iter_sentences = range(nb_sentences - (2*taille_context + 1))
for i in iter_sentences:
indices_previous = torch.tensor(list(range(i,i+taille_context+1)), device=device)
indices_future = torch.tensor(list(range(i+2*taille_context+1,i+taille_context,-1)), device=device)
input_previous_features = torch.index_select(sentences_embeddings, 0, indices_previous)
input_future_features = torch.index_select(sentences_embeddings, 0, indices_future)
prediction = model(input_previous_features, input_future_features).item()
ref = Y[i+taille_context].item()
#print(prediction, ref)
if abs(ref - prediction) >= 0.5:
error_global += 1
error_global /= len(iter_sentences)
print('error_global', error_global)
X_train, Y_train, X_dev, Y_dev, X_test, Y_test, words_set, we = data.load_data()
taille_embedding = len(we.vectors[we.stoi[X_train[0][0]]])
idx_set_words, embed, model_trained, losses = model.launch_train(X_train, Y_train, words_set, we, taille_embedding, taille_context=3, bidirectional=False, num_layers=3, nb_epoch=100, targset_size=1, device=device, is_trained=is_trained)
model_trained.load_state_dict(torch.load('/people/maurice/HierarchicalRNN/last_model.pth.tar'))
model.get_prediction(X_train, Y_train, idx_set_words, embed, model_trained, taille_embedding, taille_context=3, device=device, is_eval=True)
# -
import numpy as np
losses = np.load('HierarchicalRNN/losses.npy')
for i in range(losses.shape[0]):
print(losses[i,0])
# +
def count_change(Y):
Y_positive = [y for y in Y if y == 1]
return len(Y_positive)
print(len(Y), count_change(Y), len(Y) - count_change(Y), count_change(Y)/len(Y), 1-count_change(Y)/len(Y))
# -
Y_positives = np.load('HierarchicalRNN/Y_positives.npy')
Y_negatives = np.load('HierarchicalRNN/Y_negatives.npy')
import matplotlib.pyplot as plt
plt.plot(Y_positives)
plt.plot(Y_negatives)
plt.show()
plt.hist(Y_positives, bins='auto')
plt.hist(Y_negatives, bins='auto')
plt.show()
from pyannote.metrics import binary_classification
# +
import glob
import json
import re
import pickle
import pandas as pd
import spacy
import string
import numpy as np
import itertools
import csv
from joblib import Parallel, delayed
import torch
from torch.autograd import Variable
from random import shuffle
from random import sample
from models import InferSent
from sklearn.model_selection import train_test_split
import sys
import torchtext.vocab as vocab
nlp = spacy.load('en')
def load_data(config, path_transcripts='/vol/work2/galmant/transcripts/'):
type_sentence_embedding = config['type_sentence_embedding']
dev_set_list = config['dev_set_list']
test_set_list = config['test_set_list']
punctuations_end_sentence = ['.', '?', '!']
punctuations = string.punctuation #['!','(',')',',','-','.','/',':',';','<','=','>','?','[','\\',']','^','_','{','|','}','~'] #!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
we = None
if type_sentence_embedding == 'lstm':
we = vocab.FastText(language='en')
X_train = []
Y_train = []
X_dev = []
Y_dev = []
X_test = []
Y_test = []
words_set = set()
for file in sorted(glob.glob(path_transcripts+'*')):
with open(file, newline='') as csvfile:
reader = csv.reader(csvfile, delimiter=' ', quotechar='|')
X_ = []
Y_ = []
for row in reader:
#print(row)
sentence = row[2]
old_word = row[2]
for word in row[3:]:
if any(punctuation in old_word for punctuation in punctuations_end_sentence) and word and word[0].isupper():
sentence = sentence.strip()
n = 0
for i,s in enumerate(sentence):
if s in punctuations:
sentence_ = list(sentence)
sentence_.insert(i + n + 1,' ')
sentence_.insert(i + n,' ')
sentence = ''.join(sentence_)
n += 2
#print(sentence)
X_.append(sentence)
Y_.append(row[1])
sentence = word
else:
sentence += ' '+word
old_word = word
if sentence and row[1]:
sentence = sentence.strip()
n = 0
for i,s in enumerate(sentence):
if s in punctuations:
sentence_ = list(sentence)
sentence_.insert(i + n + 1,' ')
sentence_.insert(i + n,' ')
sentence = ''.join(sentence_)
n += 2
#print(sentence)
X_.append(sentence)
Y_.append(row[1])
Y = [s.lower() for s in Y_]
if type_sentence_embedding == 'lstm':
X = [s.lower().split() for s in X_]
#print(X)
#Y = [s.lower() for s in Y_]
to_del = []
for s in X:
#print(s)
for w in s:
#print(w)
if w not in we.stoi:
to_del.append(w)
print('to del', w)
X = [[w.strip() for w in s if w not in to_del] for s in X]
for words_per_sentence in X:
words_set = words_set.union(set(words_per_sentence))
else:
X = X_
Y = Y#_
if len(X)>0 and len(Y)>0:
names_episode = file.split('/')[-1]
names_season = '.'.join(names_episode.split('.')[:-1])
                names_serie = names_episode.split('.')[0]
if names_episode in dev_set_list or names_season in dev_set_list or names_serie in dev_set_list:
X_dev.append(X)
Y_dev.append(Y)
elif names_episode in test_set_list or names_season in test_set_list or names_serie in test_set_list:
X_test.append(X)
Y_test.append(Y)
else:
X_train.append(X)
Y_train.append(Y)
assert len(X) == len(Y)
#print(X)
break
return X_train, Y_train, X_dev, Y_dev, X_test, Y_test, words_set, we
config = {}
config['dev_set_list']=['TheBigBangTheory.Season02']
config['test_set_list']=['TheBigBangTheory.Season01']
config['type_sentence_embedding']='lstm'#'infersent'
print('Load corpus dataset')
X_train, Y_train, X_dev, Y_dev, X_test, Y_test, words_set, we = load_data(config)
# -
import datetime
x = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S").replace(' ','_')
print(x)
a = ['BBT.S1', '*.S1']
for a_ in a:
print(a_)
if '*' in a_:
print('True')
break
| RNNConsecutiveSentences.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + raw_mimetype="text/restructuredtext" active=""
# .. _nb_version:
# -
# # Versions
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_4_2:
# -
# #### 0.4.2
#
# - New Test Problems Suites (Constrained): DAS-CMOP and MW (contributed by cyrilpic)
# - New Operators for Permutations: OrderCrossover and InversionMutation and a usage to optimize routes for the TSP and Flowshop problem (contributed by Peng-YM)
#
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_4_1:
# -
# #### 0.4.1
#
# - New Feature: Riesz s-Energy Method to generate a well-spaced point-set on the unit simplex (reference directions) of arbitrary size.
# - New Algorithm: An implementation of Hooke and Jeeves Pattern Search (well-known single-objective algorithm)
# - New Documentation: We have re-arranged the documentation and explain now the minimize interface in more detail.
# - New Feature: The problem can be parallelized by directly providing a starmapping callable (Contribution by <NAME>).
# - Bugfix: MultiLayerReferenceDirectionFactory did not work because the scaling was disabled.
#
#
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_4_0:
# -
# #### 0.4.0 [[Documentation](https://www.egr.msu.edu/coinlab/blankjul/pymoo-0.4.0-doc.zip)]
#
# - New Algorithm: CMA-ES (Implementation published by the Author)
# - New Algorithm: Biased-Random Key Genetic Algorithm (BRKGA)
# - New Test Problems: WFG
# - New Termination Criterion: Stop an Algorithm based on Time
# - New Termination Criterion: Objective Space Tolerance for Multi-objective Problems
# - New Display: Easily modify the Printout in each Generation
# - New Callback: Based on a class now to allow to store data in the object.
# - New Visualization: Videos can be recorded to follow the algorithm's progress.
# - Bugfix: NDScatter Plot
# - Bugfix: Hypervolume Calculations (Vendor Library)
#
#
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_3_2:
# -
# #### 0.3.2 [[Documentation](https://www.egr.msu.edu/coinlab/blankjul/pymoo-0.3.2-doc.zip)]
#
# - New Algorithm: Nelder Mead with box constraint handling in the design space
# - New Performance indicator: Karush Kuhn Tucker Proximity Measure (KKTPM)
# - Added Tutorial: Equality constraint handling through customized repair
# - Added Tutorial: Subset selection through GAs
# - Added Tutorial: How to use custom variables
# - Bugfix: No pf given for problem, no feasible solutions found
#
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_3_1:
# -
# #### 0.3.1 [[Documentation](https://www.egr.msu.edu/coinlab/blankjul/pymoo-0.3.1-doc.zip)]
#
# - Merging pymop into pymoo - all test problems are included
# - Improved Getting Started Guide
# - Added Visualization
# - Added Decision Making
# - Added GD+ and IGD+
# - New Termination Criteria "x_tol" and "f_tol"
# - Added Mixed Variable Operators and Tutorial
# - Refactored Float to Integer Operators
# - Fixed NSGA-III Normalization Variable Swap
# - Fixed casting issue with latest NumPy version for integer operators
# - Removed the dependency of Cython for installation (.c files are delivered now)
#
#
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_3_0:
# -
# #### 0.3.0
#
# - New documentation and global interface
# - New crossovers: Point, HUX
# - Improved version of DE
# - New Factory Methods
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_2_2:
# -
# #### 0.2.2
#
# - Several improvements in the code structure
# - Make the cython support optional
# - Modifications for pymop 0.2.3
# + raw_mimetype="text/restructuredtext" active=""
# .. _version_0_2_1:
# -
# #### 0.2.1
#
# - First official release providing NSGA2, NSGA3 and RNSGA3
| doc/source/versions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Analysis of rat working-memory electrode data
# This notebook describes statistical analysis of intracranial EEG data collected from the brains of 8 rats while they performed an alternating T-maze task. 60 trials of data were collected from each rat. The first 30 were performed (?with/without?) drug, while the remaining 30 were performed (?with/without?). Classic single-unit analysis of the neural responses showed no differences in either PFC or dvStriatum neurons with/without drug, yet (1) rat performance was better with the drug and (2) methodological control ensured that the main site of action for the drug was in PFC. A central question is: How did performance improve if individual neuron behavior is not detectably different with the drug?
#
# One hypothesis is that the drug affects a distributed neural population code, rather than affecting the behavior of individual neurons. To test this hypothesis, we will apply multivariate pattern classifiers to intracranial voltage potential data collected at 250 Hz, to see if it is possible to predict the experimental condition (drug/no drug) from the time-series of these data.
#
# There are several potential approaches, but this notebook documents a simple initial analysis. From data collected from a single PFC electrode in each animal we will use regularized logistic regression, as implemented in the glmnet package, to predict drug condition for each animal individually.
# ## Libraries
library(glmnet)
# ## Custom scripts
# The following custom scripts should be in the same working directory as this notebook.
trim.nas <- dget("trimnas.r") #Removes columns of data containing NAs
get.hoerr <- dget("get_hoerr.r") #Computes hold-out error for a model across n folds
plot.hoerr <- dget("plot_hoerr.r") #Plots hold-out error data for all animals across n folds
# ## Load data
# +
f <- paste("../../data/r", 1:8, ".csv", sep="") #Vector of file names || Might need to modify this for Windows
ratlist <- list(0) #Initialize list to contain data
for(i1 in c(1:8)) ratlist[[i1]] <- trim.nas(read.csv(f[i1], header = F))
# -
# Raw data are now stored in a list, with each list element containing a matrix in which rows are trials and columns are points in time sampled every 4 ms. NOTE that data were recorded over differing windows of time for different rats; consequently, the number of columns in each matrix is different:
dim(ratlist[[1]])
dim(ratlist[[2]])
# Finally, we create a vector containing binary labels for the conditions we wish to discriminate (drug/no drug):
y <- c(rep(0, times = 30), rep(1, times = 30))
# ## Model fitting
# To fit a single model we use the cv.glmnet function from glmnet, which fits a glm with elastic net regularization. It takes an additional parameter alpha, which controls the mixing of L1 and L2 norms in the regularizer. The algorithm then fits a series of glms with different weights on the regularization parameter, using cross-validation to find the best-performing model. For a single animal this looks like this:
m <- cv.glmnet(ratlist[[1]], y, family="binomial", alpha = 0.5)
plot(m)
# To see how well the model fits the training data, you can use the predict function. This will return a vector of predictions, one for each training item, but not in the original binary [0,1] code; instead, the predictions are real-valued. To convert them to the original code, recode all positive numbers as 1 and all negatives as 0. The result can then be compared to the true vector of labels:
p <- as.numeric(predict(m, ratlist[[1]]) > 0)
sum(p==y)/length(y)
# So the model does well on the training set. This is obviously not that interesting! Instead we want to fit a model on a subset of the data, then test it on held-out items to estimate the true model performance. With such a small data set, we probably want to do this multiple times, using a different held-out set each time. That is what the function get.hoerr loaded above does:
tmp <- get.hoerr(ratlist[[1]], y, pho=0.1, a=.5)
tmp
# Here pho is _proportion held out_. It determines what proportion of data will be used for the hold-out set in each fold, and hence the number of folds _n_. I often use 10%. The script then fits _n_ models, each with a different hold-out set, and returns prediction accuracy for each hold-out set. The mean hold-out error is the mean of the elements in the returned vector.
mean(tmp)
# **NOTE** that you have to specify the mixing parameter alpha (denoted _a_ in the script). Different mixing parameters will yield different results. For a complete analysis we should search many values of alpha in the inner cross-validation loop before selecting parameters and evaluating on the hold-out set.
# It is worth noting that, while rat 1's data allow for excellent prediction, this is not true of all animals. Here is rat 2:
tmp <- get.hoerr(ratlist[[2]], y, pho=0.1, a=.5)
tmp
mean(tmp)
# But you can see that performance improves with different mixing values. Alpha=0 is pure ridge regression---it takes longer for glmnet to fit because coefficients never go to zero. But for rat2 it produces better results:
tmp <- get.hoerr(ratlist[[2]], y, pho=0.1, a=0)
tmp
mean(tmp)
# ## Estimating hoerr for all animals using ridge regression
# NOTE: This takes a while to run since ridge is so slow
rat.hoerr <- matrix(0, 8, 10) #Initialize matrix to hold hold-out error data for each animal
for(i1 in c(1:8)) rat.hoerr[i1,] <- get.hoerr(ratlist[[i1]], y, pho=.1, a=0)
plot.hoerr(rat.hoerr)
| analysis/R/Rat ritalin one electrode.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Computing intersections with polygons
#
# <img align="right" src="https://anitagraser.github.io/movingpandas/pics/movingpandas.png">
#
# [](https://mybinder.org/v2/gh/anitagraser/movingpandas-examples/main?filepath=1-tutorials/5-intersecting-with-polygons.ipynb)
#
# Clipping and intersection functions can be used to extract trajectory segments that are located within an area of interest polygon.
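# The geometric core of clipping can be sketched with plain shapely (toy coordinates, not the
# Geolife data used below): the clipped segment is simply the intersection of the trajectory's
# geometry with the area-of-interest polygon.

```python
from shapely.geometry import LineString, Polygon

# Toy "trajectory" geometry and area-of-interest polygon (made-up coordinates)
line = LineString([(0, 0), (2, 2)])
area = Polygon([(0.5, 0), (2, 0), (2, 2), (0.5, 2)])

# The clipped segment is the part of the line that lies inside the polygon
clipped = line.intersection(area)
print(clipped.geom_type, clipped.length)
```

# MovingPandas' `clip` builds on this but keeps the time dimension, so the extracted segments
# remain trajectories rather than bare geometries.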
# +
import pandas as pd
import geopandas as gpd
from geopandas import GeoDataFrame, read_file
from shapely.geometry import Point, LineString, Polygon
from datetime import datetime, timedelta
import movingpandas as mpd
import warnings
warnings.filterwarnings('ignore')
print(f'MovingPandas version {mpd.__version__}')
# -
gdf = read_file('../data/geolife_small.gpkg')
traj_collection = mpd.TrajectoryCollection(gdf, 'trajectory_id', t='t')
# ## Clipping a Trajectory
help(mpd.Trajectory.clip)
# +
xmin, xmax, ymin, ymax = 116.365035,116.3702945,39.904675,39.907728
polygon = Polygon([(xmin,ymin), (xmin,ymax), (xmax,ymax), (xmax,ymin), (xmin,ymin)])
polygon_gdf = GeoDataFrame(pd.DataFrame([{'geometry':polygon, 'id':1}]), crs=31256)
my_traj = traj_collection.trajectories[2]
intersections = my_traj.clip(polygon)
print("Found {} intersections".format(len(intersections)))
# -
ax = my_traj.plot()
polygon_gdf.plot(ax=ax, color='lightgray')
intersections.plot(ax=ax, color='red', linewidth=5)
# ## Clipping a TrajectoryCollection
# Alternatively, using **TrajectoryCollection**:
clipped = traj_collection.clip(polygon)
clipped
clipped.plot()
# ## Computing intersections
help(mpd.Trajectory.intersection)
polygon_feature = {
"geometry": polygon,
"properties": {'field1': 'abc'}
}
my_traj = traj_collection.trajectories[2]
intersections = my_traj.intersection(polygon_feature)
intersections
intersections.to_point_gdf()
| 1-tutorials/5-intersecting-with-polygons.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python venv3
# language: python
# name: envname
# ---
# # %load /home/biswadip/autoreload_jn.py
# %load_ext autoreload
# %autoreload 2
# +
# # %load sentiment\ analysis-attention.py
# #!/usr/bin/env python
# In[2]:
import os
# In[49]:
import numpy as np
import pandas as pd
# In[53]:
from models.text import CustomTokenizer
from models.model import word_list, max_sentences, maxlen
# -
tokenizer = CustomTokenizer(word_list = word_list)
# ## Load Data
# +
batch_s = 32
path = "negativeReviews/"
neg_reviews = []
for f in os.listdir(path):
file = os.path.join(path, f)
with open(file, "r") as fl:
neg_reviews.append(fl.read())
# -
path = "positiveReviews/"
pos_reviews = []
for f in os.listdir(path):
file = os.path.join(path, f)
with open(file, "r") as fl:
pos_reviews.append(fl.read())
# +
data = pd.DataFrame(
{"text":neg_reviews, "sentiment":0}
).append(pd.DataFrame(
{"text":pos_reviews, "sentiment":1}
))
print("Data Shape {}".format(data.shape))
# data.to_csv("tagged_data.csv")
print("Class Distribution {}".format(
data.sentiment.value_counts())
)
# -
data.text = data.text.apply(tokenizer.clean_text)
# +
data = data.reset_index()
data = data.filter(["text","sentiment"])
data = data.sample(frac=1)
# -
data = data[:5000]
# +
# =================================================
import tensorflow as tf
inp = tokenizer.doc_to_sequences(data.text.tolist())
# -
len(inp)
# ## Prepare Inputs
inputs = []
for doc in inp:
inputs.append(
tf.keras.preprocessing.sequence.pad_sequences(
doc, padding="post", value=0, maxlen=maxlen, dtype=None
)
)
# +
a = np.zeros((len(inputs),max_sentences,maxlen))
for row, x in zip(a, inputs):
    row[:min(len(x), max_sentences)] = x[:max_sentences]  # truncate documents longer than max_sentences
# +
# Define Model
from models.model import get_model, ModelCheckpoint, max_sentences, maxlen
from models.model import HierarchicalAttentionLayer, SequenceAttentionLayer, AdditiveAttention
from models.data import Sequence_generator
# from models.tuner import tuner
# -
tf
model = tf.keras.models.load_model(
"model/attention-wh_100_sh_400-10-0.87.h5",
custom_objects={"HierarchicalAttentionLayer": HierarchicalAttentionLayer}
)
# +
hirer_layer = model.layers[1]
# model.layers
sentence_layer = hirer_layer.sentence_layer
document_layer = hirer_layer.document_layer
doc_attention_layer = document_layer.attention_layer
def get_doc_attention_scores(sent):
"""Get sentences in attention weight order"""
# sent = data.text.values[1]
print("Original Document: \n\n {}".format(sent))
print("\n"*3)
sent_tokenized_index = tokenizer.doc_to_sequences([sent])
padded_input = tf.keras.preprocessing.sequence.pad_sequences(
sent_tokenized_index[0], padding="post", value=0, maxlen=maxlen, dtype=None
)
padded_input = [padded_input]
aa = np.zeros((1,max_sentences,maxlen))
    for row, x in zip(aa, padded_input):
        row[:min(len(x), max_sentences)] = x[:max_sentences]
print(model.predict(aa))
sent_output = sentence_layer(aa)
doc_lstm_hiden_states = document_layer.lstm(sent_output)
doc_atten_scores = doc_attention_layer.get_attention_scores(
[doc_lstm_hiden_states, doc_lstm_hiden_states]
)
scores = doc_atten_scores[0][0]
sentencess = tokenizer.tokenize_sentence(sent)
print(tf.sort(scores, direction="DESCENDING")[:len(sentencess)])
print("\n"*3)
print([sentencess[i] for i in tf.argsort(
scores, direction="DESCENDING") if i<len(sentencess) ])
print([len(sentencess[i]) for i in tf.argsort(
scores, direction="DESCENDING") if i<len(sentencess) ])
return [sentencess[i] for i in tf.argsort(
scores, direction="DESCENDING") if i<len(sentencess) ]
# +
hirer_layer = model.layers[1]
# model.layers
sentence_layer = hirer_layer.sentence_layer
document_layer = hirer_layer.document_layer
doc_attention_layer = document_layer.attention_layer
def get_sent_attention_scores(sent):
# sent = data.text.values[1]
# sent = "The Movie sucks!"
print("Original Sentence: \n\n {}".format(sent))
print("\n"*3)
sent_tokenized_index = tokenizer.texts_to_sequences([sent])
padded_input = tf.keras.preprocessing.sequence.pad_sequences(
sent_tokenized_index, padding="post", value=0, maxlen=maxlen, dtype=None
)
padded_input = [padded_input]
aa = np.zeros((1,max_sentences,maxlen))
    for row, x in zip(aa, padded_input):
        row[:min(len(x), max_sentences)] = x[:max_sentences]
print(model.predict(aa))
sent_output = sentence_layer(aa)
inputs = sentence_layer.embed(aa)
# putting every sentence in a single axis
inputs_mask = inputs._keras_mask
inputs = tf.reshape(
inputs, shape = (-1 ,maxlen ,sentence_layer.embedding_len)
)
mask = tf.reshape(
inputs_mask,
shape=(-1, maxlen)
)
lstm_out = sentence_layer.lstm(inputs, mask=mask)
lstm_mask = lstm_out._keras_mask
attention_scores = sentence_layer.attention_layer.get_attention_scores(
[lstm_out,lstm_out],
mask = [lstm_mask, lstm_mask]
)
word_indexes = tokenizer.texts_to_sequences([sent])[0]
for i in range(len(word_indexes)):
sent_attention_scores = attention_scores[i]
first_set_attention_scores = sent_attention_scores[0]
sorted_indexes = tf.argsort(
first_set_attention_scores,
direction="DESCENDING"
)
ranked_words = [tokenizer.index_word[word_indexes[wx]]
for wx in sorted_indexes if wx<len(word_indexes)]
print(ranked_words)
# -
from nltk import sent_tokenize
import re
# ## Sentence Ranking
# i=10
i+=1
text = data.text.values[i]
text = re.sub(r"<br /><br />", " ", text).strip()
sent_tokenize(text)
# +
# text = "The Movie Was awesome"
# text
# -
xxx = get_doc_attention_scores(text)
most_important_text = xxx[0]
# ### Most Important sentence
most_important_text
# ## Word Ranking + Interaction
# Word attention weights carry ranking as well as interaction information.
#
# Attention weights have a dimension of ```(t_steps, t_steps)```, where ```t_steps``` is the number of words in the sentence. The ```(i,j)``` element of the matrix measures how much interaction there is between the ```ith``` and ```jth``` words, irrespective of their distance. You can also take any single row, and the resulting ```t_steps```-sized vector produces a ranking of the words.
#
# It is also observed that the rows are almost identical **in the case of sentence attention vectors**. The sentence attention weights are therefore mainly useful for ranking. This is expected, as sentences interact only weakly: they more or less provide independent information.
#
# **In the case of word attention,** the attention weights vary across the rows (though not by much). This suggests that the attention weights not only create a ranking but also measure the level of interaction between words, producing more useful features.
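# The ranking idea can be sketched with a small made-up attention matrix (hypothetical weights,
# not the model's actual output):

```python
import numpy as np

# Hypothetical attention matrix for a 4-word sentence: entry (i, j) is how much
# word i attends to word j (each row sums to 1, as after a softmax)
words = ["the", "movie", "was", "awesome"]
attn = np.array([
    [0.10, 0.20, 0.10, 0.60],
    [0.15, 0.25, 0.10, 0.50],
    [0.10, 0.20, 0.15, 0.55],
    [0.10, 0.30, 0.10, 0.50],
])

# Any single row, argsorted in descending order, ranks the words for that query position
row = attn[0]
ranked = [words[j] for j in np.argsort(-row)]
print(ranked)  # most-attended word first
```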
get_sent_attention_scores(most_important_text)
| analyze attention.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Create a synthetic database
# ## Set options
# Import the sampler module
from os import path
from src.sampler import SinSampler
# Data options
save_repo = '/home/younesz/Desktop/SUM'
data_type = 'single'
epoch_length = 180
noise_level = 1.5 # times sinus variance
frequency_range = (20, 40) # nb of oscillations within the epoch
amplitude_range = (49, 50) # has to be narrower than frequency range
# Make database
sampler = SinSampler(data_type, epoch_length, noise_level, frequency_range, amplitude_range, save_repo)
# Generate an epoch
epoch = sampler.sample()[0]
# +
import matplotlib.pyplot as plt
# %matplotlib inline
def plot_epoch(epoch, color='b'):
plt.plot(epoch, color=color);
plt.xlabel('Time (hours)');
plt.ylabel('Amplitude (USD)');
# -
# Display data
plot_epoch(epoch)
# # Make the environment
# ## Set options
# State options
state_size = 32 # samples
time_difference = True
wavelet_channels = 0 # No wavelet transform
# Action options
action_range = [-1,1]
action_labels = ['short-all', 'short-half', 'hold', 'long-half', 'long-all']
""" Note: from the action labels, the environment figures out how many different actions the agent can take
and splits the action range accordingly - in this case the agent can select the actions 0 to 4 and the environment
interprets them as [-1, -0.5, 0, 0.5, 1]. If only one label is provided, the action is continuous by default """
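# The splitting described above can be sketched as follows (assuming an even spacing over the
# action range, which is what the note describes):

```python
import numpy as np

action_range = [-1, 1]
action_labels = ['short-all', 'short-half', 'hold', 'long-half', 'long-all']

# Evenly split the continuous action range, one value per discrete action label
action_values = np.linspace(action_range[0], action_range[1], len(action_labels))
print(action_values)  # [-1.  -0.5  0.   0.5  1. ]
```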
# Reward options
open_cost = 3 # percentage fee for each buy transaction
# Make the environment
from src.emulator import Market
env = Market(sampler, state_size, open_cost, time_difference=time_difference,
wavelet_channels=wavelet_channels, action_range=action_range, action_labels=action_labels)
# ## Interact with the environment
# Get state
state, valid_actions = env.reset()
# +
# Display current state
import numpy as np
series = (env.prices - np.mean(env.prices)) / np.std(env.prices)
tdiff = series[1:]-series[:-1]
plot_epoch(series);
plot_epoch(np.append(float('NaN'), state.T), color='r');
plt.plot([env.t, env.t], [-2, 2], '--k');
plt.xlim([0, 40]);
plt.ylim([-2.1,2.1]);
"""The red trace is the state observed by the agent, which is the differentiated and normalized price (blue trace)"""
# -
# Act on the environment
""" The agent observes the series up to the vertical dashed bar and picks actions with respect to its
belief about the price at the next time point"""
action = 4 # Agent selects: long-all (convert all dollars to stock)
next_state, reward, terminated, _ = env.step(action)
# Check return on investment at next step
print('The stock value of the agent moved by %.2f USD' %(reward) )
# # Train an agent
# ## Agent options
# Let's make a DQN agent
from generic.agents import Agent
agent_opt = {'type': 'DQN', # other choice is DDPG
'acSpace': len(action_labels), # how many actions to select from
'lr': 1e-3, # learning rate:
'nz': 'dummy', # no noise on the actions, other choice is OrnsteinUhlenbeck
'batch_size':16}
agent = Agent(agent_opt['type'],
state_size,
agent_opt['acSpace'],
layer_units=[80, 60],
noise_process=agent_opt['nz'],
learning_rate=agent_opt['lr'],
batch_size=agent_opt['batch_size'])
agent.p_model = agent.model
# Set visualizer
from src.visualizer import Visualizer
rootStore= open('dbloc.txt', 'r').readline().rstrip('\n')
fld_save = path.join(rootStore, 'results', sampler.title, agent_opt['type'],
str((env.window_state, sampler.window_episode, agent_opt['batch_size'], agent_opt['lr'],
agent.discount_factor, wavelet_channels, env.open_cost)))
visualizer= Visualizer(env.action_labels)
# Set the simulator
from src.simulators import Simulator
simulator = Simulator(agent, env, visualizer=visualizer, fld_save=fld_save)
simulator.agent_opt = agent_opt
# Train the agent
simulator.train(200, # Number of episodes for training
save_per_episode=1, # nb of log entries per episode
exploration_decay=0.99,
learning_rate=agent_opt['lr'],
exploration_min=0.05,
print_t=False,
exploration_init=0.8);
# Visualize testing performance
from IPython.display import Image
img_path = path.join(fld_save, 'in-sample testing', 'total_rewards.png')
Image(filename=img_path)
| scripts/notebooks/.ipynb_checkpoints/build_sin_database-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# The **slice operation** we saw with strings also work on lists. Remember that the first index is the starting point for the slice and the second number is one index past the end of the slice (up to but not including that element). Recall also that if you omit the first index (before the colon), the slice starts at the beginning of the sequence. If you omit the second index, the slice goes to the end of the sequence.
a_list = ['a', 'b', 'c', 'd', 'e', 'f']
print(a_list[1:3])
print(a_list[:4])
print(a_list[3:])
print(a_list[:])
b_list = [3, 67, "cat", [56, 57, "dog"], [ ], 3.14, False]
print(b_list[4:])
| basics/List Slices.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Design and test a Butterworth lowpass filter
#
# This document describes how to design a Butterworth lowpass filter with a cutoff frequency $\omega_c$ and compute the discrete coefficients so that it can be implemented on hardware.
# Packages and adjustments to the figures
from scipy import signal
import matplotlib.pyplot as plt
import numpy as np
import math
plt.rcParams["figure.figsize"] = 10,5
plt.rcParams["font.size"] = 16
plt.rcParams.update({"text.usetex": True,"font.family": "sans-serif","font.sans-serif": ["Helvetica"]})
# ## 1. Generate a test signal
#
# * A simple test signal $\boldsymbol{y} = \{ y_i \}$ is generated with a fixed sampling frequency using the function:
#
# $$y(t) = m_0 \sin(2\pi f_0 t) + m_1 \sin(2\pi f_1 t)$$
#
# * The power spectrum is plotted as the magnitude of the discrete fourier transform (DFT): $|\hat{\boldsymbol{y}}|$
# +
# Generate a signal
samplingFreq = 1000; # sampled at 1 kHz = 1000 samples / second
tlims = [0,1] # in seconds
signalFreq = [2,50]; # Cycles / second
signalMag = [1,0.2]; # magnitude of each sine
t = np.linspace(tlims[0],tlims[1],(tlims[1]-tlims[0])*samplingFreq)
y = signalMag[0]*np.sin(2*math.pi*signalFreq[0]*t) + signalMag[1]*np.sin(2*math.pi*signalFreq[1]*t)
# Compute the Fourier transform
yhat = np.fft.fft(y);
fcycles = np.fft.fftfreq(len(t),d=1.0/samplingFreq); # the frequencies in cycles/s
# Plot the signal
plt.figure()
plt.plot(t,y);
plt.ylabel("$y(t)$");
plt.xlabel("$t$ (s)");
plt.xlim([min(t),max(t)]);
# Plot the power spectrum
plt.figure()
plt.plot(fcycles,np.absolute(yhat));
plt.xlim([-100,100]);
plt.xlabel("$\omega$ (cycles/s)");
plt.ylabel("$|\hat{y}|$");
# -
# ## 2. Butterworth low-pass filter transfer function
#
# This document does not derive the formula for a Butterworth filter. Instead, it uses the standard form with DC gain $G=1$.
#
# * A cutoff frequency $\omega_c$ is selected
# * The Butterworth low-pass filter transfer function with $\omega_c = 1$ can be written as (see https://en.wikipedia.org/wiki/Butterworth_filter)
# $$H(s) = \frac{1}{\sum_{k=0}^{n} a_k s^k}$$
# where $n$ is the order of the filter. The coefficients are given by the recursion formula:
# $$a_{k+1} = \frac{\cos( k \gamma )}{\sin((k+1)\gamma)}\, a_k$$
# with $a_0 = 1$ and $\gamma = \frac{\pi}{2n}$.
#
# * Because the Butterworth polynomial is
# $$B_n(s) = \sum_{k=0}^n a_k s^k$$
# and we want to set a new cutoff frequency of $\omega_c$, substitute $s \to s/\omega_c$:
# $$B_n(s) = \sum_{k=0}^n a_k \left(\frac{s}{\omega_c}\right)^k = \sum_{k=0}^n \frac{a_k}{{\omega_c}^k} s^k$$
# for convenience set
# $$B_n(s) = \sum_{k=0}^n c_k s^k$$
# with $c_k = \frac{a_k}{{\omega_c}^k}$
#
# +
# Butterworth filter
wc = 2*np.pi*5; # cutoff frequency (rad/s)
n = 2; # Filter order
# Compute the Butterworth filter coefficents
a = np.zeros(n+1);
gamma = np.pi/(2.0*n);
a[0] = 1; # first coef is always 1
for k in range(0,n):
rfac = np.cos(k*gamma)/np.sin((k+1)*gamma);
a[k+1] = rfac*a[k]; # Other coefficients by recursion
print("Butterworth polynomial coefficients a_i: " + str(a))
# Adjust the cutoff frequency; store the coefficients with the
# highest power of s first (scipy's convention for transfer functions)
c = np.zeros(n+1);
for k in range(0,n+1):
    c[n-k] = a[k]/pow(wc,k)
print("Butterworth coefficients with frequency adjustment c_i: " + str(c))
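# As a quick sanity check (a sketch, not part of the original derivation), the
# hand-computed coefficients can be compared with `scipy.signal.butter` in analog
# mode; since scipy returns a monic denominator, ours should match after scaling
# by $\omega_c^n$.

```python
import numpy as np
from scipy import signal

n = 2
wc = 2*np.pi*5

# Recursion for the normalized Butterworth polynomial coefficients
a = np.zeros(n+1)
gamma = np.pi/(2.0*n)
a[0] = 1
for k in range(0, n):
    a[k+1] = a[k]*np.cos(k*gamma)/np.sin((k+1)*gamma)

# Frequency-adjusted coefficients, highest power of s first
c = np.array([a[k]/wc**k for k in range(n, -1, -1)])

# scipy's analog Butterworth: num = [wc**n], den = monic polynomial
b_ref, a_ref = signal.butter(n, wc, btype='low', analog=True)
print(np.allclose(c*wc**n, a_ref))  # True
```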
# +
# Low-pass filter
num = [1]; # transfer function numerator coefficients
den = c;   # transfer function denominator coefficients (cutoff already applied via c)
lowPass = signal.TransferFunction(num,den) # Transfer function
# Generate the bode plot
w = np.logspace( np.log10(min(signalFreq)*2*np.pi/10), np.log10(max(signalFreq)*2*np.pi*10), 500 )
w, mag, phase = signal.bode(lowPass,w)
# Magnitude plot
plt.figure()
plt.semilogx(w, mag)
for sf in signalFreq:
plt.semilogx([sf*2*np.pi,sf*2*np.pi],[min(mag),max(mag)],'k:')
plt.ylabel("Magnitude ($dB$)")
plt.xlim([min(w),max(w)])
plt.ylim([min(mag),max(mag)])
# Phase plot
plt.figure()
plt.semilogx(w, phase) # Bode phase plot
plt.ylabel(r"Phase ($^\circ$)")
plt.xlabel(r"$\omega$ (rad/s)")
plt.xlim([min(w),max(w)])
plt.show()
# -
# ## 3. Discrete transfer function
#
# To implement the low-pass filter on hardware, you need to compute the discrete transfer function using the signal's sampling frequency.
# * The time step is $\Delta t = 1/f_s$
# * Compute the discrete transfer function using Tustin's method by setting $s = \frac{2}{\Delta t} \left( \frac{1-z^{-1}}{1+z^{-1}} \right)$
# * Why do it yourself? The <code>to_discrete</code> method computes the bilinear transform (Tustin's method when $\alpha = 1/2$)
# Compute the discrete low pass with delta_t = 1/samplingFrequency
dt = 1.0/samplingFreq;
discreteLowPass = lowPass.to_discrete(dt,method='gbt',alpha=0.5)
print(discreteLowPass)
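# The same discretization can be obtained with `scipy.signal.bilinear`, which
# applies Tustin's transform directly to the analog coefficients. A
# self-contained cross-check (reusing the cutoff and sampling rate from above):

```python
import numpy as np
from scipy import signal

fs = 1000.0       # sampling frequency (Hz)
wc = 2*np.pi*5    # cutoff (rad/s)

# Analog 2nd-order Butterworth prototype
num, den = signal.butter(2, wc, btype='low', analog=True)

# Route 1: to_discrete with the generalized bilinear transform (Tustin at alpha=1/2)
dlp = signal.TransferFunction(num, den).to_discrete(1.0/fs, method='gbt', alpha=0.5)

# Route 2: scipy's dedicated bilinear-transform helper
bz, az = signal.bilinear(num, den, fs=fs)

# Both should describe the same discrete filter; compare frequency responses
z = np.exp(1j*0.3)
h1 = np.polyval(bz, z) / np.polyval(az, z)
h2 = np.polyval(dlp.num, z) / np.polyval(dlp.den, z)
print(np.allclose(h1, h2))
```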
# ## 4. Filter coefficients
#
# We want to find the filter coefficients for the discrete update:
# $$y[n] = a_1 y[n-1] + a_2 y[n-2] + ... + b_0 x[n] + b_1 x[n-1] + ...$$
#
# The coefficients can be taken directly from the discrete transfer function of the filter in the form:
# $$H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots}{1 - a_1 z^{-1} - a_2 z^{-2} - \ldots}$$
#
# (This is a result of taking the Z-transform which is not shown here)
#
# For comparison, a transfer function with coefficients
# <code>
# num = [b_0, b_1, b_2]
# den = [1, a_1, a_2]
# </code>
# corresponds to
# $$H(z) = \frac{b_0 z^2 + b_1 z + b_2}{z^2 + a_1 z + a_2}$$
# which is equivalent to
# $$H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}$$
# So you can take the coefficients in the same order that they are defined in the numerator and denominator of the transfer function object. The only difference is that the **coefficients in the denominator need a negative sign**.
#
# * To filter the signal, apply the filter using the discrete update
# * The filtered signal and filtered signal power spectrum are plotted alongside the unfiltered signal
# +
# The coefficients from the discrete form of the filter transfer function (but with a negative sign)
b = discreteLowPass.num;
a = -discreteLowPass.den;
print("Filter coefficients b_i: " + str(b))
print("Filter coefficients a_i: " + str(a[1:]))
# Filter the signal
Nb = len(b)
yfilt = np.zeros(len(y));
# Start at m = Nb-1 so that all delayed samples y[m-i] and yfilt[m-i] exist
for m in range(Nb-1,len(y)):
    yfilt[m] = b[0]*y[m]
    for i in range(1,Nb):
        yfilt[m] += a[i]*yfilt[m-i] + b[i]*y[m-i]
# View the result
# Plot the signal
plt.figure()
plt.plot(t,y);
plt.plot(t,yfilt);
plt.ylabel("$y(t)$")
plt.xlim([min(t),max(t)]);
# Generate Fourier transform
yfilthat = np.fft.fft(yfilt)
fcycles = np.fft.fftfreq(len(t),d=1.0/samplingFreq)
plt.figure()
plt.plot(fcycles,np.absolute(yhat));
plt.plot(fcycles,np.absolute(yfilthat));
plt.xlim([-100,100]);
plt.xlabel(r"$\omega$ (cycles/s)");
plt.ylabel(r"$|\hat{y}|$");
# -
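# The hand-rolled difference equation above can be cross-checked against
# `scipy.signal.lfilter`, which implements the same direct-form update. A
# self-contained sketch on a toy filter and random input (the coefficients here
# are illustrative, not the notebook's Butterworth values):

```python
import numpy as np
from scipy import signal

# Toy second-order discrete filter; a[0] must be 1 in this form
b = np.array([0.1, 0.2, 0.1])
a = np.array([1.0, -0.5, 0.25])

rng = np.random.default_rng(0)
x = rng.standard_normal(200)

# Manual direct-form update: y[m] = sum_i b[i] x[m-i] - sum_{i>=1} a[i] y[m-i]
y = np.zeros_like(x)
for m in range(len(x)):
    for i in range(len(b)):
        if m - i >= 0:
            y[m] += b[i]*x[m-i]
    for i in range(1, len(a)):
        if m - i >= 0:
            y[m] -= a[i]*y[m-i]

y_ref = signal.lfilter(b, a, x)
print(np.allclose(y, y_ref))  # True
```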
| docs/filter/ButterworthFilter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bm
# language: python
# name: python3
# ---
# #### SMILES input of a noncovalent isopentane--water complex, create Gaussian input files, CSEARCH with CREST
# ###### Step 1: CSEARCH conformational sampling (creates SDF files)
# +
import os, glob
from pathlib import Path
from aqme.csearch import csearch
from aqme.qprep import qprep
name = 'isopent-water-complex'
smi = 'CCC(C)C.O'
w_dir_main = Path(os.getcwd())
sdf_path = w_dir_main.joinpath(name)
# run CSEARCH conformational sampling, specifying:
# 1) Working directory (w_dir_main=w_dir_main)
# 2) PATH to create the new SDF files (destination=sdf_path)
# 3) SMILES string (smi=smi)
# 4) Name for the output SDF files (name=name)
# 5) CREST sampling (program='crest')
# 6) Additional CREST keywords (crest_keywords='--nci')
# 7) Include CREGEN post-analysis (cregen=True)
# 8) Additional CREGEN keywords (cregen_keywords='--ewin 3')
csearch(w_dir_main=w_dir_main,destination=sdf_path,smi=smi,
name=name,program='crest',crest_keywords='--nci',
cregen=True,cregen_keywords='--ewin 3')
# -
# ###### Step 2: Writing Gaussian input files with the sdf obtained from CSEARCH
# +
# set SDF filenames and directory where the new com files will be created
com_path = sdf_path.joinpath('com_files')
sdf_rdkit_files = glob.glob(f'{sdf_path}/*.sdf')
# run QPREP input files generator, with:
# 1) Working directory (w_dir_main=sdf_path)
# 2) PATH to create the new SDF files (destination=com_path)
# 3) Files to convert (files=file)
# 4) QM program for the input (program='gaussian')
# 5) Keyword line for the Gaussian inputs (qm_input='wb97xd/6-31+G* opt freq')
# 6) Memory to use in the calculations (mem='24GB')
# 7) Processors to use in the calcs (nprocs=8)
qprep(w_dir_main=sdf_path,destination=com_path,files=sdf_rdkit_files,program='gaussian',
qm_input='wb97xd/6-31+G* opt freq',mem='24GB',nprocs=8)
| Example_workflows/CSEARCH_CMIN_conformer_generation/CSEARCH_CREST_NCI_complex.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="vfE1E92iuW2R" colab_type="code"
# !pip install bert-serving-server
# !pip install bert-serving-client
# + id="zympqHoyvsg8" colab_type="code"
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve('https://github.com/naver/biobert-pretrained/releases/download/v1.0-pubmed-pmc/biobert_pubmed_pmc.tar.gz', 'BioBert.tar.gz')
# + id="lh2wgpetx_py" colab_type="code" colab={}
import os
# + id="ucatEknGyBoi" colab_type="code"
os.listdir()
# + id="NW6eIdpZyiLy" colab_type="code" colab={}
if not os.path.exists('BioBertFolder'):
os.makedirs('BioBertFolder')
# + id="Wa1N5tXtwGH7" colab_type="code" colab={}
import tarfile
tar = tarfile.open("BioBert.tar.gz")
tar.extractall(path='BioBertFolder/')
tar.close()
# + id="mdu9X0f-y12C" colab_type="code"
os.listdir()
# + id="xbB42ikZyWoK" colab_type="code"
os.listdir('BioBertFolder/pubmed_pmc_470k')
# + id="IS5Zvcutz8Ph" colab_type="code"
# !bert-serving-start -model_dir BioBertFolder/pubmed_pmc_470k/ -tuned_model_dir=BioBertFolder/pubmed_pmc_470k/ -ckpt_name=biobert_model.ckpt -num_worker=4 -max_seq_len=512
# + id="Df9Hye8TXQqG" colab_type="code" colab={}
| notebooks/BertService.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''detr'': conda)'
# name: python3
# ---
# ## Plot DETR log
import json
import torch
import matplotlib.pyplot as plt
from PIL import Image
import json
from tqdm import tqdm_notebook
from pathlib import Path
import torchvision
from notebook_utils import *
log_directory = Path('pretrained/radiogalaxy/2021-11-17_r50_150ep')
log_directory_list = [log_directory]
weight_path = log_directory / Path('checkpoint0149.pth')
CLASSES = ['No-Object', 'galaxy', 'source', 'sidelobe']
COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098], [0.929, 0.694, 0.125],
[0.494, 0.184, 0.556]]
CONFIDENCE_THR = 0.9
# +
fields_of_interest = (
'loss',
'mAP',
)
plot_logs(log_directory_list,
fields_of_interest)
# +
fields_of_interest = (
'loss_ce',
'loss_bbox',
'loss_giou',
)
plot_logs(log_directory_list,
fields_of_interest)
# +
fields_of_interest = (
'class_error',
'cardinality_error_unscaled',
)
plot_logs(log_directory_list,
fields_of_interest)
# -
# ### Load the trained model
# +
num_classes= len(CLASSES)
model = torch.hub.load('facebookresearch/detr',
'detr_resnet50',
pretrained=False,
num_classes=num_classes)
checkpoint = torch.load(weight_path,
map_location='cuda')
pretrained = torch.hub.load_state_dict_from_url(
'https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth', map_location='cuda', check_hash=True)
model.load_state_dict(checkpoint['model'],
strict=False)
model = model.cuda()
model.eval()
# -
out_box_file = log_directory / Path('rg-boxes.json')
no_pred = log_directory / Path('no_pred.txt')
if no_pred.exists():
no_pred.unlink()
# ## Visualization functions
test_dir = Path('data/radio-galaxy/test')
# +
def prepare_img(pil_img):
transforms = make_coco_transforms()
img, _ = transforms(pil_img, None)
img = img.unsqueeze(0)
img = img.cuda()
return img
def run_inference(img, original_size):
outputs = model(img)
img = img.squeeze(0)
labels, pred_boxes, confidence = format_output(outputs, CONFIDENCE_THR, 0)
if pred_boxes is not None:
bboxes_scaled = rescale_bboxes(pred_boxes, original_size)
else:
return None, None, None
return labels, bboxes_scaled, confidence
# -
# ### Run inference on dataset
# +
plt.ioff()
pred_folder_path = Path('predictions')
pred_folder_path.mkdir(exist_ok=True)
out_boxes = {}
batch_idx = 0
from tqdm import tqdm
# glob('*[.jpeg .png]') matched single characters, not extensions; collect both suffixes explicitly
image_paths = sorted(list(test_dir.glob('*.jpeg')) + list(test_dir.glob('*.png')))
for img_path in tqdm(image_paths):
pil_img = Image.open(img_path).convert("RGB")
img = prepare_img(pil_img)
# Takes tensor image to run inference on (rescaled image based on training sizes)
# and original image size
labels, boxes, confidence = run_inference(img, pil_img.size)
img_name = img_path.stem
if labels is None or boxes is None or confidence is None:
with open(no_pred, 'a') as f:
f.write(f'No predictions for img {img_name} at score {CONFIDENCE_THR}\n')
continue
else:
out_boxes[img_name] = {}
out_boxes[img_name]['labels'] = [CLASSES[cl_idx] for cl_idx in labels.tolist()]
out_boxes[img_name]['boxes'] = boxes.tolist()
out_boxes[img_name]['scores'] = confidence.tolist()
with open(out_box_file, 'w') as out_json:
json.dump(out_boxes, out_json)
# -
import gc
gc.collect()
torch.cuda.empty_cache()
# ## Log single images
# +
with open('data/radio-galaxy/annotations/test.json') as infile:
annotations = json.load(infile)
with open(out_box_file) as infile:
pred_boxes = json.load(infile)
id_to_filename = {}
for img in annotations['images']:
id_to_filename[img['id']] = img['file_name'].split('.')[0]
gt_boxes = {}
# placeholder = Image.open('data/radio-galaxy/val/sample1_galaxy0011.png')
for ann in annotations['annotations']:
img_id = ann['image_id']
img_name = id_to_filename[img_id]
w, h = annotations['images'][img_id]['width'], annotations['images'][img_id]['height']
if img_name not in gt_boxes:
gt_boxes[img_name] = {'boxes': [], 'labels': []}
bbox = ann['bbox'].copy()
bbox[2] += bbox[0]
bbox[3] += bbox[1]
# bbox[0] = bbox[0] / w * 800
# bbox[2] = bbox[2] / w * 800
# bbox[1] = bbox[1] / h * 800
# bbox[3] = bbox[3] / h * 800
class_id = ann['category_id']
gt_boxes[img_name]['boxes'].append(bbox)
gt_boxes[img_name]['labels'].append(class_id)
# +
import torch
pred_list = []
no_pred = []
for img in sorted(pred_boxes):
for k, v in pred_boxes[img].items():
if not isinstance(v, torch.Tensor):
pred_boxes[img][k] = torch.tensor(v)
if not pred_boxes[img]:
# Handle missing predictions
pred_boxes[img]['boxes'] = torch.tensor([[0.,0.,0.,0.]])
pred_boxes[img]['labels'] = torch.tensor([0])
pred_boxes[img]['scores'] = torch.tensor([0.])
pred_list.append(pred_boxes[img])
gt_list = []
for img in sorted(gt_boxes):
for k, v in gt_boxes[img].items():
gt_boxes[img][k] = torch.tensor(v)
gt_list.append(gt_boxes[img])
# -
len(pred_list)
import torchmetrics
map50 = torchmetrics.MAP(class_metrics=True)
map50.update(pred_list, gt_list)
map50.compute()
# +
import random
img_path = Path('data/radio-galaxy/val/sample18_galaxy0091.png')
# img_path = random.sample(list(test_dir.glob('*.png')), 1)[0]
batch_idx = 0
pil_img = Image.open(img_path).convert("RGB")  # prepare_img expects a PIL image, not a path
img = prepare_img(pil_img)
labels, boxes, scores = run_inference(img, pil_img.size)
denorm_img = inv_normalize(img.squeeze(0))
orig_image = torchvision.transforms.functional.to_pil_image(denorm_img)
fig = log_image(orig_image, labels, boxes, scores, 'Prediction', CLASSES, COLORS)
img_name = img_path.stem
print(img_name)
gtb, gtl = gt_boxes[img_name].values()
gts = [1] * len(gtl)
fig = log_image(orig_image, gtl, gtb, gts, 'GT', CLASSES, COLORS)
# +
fig = plt.figure(figsize=(16,10))
plt.imshow(orig_image)
ax = plt.gca()
ax.add_patch(plt.Rectangle((10, 10), 100, 100,
fill=False, color=COLORS[1], linewidth=3))
text = f'{CLASSES[1]}: {0.9:0.2f}'
ax.text(0, 0, text, fontsize=15,
bbox=dict(facecolor='yellow', alpha=0.5))
# -
| plot_logs_and_inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
import json
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
# -
from google.cloud import bigquery
query="""
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
publicdata.samples.natality
WHERE year > 2000
LIMIT 10000
"""
df = bigquery.Client().query(query).to_dataframe()
df.head()
df.describe()
df = df.dropna()
df = shuffle(df, random_state=2)
labels = df['weight_pounds']
data = df.drop(columns=['weight_pounds'])
data['is_male'] = data['is_male'].astype(int)
data.head()
x,y = data,labels
x_train,x_test,y_train,y_test = train_test_split(x,y)
model = Sequential([
Dense(64, activation='relu', input_shape=(len(x_train.iloc[0]),)),
Dense(32, activation='relu'),
Dense(1)]
)
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.MeanSquaredError(),
metrics=['mae', 'mse'])
model.summary()
model.fit(x_train, y_train, epochs=10, validation_split=0.1)
num_examples = 10
predictions = model.predict(x_test[:num_examples])
for i in range(num_examples):
print('Predicted val: ', predictions[i][0])
print('Actual val: ',y_test.iloc[i])
print()
wit_data = pd.concat([x_test, y_test], axis=1)
def custom_predict(examples_to_infer):
preds = model.predict(examples_to_infer)
return preds
try:
import google.colab
# !pip install --upgrade witwidget
except:
pass
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
config_builder = (WitConfigBuilder(wit_data[:500].values.tolist(), data.columns.tolist() + ['weight_pounds'])
.set_custom_predict_fn(custom_predict)
.set_target_feature('weight_pounds')
.set_model_type('regression'))
WitWidget(config_builder, height=800)
| outreachy codelab.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="hJg6tXyUoXNO"
import tensorflow as tf
# + colab={"base_uri": "https://localhost:8080/"} id="TdWt39Joonp4" outputId="6b0227be-cc18-459c-d99e-57e6996b26cb"
x = tf.range(10) ## any data tensor
dataset = tf.data.Dataset.from_tensor_slices(x)
# displaying the dataset object shows its element type and shape
dataset
# + colab={"base_uri": "https://localhost:8080/"} id="ktH_pqKVpQQK" outputId="56d29186-3e97-4411-bd32-1b76148ac038"
for item in dataset:
print(item)
# + [markdown] id="wjGuVCwgq9YC"
# ## Chaining Transformations
# - applying transformations to the dataset for manipulation
# + colab={"base_uri": "https://localhost:8080/"} id="CbvcGWfMqw2f" outputId="910c1cc1-9df3-4db2-ccfb-b66947564852"
dataset = dataset.repeat(3).batch(7)
for item in dataset:
print(item)
# + colab={"base_uri": "https://localhost:8080/", "height": 374} id="UwcYwAm-rOmX" outputId="5ceff9df-3af1-4dc9-e2ee-9f2f07aa873e"
dataset = dataset.map(lambda x: x*2)
# tf.data.experimental.unbatch() is deprecated; use the Dataset method instead:
# dataset = dataset.unbatch()
# + [markdown] id="YtC4400kusPa"
# ## Shuffling the Dataset
#
# + colab={"base_uri": "https://localhost:8080/"} id="qEIJmSEVuwEV" outputId="aaf0a6b0-1b2f-44b5-c31f-7ba5a78537b4"
dataset = tf.data.Dataset.range(10).repeat(3)
dataset = dataset.shuffle(buffer_size=5, seed=42).batch(7)
#for loop to print items in the dataset that uses tensors to hold data
for item in dataset:
print(item)
# + id="vy3RUxxsv4M6"
# repeat() - on a shuffled dataset will generate a new order at every iteration
# can set reshuffle_each_iteration=False
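# The buffer-based shuffle used by `Dataset.shuffle` can be sketched in plain
# Python (an illustration of the algorithm, not TensorFlow's actual
# implementation): keep a buffer of `buffer_size` elements, emit a uniformly
# random one, and refill from the stream.

```python
import random

def buffered_shuffle(stream, buffer_size, seed=None):
    """Approximate shuffle: yields elements from a sliding buffer,
    mimicking tf.data.Dataset.shuffle(buffer_size)."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            # emit a random element from the buffer, then keep filling
            yield buf.pop(rng.randrange(len(buf)))
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

out = list(buffered_shuffle(range(10), buffer_size=5, seed=42))
print(sorted(out) == list(range(10)))  # True: a permutation of 0..9
```

With `buffer_size=1` every element is emitted as soon as it arrives, so the order is unchanged — which is why a small buffer gives only a weak shuffle.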
# + [markdown] id="Z13-a7_ewflz"
# ## Interleaving Lines from Multiple Files
# + id="m4ciJ_ETwrTq"
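# What `interleave` does — reading lines from `cycle_length` sources alternately —
# can be illustrated with plain Python lists standing in for files (made-up
# contents, not a TensorFlow API):

```python
from itertools import zip_longest

def interleave_lines(files, cycle_length=2):
    """Round-robin lines from cycle_length sources at a time — a rough sketch
    of tf.data.Dataset.interleave over per-file TextLineDatasets."""
    out = []
    for start in range(0, len(files), cycle_length):
        group = files[start:start+cycle_length]
        for row in zip_longest(*group):          # one line from each source per round
            out.extend(line for line in row if line is not None)
    return out

fileA = ["a1", "a2", "a3"]
fileB = ["b1", "b2"]
print(interleave_lines([fileA, fileB]))  # ['a1', 'b1', 'a2', 'b2', 'a3']
```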
| code-reference/jupyter-notebooks/testerData.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Decision Tree Classification
# **Importing Packages**
# +
# NumPy allows us to work with arrays.
import numpy as np
# Matplotlib allows us to plot charts.
import matplotlib.pyplot as plt
# Pandas allows us to import the dataset and to create the matrix of features
# (independent variables) and the dependent variable vector.
import pandas as pd
# -
# **Importing Dataset**
# - The independent variables are usually in the first columns of the dataset and the dependent variable is usually in the last column.
# - X is Independent Variable.
# - Y is Dependent Variable.
# +
dataset = pd.read_csv('Social_Network_Ads.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(x)
# -
print(y)
# + [markdown] colab_type="text" id="WemVnqgeA70k"
# **Splitting the dataset into the Training set and Test set**
# +
# Importing Package
from sklearn.model_selection import train_test_split
# Dividing training and test set.
# An 80-20 split is a common choice; here 25% of the data is held out for testing.
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 0)
print(x_train)
# -
print(y_train)
# + [markdown] colab_type="text" id="YS8FeLHYS-nI"
# **Feature Scaling**
# +
# Importing Package
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
# Fitting and Transforming
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)
# -
print(x_train)
print(x_test)
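# What `StandardScaler` computes can be reproduced with plain NumPy on made-up
# data: standardize with the *training* mean and standard deviation, then apply
# those same training statistics to the test set.

```python
import numpy as np

x_tr = np.array([[30., 87000.], [40., 50000.], [25., 79000.]])  # illustrative values
x_te = np.array([[35., 60000.]])

mu = x_tr.mean(axis=0)
sigma = x_tr.std(axis=0)      # StandardScaler uses the population std (ddof=0)

x_tr_s = (x_tr - mu) / sigma  # equivalent of fit_transform on the training set
x_te_s = (x_te - mu) / sigma  # transform the test set with *training* statistics

print(x_tr_s.mean(axis=0))    # ~[0, 0]
print(x_tr_s.std(axis=0))     # ~[1, 1]
```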
# + [markdown] colab_type="text" id="eiU6D2QFRjxY"
# **Training the Decision Tree Classification model on the Training set**
# +
# Importing Package
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = "entropy", random_state = 0)
# Fitting
classifier.fit(x_train, y_train)
# -
# **Predicting a new result**
print(classifier.predict(sc.transform([[30, 87000]])))
# + [markdown] colab_type="text" id="aPYA5W1pDBOE"
# **Predicting the Test set results**
# +
# Predicting
y_pred = classifier.predict(x_test)
# Concatenating and Reshaping
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
# First column is the predicted value and Second column is the real value.
# -
# **Making the confusion Matrix**
# +
# Importing Package
from sklearn.metrics import confusion_matrix, accuracy_score
# Confusion Matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
# Accuracy Score
accuracy_score(y_test, y_pred)
# -
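# Accuracy can also be read straight off the confusion matrix: correct
# predictions lie on the diagonal, so accuracy is the trace divided by the total
# count. A sketch with a hypothetical 2x2 matrix:

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class
cm = np.array([[62,  6],
               [ 3, 29]])

# 62 + 29 correct predictions out of 100 samples
accuracy = np.trace(cm) / cm.sum()
print(accuracy)  # 0.91
```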
# **Visualising the Training Set results**
# +
# Importing Package
from matplotlib.colors import ListedColormap
x_set, y_set = sc.inverse_transform(x_train), y_train
x1, x2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 10, stop = x_set[:, 0].max() + 10, step = 1),
np.arange(start = x_set[:, 1].min() - 1000, stop = x_set[:, 1].max() + 1000, step = 1))
plt.contourf(x1, x2, classifier.predict(sc.transform(np.array([x1.ravel(), x2.ravel()]).T)).reshape(x1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Decision Tree Classification (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
# -
# **Visualising the Test Set results**
# +
# Importing Package
from matplotlib.colors import ListedColormap
x_set, y_set = sc.inverse_transform(x_test), y_test
x1, x2 = np.meshgrid(np.arange(start = x_set[:, 0].min() - 10, stop = x_set[:, 0].max() + 10, step = 1),
np.arange(start = x_set[:, 1].min() - 1000, stop = x_set[:, 1].max() + 1000, step = 1))
plt.contourf(x1, x2, classifier.predict(sc.transform(np.array([x1.ravel(), x2.ravel()]).T)).reshape(x1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(x1.min(), x1.max())
plt.ylim(x2.min(), x2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Decision Tree Classification (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
| Decision Tree Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:jcop]
# language: python
# name: conda-env-jcop-py
# ---
from jcopdl.callback import set_config
# +
data = "damped_sine"
# data = "jkse"
config = set_config({
"input_size": 1,
"seq_len": 0,
"batch_size": 0,
"output_size": 1,
"hidden_size": 0,
"num_layers": 0,
"dropout": 0.,
"bidirectional": "____", # True/False
"cell_type": "____" # rnn/gru/lstm
})
lr = "_____"
# -
# # Do not edit the code below this point
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import torch
from torch import nn, optim
from jcopdl.callback import Callback
from jcopdl.utils.dataloader import TimeSeriesDataset
from torch.utils.data import DataLoader
from utils import data4pred, pred4pred
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
class RNN(nn.Module):
def __init__(self, cell_type, input_size, output_size, hidden_size, num_layers, dropout, bidirectional):
super().__init__()
if cell_type == "rnn":
rnn_block = nn.RNN
elif cell_type == "lstm":
rnn_block = nn.LSTM
elif cell_type == "gru":
rnn_block = nn.GRU
self.rnn = rnn_block(input_size, hidden_size, num_layers, dropout=dropout, bidirectional=bidirectional)
if bidirectional:
hidden_size = 2*hidden_size
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, x, hidden):
x, hidden = self.rnn(x, hidden)
x = self.fc(x)
return x, hidden
if data == "damped_sine":
df = pd.read_csv("data/sine_new.csv", parse_dates=["Date"], index_col="Date")
df.value = df.value.transform(lambda x: (x-x.mean())/x.std())
col = "value"
elif data == "jkse":
df = pd.read_csv("data/jkse.csv", parse_dates=["Date"], index_col="Date")
df = df[~df.price.isna()]
df.price = df.price.transform(lambda x: (x-x.mean())/x.std())
col = "price"
ts_train, ts_test = train_test_split(df, test_size=0.2, shuffle=False)
train_set = TimeSeriesDataset(ts_train, col, config.seq_len)
trainloader = DataLoader(train_set, batch_size=config.batch_size)
test_set = TimeSeriesDataset(ts_test, col, config.seq_len)
testloader = DataLoader(test_set, batch_size=config.batch_size)
model = RNN(config.cell_type, config.input_size, config.output_size, config.hidden_size,
config.num_layers, config.dropout, config.bidirectional).to(device)
criterion = nn.MSELoss(reduction='mean')
optimizer = optim.AdamW(model.parameters(), lr=lr)
callback = Callback(model, config, outdir=f'model/{data}/')
from tqdm.auto import tqdm
def loop_fn(mode, dataset, dataloader, model, criterion, optimizer, device):
if mode == "train":
model.train()
elif mode == "test":
model.eval()
cost = 0
for feature, target in tqdm(dataloader, desc=mode.title()):
feature, target = feature.to(device), target.to(device)
output, hidden = model(feature, None)
loss = criterion(output, target)
if mode == "train":
loss.backward()
optimizer.step()
optimizer.zero_grad()
cost += loss.item() * feature.shape[0]
cost = cost / len(dataset)
return cost
while True:
train_cost = loop_fn("train", train_set, trainloader, model, criterion, optimizer, device)
with torch.no_grad():
test_cost = loop_fn("test", test_set, testloader, model, criterion, optimizer, device)
# Logging
callback.log(train_cost, test_cost)
# Checkpoint
callback.save_checkpoint()
# Runtime Plotting
callback.cost_runtime_plotting()
# Early Stopping
if callback.early_stopping(model, monitor="test_cost"):
callback.plot_cost()
break
# Forecast
train_forecast_set = TimeSeriesDataset(ts_train, col, 1)
trainforecastloader = DataLoader(train_forecast_set)
test_forecast_set = TimeSeriesDataset(ts_test, col, 1)
testforecastloader = DataLoader(test_forecast_set)
plt.figure(figsize=(15, 15))
plt.subplot(311)
data4pred(model, train_forecast_set, trainforecastloader, device)
plt.title("Train")
plt.subplot(312)
data4pred(model, test_forecast_set, testforecastloader, device)
plt.title("Test")
plt.subplot(313)
pred4pred(model, test_forecast_set, testforecastloader, device, n_prior=400, n_forecast=100)
plt.title("Test");
| 16 - Recurrent Neural Network/Part 7 - Tuning Exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Plan for Dominant Del Assay plasmid pPS1
from IPython.display import Image
Image(url='http://cancerres.aacrjournals.org/content/66/7/3480/F1.medium.gif')
# ### The del assay
#
# The image above depicts the principle of the original del assay.
#
# http://cancerres.aacrjournals.org/content/66/7/3480.abstract?sid=9297eb2f-00bd-466f-89d8-b38af175d67f
#
# The RS112 yeast strain contains a plasmid carrying the LEU2 gene and an internal fragment of the yeast HIS3 gene integrated into the genome at the HIS3 locus. This resulted in two copies of the his3 gene, one with a terminal deletion at the 3'-end, and the other with a terminal deletion at the 5'-end. There are ~400 bp of homology between the two copies (striped region). B, DNA strand breakage leads to bidirectional degradation until homologous single-stranded regions are exposed. C, annealing of homologous regions. D, reversion to HIS+ phenotype and deletion of plasmid.
#
# ### the pPS1 plasmid
#
# This cassette consists of two dominant markers HphMX4 and the kanamycin resistance gene from the E. coli transposon TN903 "kan".
#
# The HphMX4 marker is the Hygromycin B resistance gene from an E. coli [plasmid](http://www.ncbi.nlm.nih.gov/pubmed/6319235) under control of the Ashbya gossypii TEF1 promoter and terminator.
#
# The idea is to split the HphMX4 marker in two pieces so that there is a shared homology, like the HIS3 gene of the del assay. The kan gene will be controlled by the promoter and terminator from the Kluyveromyces lactis TEF1 homolog.
#
# The TEF1 promoter-kan-TEF1 terminator fragments are cloned inside the HphMX4 marker in such a way that there is a region of homology on each side by which the TEF1 promoter-kan-TEF1 terminator can be lost and the HphMX4 gene reconstituted.
#
# The whole construct is made by gap repair in one reaction.
#
#
# ### Material
#
# |DNA | Source -80 |
# |---------|------------------|
# |pAG32 | box 3 pos 45 |
# |pMEC1030 | Filipa #114 |
# |pUG6 | box 3 pos 55 |
# |YIplac128| box 1 pos 81 |
#
#
# K. lactis is on plate
import pydna
# The plasmid pAG32 contains the HphMX4 marker gene. It is available from [EUROSCARF](http://www.euroscarf.de/plasmid_details.php?accno=P30106). It was constructed by [<NAME>](http://www.ncbi.nlm.nih.gov/pubmed/10514571).
#
# The sequence is not available from Genbank, but the EUROSCARF website provides it. Unfortunately, the LOCUS line is malformed in this record (genbank format). For this reason I made my own copy of the sequence [here](https://gist.github.com/BjornFJohansson/c5424b7ebbf553c52053). The size of the plasmid is 4160 bp.
text = pydna.download_text("https://gist.githubusercontent.com/BjornFJohansson/c5424b7ebbf553c52053/raw/64318ead495bc7ade8bb598ab192e76a3569a724/pAG32.gb")
pAG32 = pydna.read(text)
pAG32
pAG32.list_features()
# We can inspect the features to see that the HphMX4 cassete starts at 90 and ends at 1727
hyg_cassette = pAG32[90:1727]
# This makes the HphMX4 cassette 1637 bp long.
hyg_cassette
middle = int(len(hyg_cassette)/2)
overlap = 200
# We split the HphMX4 in two parts:
first_part = hyg_cassette[:middle+overlap]
second_part = hyg_cassette[(middle-overlap):]
pydna.eq( first_part[-400:], second_part[:400] )
str(first_part[-400:].seq)
# Now we need to define the promoter and terminator to use for the kan gene.
#
# K. lactis sequences are from the [Yeast Gene Order Browser](http://ygob.ucd.ie/)
#
#
# The Kl TEF1 promoter has the following [sequence](http://ygob.ucd.ie/cgi/browser/intergenic.pl?ver=Latest&gene=KLLA0B09020g&org=klac&nbr=KLLA0B08998g&dir=inverted)
#
# The Kl TEF1 terminator has the following [sequence](http://ygob.ucd.ie/cgi/browser/intergenic.pl?ver=Latest&gene=KLLA0B08998g&org=klac&nbr=KLLA0B08976g&dir=inverted)
promoter_link ="http://ygob.ucd.ie/cgi/browser/intergenic.pl?ver=Latest&gene=KLLA0B09020g&org=klac&nbr=KLLA0B08998g&dir=inverted"
terminator_link = "http://ygob.ucd.ie/cgi/browser/intergenic.pl?ver=Latest&gene=KLLA0B08998g&org=klac&nbr=KLLA0B08976g&dir=inverted"
from bs4 import BeautifulSoup
html = pydna.download_text(promoter_link)
TEF1prom = pydna.read( ''.join( BeautifulSoup( html, "lxml").findAll( text = True ) ) )
TEF1prom
# About 400bp is sufficient for the promoter
TEF1prom = TEF1prom[-400:]
# We establish the terminator in the same manner
html = pydna.download_text(terminator_link)
TEF1term = pydna.read( ''.join( BeautifulSoup( html, "lxml").findAll( text = True ) ) )
TEF1term
# Likewise, 400bp is more than enough for the terminator
TEF1term = TEF1term[:400]
# The kan gene can be found in the pUG6 plasmid. It was constructed by [Güldener et al.](http://nar.oxfordjournals.org/content/24/13/2519.full).
# The sequence is available from [Genbank](http://www.ncbi.nlm.nih.gov/nuccore/AF298793.1). The plasmid itself can be obtained from [EUROSCARF](http://www.euroscarf.de/plasmid_details.php?accno=P30114).
#
# We will download the sequence from Genbank:
gb = pydna.Genbank("<EMAIL>")
pUG6 = gb.nucleotide("AF298793")
# The size is 4009bp
len(pUG6)
pUG6
# We can inspect features to obtain the coding sequence:
pUG6.list_features()
# The feature number 4 is the coding sequence for the kan gene:
kan_orf = pUG6.extract_feature(4)
# Now we have defined five DNA fragments between 0.4 and 1.1 kb
pMEC1030 = pydna.read("pMEC1030.gb")
# ## We could not prep the pMEC1030, so we use pSU0 instead. The name is still pMEC1030 in the code below.
pSU0 = gb.nucleotide("AB215109.1")
pMEC1030 = pSU0
URA3_2micron = pMEC1030[1041:3620]
frags = (URA3_2micron,
first_part,
TEF1prom,
kan_orf,
TEF1term,
second_part)
frags
# We will also need a vector backbone for the construction. We will use YIplac128.
YIplac128 = gb.nucleotide("X75463").looped()
from Bio.Restriction import SmaI
YIplac128_smaI = YIplac128.linearize(SmaI)
from Bio.Restriction import XhoI, SpeI
# # There is a bug below! The SpeI site was added to the end of the "first_part" and not to the beginning!
#
#
#
#
((p1, p2),
(p3, p4),
(p5, p6),
(p7, p8),
(p9, p10),
(p11, p12))= pydna.assembly_primers((second_part,
pydna.Dseqrecord( XhoI.site ),
URA3_2micron,
pydna.Dseqrecord( SpeI.site ),
first_part,
TEF1prom,
kan_orf,
TEF1term),
vector=YIplac128_smaI, target_tm=50)
p1.id= "dda1_2nd_f"
p2.id= "dda2_2nd_r"
p3.id= "dda3_URA3_2my_f"
p4.id= "dda4_URA3_2my_r"
p5.id= "dda5_1st_f"
p6.id= "dda6_1st_r"
p7.id= "dda7_Kl_pr_f"
p8.id= "dda8_Kl_pr_r"
p9.id= "dda9_kan_f"
p10.id= "dda10_kan_r"
p11.id= "dda11_Kl_tr_f"
p12.id= "dda12_Kl_tr_r"
(p2, p3, p4, p5, p6, p7, p8, p9, p10, p11) = [p[-40:] for p in (p2, p3, p4, p5, p6, p7, p8, p9, p10, p11)]
p1=p1[-50:]
p12=p12[-50:]
second_part_prd = pydna.pcr(p1,p2, pAG32)
URA3_2micron_prd = pydna.pcr(p3,p4, pMEC1030)
first_part_prd = pydna.pcr(p5,p6, pAG32)
prom_prd = pydna.pcr(p7, p8, TEF1prom)
kan_prd = pydna.pcr(p9, p10, pUG6)
term_prd = pydna.pcr(p11, p12, TEF1term)
prods = (URA3_2micron_prd,
first_part_prd,
prom_prd,
kan_prd,
term_prd,
second_part_prd)
names = ("URA3_2my",
"prom-Hph",
"KlTEF1prom",
"kan_orf",
"KlTEF1term",
"Hph-term")
for f,n in zip(prods, names):
f.name = n
asm = pydna.Assembly(( YIplac128_smaI,
URA3_2micron_prd,
first_part_prd,
prom_prd,
kan_prd,
term_prd,
second_part_prd), limit = 26)
asm
candidate = asm.circular_products[0]
candidate.figure()
pPS1 = candidate
pPS1.cseguid()
primers = (p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12)
[len(p) for p in primers]
for p in primers:
print(p.format("tab"))
pPS1.name = "pPS1"
pPS1.description=""
pPS1.stamp()
pPS1.write("pPS1.gb")
r = pydna.read("pPS1.gb")
r.verify_stamp()
# ## PCR conditions
for prd in prods:
print("product name:", prd.name)
print("template:", prd.template.name)
print(prd.program())
print("----------------------------------------------------------")
from IPython.display import Image
#Image("Generuler_1kb_marker_Fermentas_SM0331.jpg")
Image("Paulo_GeneRuler.png")
# # Use
#
# The plan is to integrate the cassette so that the HIS3 marker is removed. We chose HIS3 since this is the location of the del construct in RS112 and derivatives.
#
# We should use a leu2 HIS3 background for the integration.
#
# The resulting strain should be G418R his3 LEU2 and HygS. After recombination, the cells will be G418S his3 leu2 and HygR.
#
# The cassette will be amplified so that the 2µ and URA3 sequences are left out.
#
#
#
#
#
from pygenome import sg
his3 = pydna.Dseqrecord( sg.gene["HIS3"].locus() )
# The his3 sequence contains the whole HIS3 locus including promoter and terminator.
his3.write("his3.gb")
# Paulo suggested these primers for integration of cassette in the HIS3 locus
intprim1, intprim2 = pydna.parse('''
>F
CTT TCC CGC AAT TTT CTT TTT CTA TTA CTC TTG GCC TCC T aaaactgtattataagta
>R
TAT ATA TAT CGT ATG CTG CAG CTT TAA ATA ATC GGT GTC A gcg TT AGT ATC GAA TCG ACA G
''')
# After discussion we arrived at the final primers below:
intprim1, intprim2 = pydna.parse('''
>pPS1_his_f
CTT TCC CGC AAT TTT CTT TTT CTA TTA CTC TTG GCC TCC T agagcttcaatttaattatatcagttattatcc
>pPS1_his_r
TAT ATA TAT CGT ATG CTG CAG CTT TAA ATA ATC GGT GTC A gcg TT AGT ATC GAA TCG ACA G
''')
print(intprim1.format("tab"))
print(intprim2.format("tab"))
intprim1
intprim2
prd = pydna.pcr(intprim1, intprim2, pPS1)
prd.figure()
prd.program()
prd.dbd_program()
# The prd variable contains the 8049 bp PCR product.
#
# The size limit for [Phusion](https://www.neb.com/protocols/1/01/01/pcr-protocol-m0530) polymerase seems to be 10 kb.
#
# The PCR product sequence can be downloaded below.
prd.write("prd.gb")
# The integration of the cassette is simulated below:
asm = pydna.Assembly((his3,prd), max_nodes=3)
asm
cassette_integrated_in_HIS3_locus = asm.linear_products[0]
cassette_integrated_in_HIS3_locus.figure()
cassette_integrated_in_HIS3_locus.write("cassette_integrated_in_HIS3_locus.gb")
# http://webpcr.appspot.com/
# Screening primers for colony PCR:
#
# >A-HIS3
# TGACGACTTTTTCTTAATTCTCGTT
#
# >D-HIS3
# GCTCAGTTCAGCCATAATATGAAAT
#
| notebooks/.old/.dominant_del_assay_oldest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # VERIFICATION TESTING
#
# # HER2 One Scanner - Aperio NIH
#
# - 5-Fold (80/20) split, No Holdout Set
# - Truth = Categorical from Mean of 7 continuous scores
# - Epochs run until automatic early stopping when the loss changes by less than 0.001
# - LeNet model, 10 layers, Dropout (0.7)
import numpy as np
import pandas as pd
import random
from keras.callbacks import EarlyStopping
from PIL import Image
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, Lambda
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator
from sklearn import metrics
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc, classification_report
import csv
import cv2
import scipy
import os
# %matplotlib inline
import matplotlib.pyplot as plt
# +
#For single scanner
BASE_PATH = '/home/diam/Desktop/1Scanner_VerificationTest_HER2data/Aperio_NIH/'
#BASE PATH for working from home:
#BASE_PATH = '/home/OSEL/Desktop/HER2_data_categorical/'
batch_size = 32
num_classes = 3
# -
# ## Get Data - Practice
# +
#This is the version from Ravi's code:
#FDA
#X_FDA = []
#idx_FDA = []
#for index, image_filename in list(enumerate(BASE_PATH)):
# img_file = cv2.imread(BASE_PATH + '/' + image_filename)
# if img_file is not None:
#img_file = smisc.imresize(arr = img_file, size = (600,760,3))
# img_file = smisc.imresize(arr = img_file, size = (120,160,3))
# img_arr = np.asarray(img_file)
# X_FDA.append(img_arr)
# idx_FDA.append(index)
#X_FDA = np.asarray(X_FDA)
#idx_FDA = np.asarray(idx_FDA)
#random.seed(rs)
#random_id = random.sample(idx_FDA, len(idx_FDA)/2)
#random_FDA = []
#for i in random_id:
# random_FDA.append(X_FDA[i])
#random_FDA = np.asarray(random_FDA)
# -
# ## Get Data - Real
def get_data(folder):
X = []
y = []
filenames = []
for hclass in os.listdir(folder):
if not hclass.startswith('.'):
            if hclass in ["1"]:
                label = 1
            elif hclass in ["2"]:
                label = 2
            else:
                label = 3
for image_filename in os.listdir(folder + hclass):
filename = folder + hclass + '/' + image_filename
img_file = cv2.imread(folder + hclass + '/' + image_filename)
if img_file is not None:
img_file = scipy.misc.imresize(arr=img_file, size=(120, 160, 3))
img_arr = np.asarray(img_file)
X.append(img_arr)
y.append(label)
filenames.append(filename)
X = np.asarray(X)
y = np.asarray(y)
z = np.asarray(filenames)
    return X, y, z
# +
X, y, z = get_data(BASE_PATH)
#print(X)
#print(y)
#print(z)
print(len(X))
print(len(y))
print(y)
print(len(z))
#INTEGER ENCODE
#https://machinelearningmastery.com/how-to-one-hot-encode-sequence-data-in-python/
encoder = LabelEncoder()
y_cat = np_utils.to_categorical(encoder.fit_transform(y))
print(y_cat)
# -
# ### Old Code
# +
#encoder = LabelEncoder()
#encoder.fit(y)
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
#encoded_y_train = encoder.transform(y_train)
#encoded_y_test = encoder.transform(y_test)
#y_train = np_utils.to_categorical(encoded_y_train)
#y_test = np_utils.to_categorical(encoded_y_test)
# +
#X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
# -
# ## Fit Model with K-Fold X-Val
# +
kf = KFold(n_splits = 5, random_state=5, shuffle=True)
print(kf.get_n_splits(y))
print(kf)
#for train_index, test_index in kf.split(y):
# X_train, X_test = X[train_index], X[test_index]
# print(train_index, test_index)
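# As a quick sanity check of the fold sizes, the KFold split can be inspected on a toy array first (a sketch; the real loop uses `y_cat` below):

import numpy as np
from sklearn.model_selection import KFold

toy = np.arange(10)
kf_demo = KFold(n_splits=5, random_state=5, shuffle=True)
# each of the 5 folds trains on 8 samples and holds out 2
fold_sizes = [(len(tr), len(te)) for tr, te in kf_demo.split(toy)]
print(fold_sizes)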
# +
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(y_cat):
fold+=1
print("fold #{}".format(fold))
X_train = X[train]
y_train = y_cat[train]
X_test = X[test]
y_test = y_cat[test]
#encoder = LabelEncoder()
#encoder.fit(y_test)
#y_train = np_utils.to_categorical(encoder.transform(y_train))
#y_test = np_utils.to_categorical(encoder.transform(y_test))
model = Sequential()
model.add(Lambda(lambda x: x * 1./255., input_shape=(120, 160, 3), output_shape=(120, 160, 3)))
model.add(Conv2D(32, (3, 3), input_shape=(120, 160, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.7))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=25, verbose=1, mode='auto')
model.fit(
X_train,
y_train,
validation_data=(X_test,y_test),
callbacks=[monitor],
shuffle=True,
batch_size=batch_size,
verbose=0,
epochs=1000)
pred = model.predict(X_test)
oos_y.append(y_test)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
#measure the fold's accuracy
y_compare = np.argmax(y_test,axis=1) #for accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print("Fold Score (accuracy): {}".format(score))
print(y_test)
# -
| HER2/1 Scanner Multi-Class/Verification Testing/VerificationTest_multiclassHER2_1scanner-Aperio_NIH.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# You will need the class Car for the next exercises. The class Car has four data attributes: make, model, color and number of owners (owner_number). The method <code> car_info() </code> prints out the data attributes and the method <code>sell()</code> increments the number of owners.
class Car(object):
    def __init__(self, make, model, color):
        self.make = make
        self.model = model
        self.color = color
        self.owner_number = 0
def car_info(self):
print("make: ",self.make)
print("model:", self.model)
print("color:",self.color)
print("number of owners:",self.owner_number)
def sell(self):
self.owner_number=self.owner_number+1
# <h3> Create a Car object </h3>
# Create a <code> Car </code> object my_car with the given data attributes:
# +
make="BMW"
model="M3"
color="red"
my_car = Car(make, model, color)
# -
# <h3> Data Attributes </h3>
# Use the method car_info() to print out the data attributes
# + jupyter={"outputs_hidden": false}
my_car.car_info()
# -
# <h3> Methods </h3>
# Call the method <code> sell() </code> in the loop, then call the method <code> car_info()</code> again
# + jupyter={"outputs_hidden": false}
for i in range(5):
print(i)
my_car.sell()
my_car.car_info()
# -
# <hr>
# <small>Copyright © 2018 IBM Cognitive Class. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license/).</small>
| PY0101EN-3.4_notebook_quizz_objects.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#
#
# TensorFlow Quantum example, adapted from
# [*TensorFlow Quantum: A Software Framework for Quantum Machine Learning*](http://arxiv.org/abs/2003.02989)
# + [markdown] pycharm={"name": "#%% md\n"}
# Loading of libraries and initialization
# +
import random
import cirq
from cirq.contrib.svg import SVGCircuit
import matplotlib.pyplot as plt
import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq
if not tf.config.list_physical_devices('GPU'):
print("Warning: GPU was not found, so simulations can be very slow")
# + [markdown] pycharm={"name": "#%% md\n"}
# TensorFlow Quantum example
# + pycharm={"name": "#%%\n"}
def generate_dataset(
qubit, theta_a, theta_b, num_samples):
q_data = []
labels = []
blob_size = abs(theta_a - theta_b) / 5
for _ in range(num_samples):
coin = random.random()
spread_x, spread_y = np.random.uniform(-blob_size, blob_size, 2)
if coin < 0.5:
label = [1, 0]
angle = theta_a + spread_y
else:
label = [0, 1]
angle = theta_b + spread_y
labels.append(label)
q_data.append(cirq.Circuit(
cirq.Ry(rads=-angle)(qubit),
cirq.Rx(rads=-spread_x)(qubit)
))
return tfq.convert_to_tensor(q_data), np.array(labels)
# Dataset generation
qubit = cirq.GridQubit(0, 0)
theta_a = 1
theta_b = 4
num_samples = 200
q_data, labels = generate_dataset(qubit, theta_a, theta_b, num_samples)
# Quantum parametric model
theta = sympy.Symbol("theta")
q_model = cirq.Circuit(cirq.Ry(rads=theta)(qubit))
q_data_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)
expectation = tfq.layers.PQC(q_model, cirq.Z(qubit))
expectation_output = expectation(q_data_input)
classifier = tf.keras.layers.Dense(2, activation=tf.keras.activations.softmax)
classifier_output = classifier(expectation_output)
model = tf.keras.Model(inputs=q_data_input, outputs=classifier_output)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.1)
loss = tf.keras.losses.CategoricalCrossentropy()
model.compile(optimizer=optimizer, loss=loss)
history = model.fit(x=q_data, y=labels, epochs=50)
test_data, _ = generate_dataset(qubit, theta_a, theta_b, 1)
p = model.predict(test_data)[0]
print(f"prob(a)={p[0]:.4f}, prob(b)={p[1]:.4f}")
| examples/binary_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Loading the Data
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("data/train.csv", sep=',')
df.head()
df.count()
df.describe()
df.shape
df['Pclass'].unique()
df.dtypes
# +
survived = df[df.Survived == 1]
dead = df[df.Survived == 0]
# Counts
print(survived.shape)
print(dead.shape)
# -
survived
df.isna().sum()
# To see the possible values (group-by)
df['Cabin'].unique()
def plot_hist(feature, bins=40):
    plt.rcParams["figure.figsize"] = [16, 9]
    x1 = np.array(dead[feature].dropna())
    x2 = np.array(survived[feature].dropna())
    plt.hist([x1, x2], label=["Died", "Survived"], bins=bins, color=['r', 'b'])
    plt.legend(loc="upper left")
    plt.title('Relative distribution of %s' % feature)
    plt.show()
plot_hist('Cabin')
# ## Function that replaces NaN values
# median if quantitative
# mode if qualitative
cabin = df['Cabin'].unique()[5]
def remplaceNanValues(columnName):
    cpt = 0
    if columnName == 'Age':
        x = df[columnName].median()
    elif columnName == 'Cabin':
        x = cabin  # arbitrary placeholder cabin picked above
    else:
        x = df[columnName].mode()[0]  # mode for qualitative columns
    for index, row in df.iterrows():
        if pd.isna(row[columnName]):
            cpt += 1
            df.loc[index, columnName] = x
    print(cpt)
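# The row-by-row loop above is slow; a hedged, vectorized sketch of the same rule (median for quantitative, mode for qualitative) on a toy DataFrame standing in for the Titanic data:

import pandas as pd

df_demo = pd.DataFrame({
    "Age": [22.0, None, 35.0, None, 28.0],
    "Embarked": ["S", "C", None, "S", "S"],
})

# fill numeric NaNs with the median, categorical NaNs with the mode
df_demo["Age"] = df_demo["Age"].fillna(df_demo["Age"].median())
df_demo["Embarked"] = df_demo["Embarked"].fillna(df_demo["Embarked"].mode()[0])

print(df_demo.isna().sum().sum())  # 0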
df['Age'].median()
df['Age'].unique()
remplaceNanValues('Age')
df['Age'].unique()
df.isna().sum()
remplaceNanValues('Embarked')
df.isna().sum()
remplaceNanValues('Cabin')
df.isna().sum()
# ## Converting ints to strings
# Convert the following columns to strings:
# * Parch
# * Pclass
df['Pclass'] = df['Pclass'].apply(str)
df['Parch'] = df['Parch'].apply(str)
def convertColumnTypeToStr(columnName):
    # Note: assigning into the row yielded by iterrows() does not modify df;
    # a vectorized astype does the conversion in place.
    df[columnName] = df[columnName].astype(str)
# +
#convertColumnTypeToStr('Pclass')
# -
df.dtypes
# ### Choosing the features
features_columns = ['SibSp', 'Fare','Parch']
#to_dummify = ['SibSp','Parch']
to_dummify = ['Parch']
X = df[features_columns]
y = df['Survived']
# ### Dummifying the variables
X.head()
X = pd.get_dummies(X, columns=to_dummify)
df['SibSp'].unique()
df['Parch'].unique()
X.head()
y.head(800)
# # Model
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
def trainAndScore(model, X, y) :
list_test_size = [a/20.0 for a in list(range(0,20,1))][1:]
scores = []
for ts in list_test_size:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=ts, random_state=0)
clf = model.fit(X_train, y_train)
scores.append(clf.score(X_test, y_test))
print("scores : ",scores)
print(np.array(scores).mean())
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Adding the Pclass feature
features_columns = ['SibSp', 'Fare','Parch','Pclass']
to_dummify = ['Parch', 'Pclass']
X = df[features_columns]
y = df['Survived']
X.head()
X = pd.get_dummies(X, columns=to_dummify)
X.head()
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Adding the Age feature
features_columns = ['SibSp', 'Fare','Parch','Pclass', 'Age']
to_dummify = ['Parch', 'Pclass']
X = df[features_columns]
y = df['Survived']
X = pd.get_dummies(X, columns=to_dummify)
X.head()
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
df.dtypes
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Feature Engineering - building the 'is_child' variable
features_columns = ['SibSp', 'Fare','Parch','Pclass', 'Age']
to_dummify = ['Parch', 'Pclass']
X = df[features_columns]
y = df['Survived']
X.head()
plot_hist('Age')
X['is_child'] = X["Age"] <= 11
X.head()
isChild = X[X.is_child == 1]
isChild.shape
isChild['Age'].mean()
X.head(12)
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
model = LogisticRegression()
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Adding the Cabin feature
features_columns = ['SibSp', 'Fare','Parch','Pclass', 'Age' , 'Cabin']
to_dummify = ['Parch', 'Pclass', 'Cabin']
X = df[features_columns]
y = df['Survived']
X['is_child'] = X["Age"] <= 11
X.head(12)
X = pd.get_dummies(X, columns=to_dummify)
X.head()
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
model = LogisticRegression()
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Feature engineering of the best_cabine_to_survive variable
df = pd.read_csv("data/train.csv", sep=',')
df.head(12)
df['Survived'].isna().sum()
# +
#copy = df[['Cabin','Survived']]
# -
survived = df[df.Survived == 1]
mask = df["Survived"] ==1
X = df[['Survived', 'Cabin']]
X[mask]
X[mask].isna().sum()
c = X[mask]['Cabin'].dropna()
c.value_counts()
l = c.value_counts()
for e in l :
if e>= 3: print(e)
best_cabine_to_survive = ['B96 B98','F33','E101']
best_cabine_to_survive
features_columns = ['SibSp', 'Fare','Parch','Pclass', 'Age' ,'Cabin']
to_dummify = ['Parch', 'Pclass', 'Cabin']
X = df[features_columns]
y = df['Survived']
remplaceNanValues('Age')
remplaceNanValues('Cabin')
X['Cabin']
X['best_cabine_to_survive'] = [1 if x in best_cabine_to_survive else 0 for x in X['Cabin']]
X['best_cabine_to_survive'].value_counts()
X['is_child'] = X['Age'] <= 4
X = pd.get_dummies(X, columns=to_dummify)
X.head()
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
model = LogisticRegression()
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
# ### Adding the Title feature from the name
df = pd.read_csv("data/train.csv", sep=',')
# +
import re
name = df['Name'][0]  # avoid shadowing the builtin str
print(name)
x = re.findall("[,]{1}[ ][a-zA-Z]*[.]{1}", name)
print(x)
# -
def getTitle(name):
    return name.split(',')[1].split('.')[0].split(' ')[1]
getTitle(df['Name'][50])
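# A quick check of the extraction logic on a made-up name in the same `Surname, Title. Given` format (not a row from the dataset):

def get_title_demo(name: str) -> str:
    # same split chain as getTitle above
    return name.split(',')[1].split('.')[0].split(' ')[1]

print(get_title_demo("Doe, Mr. John"))  # Mr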
df['Title'] = df['Name'].apply(lambda x : getTitle(x))
df['Surname'] = df['Name'].apply(lambda x : '(' in x)
df.head(12)
remplaceNanValues('Age')
remplaceNanValues('Cabin')
features_columns = ['SibSp', 'Fare','Parch','Pclass', 'Age' ,'Title', 'Surname']
to_dummify = ['Parch', 'Pclass', 'Title']
X = df[features_columns]
y = df['Survived']
X['is_child'] = X['Age'] <= 4
X = pd.get_dummies(X, columns=to_dummify)
X.head()
X.isna().sum()
model = LogisticRegression()
trainAndScore(model, X, y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05, random_state=0)
model = LogisticRegression()
clf = model.fit(X_train, y_train)
score = clf.score(X_test, y_test)
score
| Titanic.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.0 64-bit (''.base_env'': venv)'
# name: python380jvsc74a57bd0b91eb80f53693f01d2f84f840e171702e80a093d6b6f2f8996da71dfff3a5230
# ---
# # INDEX
# 1. `Import`
# 2. `Function` : helper functions used throughout
# 3. `Data Load` : load the data; for the train df, a fold feature is pre-built for StratifiedKFold training
# 4. `Feature engineering`
# FE part. Some FE steps take long, so pickle dump/load is used to save time.
# Slow FE steps are placed above the pickle dump.
# 5. `Feature selection`
# Choose the features used for training
# 6. `Modeling and Training with Cross Validation`
# modeling and training
# Variable **STRATIFY_TRAINING** : depending on `True` or `False`, choose stratified training or a single train/valid split.
# 7. `Prediction analysis` : analyze the predicted values
# 8. `TO DO` : notes on improvements
# # 1. Import
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import seaborn as sns
import lightgbm as lgb
from lightgbm import LGBMClassifier
import os
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from warnings import filterwarnings
from time import time
import time
import datetime
from datetime import datetime
import pickle
from glob import glob
from tqdm import tqdm
filterwarnings('ignore')
pd.options.display.max_columns = 999
# -
# # 2. Function
# the train and test sets must be split with each user's rows kept together
random.seed(42)
def custom_train_test_split(df, ratio=0.7, split=True):
    '''
    Split the interactions grouped by user.
    :param df: dataframe
    :param ratio: fraction of all interaction rows used for training
    :param split: True
    :return:
        train : training data
        test : validation data (only each user's last interaction kept)
    '''
users = list(zip(df['userID'].value_counts().index, df['userID'].value_counts()))
random.shuffle(users)
max_train_data_len = ratio*len(df)
sum_of_train_data = 0
user_ids =[]
for user_id, count in users:
sum_of_train_data += count
if max_train_data_len < sum_of_train_data:
break
user_ids.append(user_id)
train = df[df['userID'].isin(user_ids)]
test = df[df['userID'].isin(user_ids) == False]
    # for the test set, keep only each user's last interaction
test = test[test['userID'] != test['userID'].shift(-1)]
return train, test
# Split grouped by user while also stratifying by each user's problem count (with Stratified CV training in mind)
def make_train_valid_skf(train, n_split=5):
    '''
    In this data, accuracy rises as a user solves more problems, so the folds must
    also be balanced on problem counts for accurate training.
    Shuffle + split taking the number of solved problems into account.
    :param train: train data for training
    :param n_split: Stratify split number
    :return:
        train df with evenly constructed folds
    '''
skf = StratifiedKFold(n_splits = n_split, shuffle=True)
train_stratify = train['userID'].value_counts().to_frame('num')
train_stratify['group'] = train_stratify['num']//170
train_stratify['fold'] = 0
for i, (train_index, valid_index) in enumerate(skf.split(train_stratify, train_stratify.group)):
train_stratify.loc[train_stratify.iloc[valid_index].index, 'fold'] = i
train_stratify.reset_index(inplace=True)
train_stratify.rename(columns = {'index':'userID'}, inplace=True)
train = pd.merge(train, train_stratify[['userID','fold']], how='left', on = 'userID')
return train
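# A minimal sketch of what StratifiedKFold guarantees here: class proportions are preserved in every fold (toy labels stand in for the per-user count groups):

import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = np.array([0] * 8 + [1] * 4)
skf_demo = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
# every validation fold gets 2 samples of class 0 and 1 of class 1
fold_classes = [sorted(labels[te].tolist()) for _, te in skf_demo.split(labels, labels)]
print(fold_classes)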
# +
FEATURES = []
def split_kfold(train, test, fold_num, FEATS=FEATURES):
    '''
    Split the train/valid sets by fold and build the dataframes needed for
    training and inference.
    :param train: train df
    :param test: test df
    :param fold_num: KFold number used to separate the train/valid sets
    :param FEATS: FEATURES used for training
    :return:
        X_train_df, y_train, X_valid_df, y_valid, test_df
    '''
X_train = train[train['fold'] != fold_num]
X_valid = train[train['fold'] == fold_num]
X_valid = X_valid[X_valid['userID'] != X_valid['userID'].shift(-1)]
test = test[test['userID'] != test['userID'].shift(-1)]
    # Choose the features to use (the original baseline used numeric data only)
# FEATS = ['KnowledgeTag', 'user_correct_answer', 'user_total_answer',
# 'user_acc', 'test_mean', 'test_sum', 'tag_mean','tag_sum']
FEATS = FEATURES
y_train = X_train['answerCode']
X_train = X_train.drop(['answerCode'], axis=1)
y_valid = X_valid['answerCode']
X_valid = X_valid.drop(['answerCode'], axis=1)
test = test.drop(['answerCode'], axis=1)
# print(f"X_train : {str(X_train.shape):15s} / y_train : {y_train.shape}")
# print(f"X_valid : {str(X_valid.shape):15s} / y_valid : {y_valid.shape}")
# print(f"test : {str(test.shape):15s}")
# lgb_train = lgb.Dataset(X_train[FEATS], y_train)
# lgb_valid = lgb.Dataset(X_valid[FEATS], y_valid)
X_train_df = X_train[FEATS]
X_valid_df = X_valid[FEATS]
test_df = test[FEATS]
# print(f"\nUse FEATURES : {FEATS}\n")
# print(f"X_train_df : {str(X_train_df.shape):15s} / y_train : {y_train.shape}")
# print(f"X_valid_df : {str(X_valid_df.shape):15s} / y_valid : {y_valid.shape}")
# print(f"test_df : {str(test_df.shape):15s}")
return X_train_df, y_train, X_valid_df, y_valid, test_df
# +
def write_pickle(data, file):
    '''
    Overwrite as a pickle file
    :param data: dataframe
    :param file: file name
    :return:
        None
    '''
pickle_path = os.path.join(data_dir, f'{file}_data.pickle')
with open(pickle_path, 'wb') as f:
pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
def load_pickle(file):
    '''
    Load a pickle file
    :param file: file name
    :return:
        the dataframe for the given file name
    '''
pickle_path = os.path.join(data_dir, f'{file}_data.pickle')
if pickle_path in glob(data_dir + '/*'):
with open(pickle_path, 'rb') as f:
data = pickle.load(f)
return data
# -
def distribution_draw(y_valid, valid_result):
    '''
    Take the valid y_true values and valid predictions and plot their distributions.
    :param y_valid: y_true values
    :param valid_result: valid predictions
    :return:
        check_pred: dataframe of predictions and true values
        check_zero: rows of check_pred whose true value is 0
        check_one: rows of check_pred whose true value is 1
    '''
check_pred = pd.DataFrame(y_valid)
check_pred.rename(columns = {'answerCode':'answer'}, inplace=True)
check_pred['pred'] = valid_result
check_zero = check_pred[check_pred['answer'] == 0].sort_values(by=['pred'], ascending=False)
check_one = check_pred[check_pred['answer'] == 1].sort_values(by=['pred'])
plt.figure(figsize=(16,5))
plt.hist(list(check_zero['pred']),
bins=20,
rwidth=0.9,
label='pred zero',
alpha = 0.5);
plt.hist(list(check_one['pred']),
bins=20,
rwidth=0.9,
label='pred one',
alpha = 0.5);
plt.legend(loc='upper left')
plt.title('Prediction for 0 and 1 - Distribution',
fontweight='bold',
loc='left', size=20)
plt.show();
return check_pred, check_zero, check_one
# ----
# # 3. Data Load
# +
# %%time
data_dir = '/opt/ml/input/data/train_dataset'
csv_file_path = os.path.join(data_dir, 'train_data.csv')
test_csv_file_path = os.path.join(data_dir, 'test_data.csv')
sub_csv_file_path = os.path.join(data_dir, 'sample_submission.csv')
train = pd.read_csv(csv_file_path)
test = pd.read_csv(test_csv_file_path)
# Create the fold groups
train = make_train_valid_skf(train, n_split=5)
display(train.head(), train.shape, test.head(), test.shape)
# -
# # 4. FEATURE ENGINEERING
# 1. Add numeric statistics
# 2. Add categorical statistics
# 3. Add numeric + categorical statistics
# 4. Interaction features
# 5. Feature engineering (e.g. lag features, time series)
# %%time
# Sort as below so each user's sequence is in time order
train.sort_values(by=['userID', 'Timestamp'], inplace=True)
test.sort_values(by=['userID', 'Timestamp'], inplace=True)
# +
# %%time
def convert_time(s):
timestamp = time.mktime(datetime.strptime(s, '%Y-%m-%d %H:%M:%S').timetuple())
return int(timestamp)
train['time'] = train['Timestamp'].apply(convert_time)
test['time'] = test['Timestamp'].apply(convert_time)
# -
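# A hedged sketch of a vectorized alternative to the row-wise `apply(convert_time)` above, on a toy frame (the column name matches this dataset's `Timestamp`):

import pandas as pd

demo = pd.DataFrame({"Timestamp": ["2020-01-01 00:00:00",
                                   "2020-01-01 00:01:30"]})
ts = pd.to_datetime(demo["Timestamp"])
# seconds since the Unix epoch, as integers
epoch_s = (ts - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
print(epoch_s.diff().iloc[1])  # 90.0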
train['time'] = train['time'] - train['time'].shift(1)
test['time'] = test['time'] - test['time'].shift(1)
train.loc[0, 'time'] = 0
test.loc[0,'time'] = 0
# Let's test first without handling the very large values.
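# One hedged option for taming those large gaps later (not applied here) would be to cap them:

import pandas as pd

t = pd.Series([3.0, 12.0, 86400.0, 7.0])  # toy time diffs in seconds
capped = t.clip(upper=600)  # cap at 10 minutes
print(capped.tolist())  # [3.0, 12.0, 600.0, 7.0]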
# +
train['testId'] = train['testId'].apply(lambda x: int(x[2:]))
test['testId'] = test['testId'].apply(lambda x: int(x[2:]))
train['temp_test'] = train['testId'].shift(1)
train.loc[0,'temp_test'] = 0
test['temp_test'] = test['testId'].shift(1)
test.loc[0,'temp_test'] = 0
train['temp_test'] = train['temp_test'] - train['testId']
test['temp_test'] = test['temp_test'] - test['testId']
# +
train['temp_test'] = train['temp_test'].apply(lambda x: 0 if x != 0.0 else 1)
train['time'] = train['time'] * train['temp_test']
train['time_cum'] = train.groupby(['userID','testId'])['time'].transform(lambda x: x.cumsum())
test['temp_test'] = test['temp_test'].apply(lambda x: 0 if x != 0.0 else 1)
test['time'] = test['time'] * test['temp_test']
test['time_cum'] = test.groupby(['userID','testId'])['time'].transform(lambda x: x.cumsum())
# -
write_pickle(train, 'train')
write_pickle(test, 'test')
print(f"train shape : {train.shape}")
print(f"test shape : {test.shape}")
# ----
# `Add time-consuming FE on top of the pickle, then overwrite the pickle`
train = load_pickle('train')
test = load_pickle('test')
print(f"train shape : {train.shape}")
print(f"test shape : {test.shape}")
display(train.head(), train.shape)
le = LabelEncoder()
# cols = ['assessmentItemID','testId','KnowledgeTag']
# for col in cols:
# train[col] = le.fit_transform(train[col])
# test[col] = le.fit_transform(test[col])
# +
# Add per-problem accuracy -> on hold
# train['prob_num'] = train['assessmentItemID'].apply(lambda x: x[-3:])
# test['prob_num'] = test['assessmentItemID'].apply(lambda x: x[-3:])
# prob = (train.groupby('prob_num')['answerCode'].sum()/train.groupby('prob_num')['answerCode'].count()).to_frame('prob_ratio').reset_index()
# train = pd.merge(train, prob, how='left', on='prob_num')
# prob = (test.groupby('prob_num')['answerCode'].sum()/test.groupby('prob_num')['answerCode'].count()).to_frame('prob_ratio').reset_index()
# test = pd.merge(test, prob, how='left', on='prob_num')
# -
train['time'] = train['time'].shift(-1) # time spent on each problem
total_used_time = train.groupby('userID')['time'].cumsum().shift(1).to_frame('total_used_time').fillna(0)
train = pd.concat([train, total_used_time], axis=1)
train
# ## 4-1. 숫자 관련 통계량
# +
# train set
# Cumulatively compute, in time order, each user's problem count, correct-answer count, and accuracy
# category : user, assessment, test, time, tag
# user
train['user_correct_answer'] = train.groupby('userID')['answerCode'].transform(lambda x: x.cumsum().shift(1)) # correct answers so far
train['user_total_answer'] = train.groupby('userID')['answerCode'].cumcount() # total problems attempted so far
#train['future_correct_answer'] =
train['user_acc'] = train['user_correct_answer']/train['user_total_answer'] # running accuracy
# Overall accuracy per testId and KnowledgeTag is computed in one pass
# The data below is reused for the submission dataset as well
correct_t = train.groupby(['testId'])['answerCode'].agg(['mean', 'sum']) # mean and sum of correct answers per test sheet
correct_t.columns = ["test_mean", 'test_sum']
correct_k = train.groupby(['KnowledgeTag'])['answerCode'].agg(['mean', 'sum']) # mean and sum of correct answers per knowledge tag
correct_k.columns = ["tag_mean", 'tag_sum']
train = pd.merge(train, correct_t, on=['testId'], how="left")
train = pd.merge(train, correct_k, on=['KnowledgeTag'], how="left")
train.head(15)
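# The cumulative-accuracy pattern used above can be verified on a tiny frame; the `shift(1)` is what keeps the current row's answer out of its own feature (column names match the cell above, the data is made up):

```python
import pandas as pd

df = pd.DataFrame({'userID': [0, 0, 0, 0],
                   'answerCode': [1, 0, 1, 1]})
# correct answers seen strictly before the current row
df['user_correct_answer'] = df.groupby('userID')['answerCode'] \
    .transform(lambda x: x.cumsum().shift(1))
# number of rows seen strictly before the current row
df['user_total_answer'] = df.groupby('userID')['answerCode'].cumcount()
df['user_acc'] = df['user_correct_answer'] / df['user_total_answer']
```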
# +
# test set
# Cumulatively compute, in time order, each user's problem count, correct-answer count, and accuracy
test['user_correct_answer'] = test.groupby('userID')['answerCode'].transform(lambda x: x.cumsum().shift(1))
test['user_total_answer'] = test.groupby('userID')['answerCode'].cumcount()
test['user_acc'] = test['user_correct_answer']/test['user_total_answer']
# Overall accuracy per testId and KnowledgeTag is computed in one pass
# The data below is reused for the submission dataset as well
correct_t = test.groupby(['testId'])['answerCode'].agg(['mean', 'sum'])
correct_t.columns = ["test_mean", 'test_sum']
correct_k = test.groupby(['KnowledgeTag'])['answerCode'].agg(['mean', 'sum'])
correct_k.columns = ["tag_mean", 'tag_sum']
test = pd.merge(test, correct_t, on=['testId'], how="left")
test = pd.merge(test, correct_k, on=['KnowledgeTag'], how="left")
test.head()
# -
# ----
# # 5. FEATURE SELECTION
display(train.head(), test.head())
# +
# Features used for training
# FEATURES = ['KnowledgeTag']
# FEATURES = ['KnowledgeTag', 'user_correct_answer', 'user_total_answer', 'user_acc',
# 'test_mean', 'test_sum', 'tag_mean','tag_sum',
# 'time', 'time_cum']
FEATURES = ['user_correct_answer', 'user_total_answer', 'user_acc',
'test_mean', 'test_sum', 'tag_mean','tag_sum',
'time', 'time_cum']
# -
display(train[FEATURES].head(), test[FEATURES].head())
# # 6. Modeling and Training with Cross Validation
#
# +
STRATIFY_TRAINING = True
valid_loglosses = []
valid_roc_auc_scores = []
valid_acc_scores = []
valid_result = 0
result = 0
start = time.time()
for i in [0,1,2,3,4]:
print(f'\n####################################### SKF {i+1} TIMES #######################################\n')
# split_kfold helper: use fold i as the validation fold
X_train_df, y_train, X_valid_df, y_valid, test_df = split_kfold(train, test, fold_num=i, FEATS=FEATURES)
params = {
'n_estimators':500, # num_boost_round
'num_leaves':31,
'learning_rate':0.1,
}
sk_lgb = LGBMClassifier(**params)
sk_lgb.fit(X_train_df, y_train,
eval_set = [(X_valid_df, y_valid)],
verbose = 100,
early_stopping_rounds = 100,)
valid_pred = sk_lgb.predict_proba(X_valid_df)[:,1]
test_pred = sk_lgb.predict_proba(test_df)
valid_result += valid_pred
result += test_pred
valid_logloss = sk_lgb.evals_result_['valid_0']['binary_logloss'][-1]
acc = accuracy_score(y_valid, np.where(valid_pred >= 0.5, 1, 0))
auc = roc_auc_score(y_valid, valid_pred)
valid_loglosses.append(valid_logloss)
valid_acc_scores.append(acc)
valid_roc_auc_scores.append(auc)
if not STRATIFY_TRAINING:
break
if STRATIFY_TRAINING:
result /= 5
valid_result /= 5
print(f"\ntime : {time.time() - start:.2f}s\n")
# -
print(' ------ SCORE ------ ')
print(f"| LogLoss : {np.mean(valid_loglosses):.4f} |")
print(f"| Accuracy : {np.mean(valid_acc_scores):.4f} |")
print(f"| ROC_AUC : {np.mean(valid_roc_auc_scores):.4f} |")
print(' ------------------- ')
print(f"{np.mean(valid_loglosses):.4f}, {np.mean(valid_acc_scores):.4f}, {np.mean(valid_roc_auc_scores):.4f}")
# 0.6818, 0.6398, 0.7276
# 0.6851, 0.6060, 0.6878
# 0.6848, 0.6052, 0.6861 : added label-encoded 'assessmentItemID','testId' columns (score dropped)
# 0.6141, 0.6784, 0.7469 : added elapsed time between problems (very large outlier values not yet handled); time also had the largest feature importance here
#*0.6099, 0.6754, 0.7467 : numeric handling of the time feature + time_cum FE; this time Tag importance was higher instead, probably because time's information overlaps with time_cum and got split
# 0.6112, 0.6724, 0.7432 : added median and std for test and tag
# 0.6121, 0.6754, 0.7415 : removed the above & added test difficulty groups
# 0.6118, 0.6716, 0.7437 : removed the above & added hour
# 0.5661, 0.7097, 0.7738 : added prob_ratio (accuracy for each of the 13 problem categories); LB score dropped -> on hold
# 0.5885, 0.6828, 0.7492 : shifted the per-problem solve time to re-align it (its position had been wrong), but the score dropped compared to the misaligned version
lgb.plot_importance(sk_lgb);
# # 7. Prediction Analysis
check_pred, check_zero, check_one = distribution_draw(y_valid, valid_result)
# Merge with X_valid_df to analyze the data used for training against the accuracy
check_df = pd.concat([X_valid_df, check_pred], axis=1)
check_df
# ## Submission
sub = pd.read_csv(sub_csv_file_path)
sub['prediction'] = result[:,1]
sub.to_csv("/opt/ml/output/lgbm_.csv", index=False)
sub
# # 8. TODO
# 1. Feature engineering
# 2. parameter tuning
# 3. Extract users based on whether they answered correctly
| ref_code/lgbm_view_comp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
from PIL import Image
import numpy as np
import torch
from torch.utils.data import DataLoader
from input_pipeline import PairDataset
# -
# # Create an input pipeline
# +
dataset = PairDataset(
first_dir='',
second_dir='',
num_samples=1000,
image_size=64
)
data_loader = DataLoader(
dataset=dataset,
batch_size=30, shuffle=True,
num_workers=1, pin_memory=True
)
# -
# # Get random images
# +
for i, (x, y) in enumerate(data_loader):
break
x = 255.0*x.permute(0, 2, 3, 1).numpy()
y = 255.0*y.permute(0, 2, 3, 1).numpy()
# -
# # Show a grid
images1 = np.concatenate(x, axis=0)
images2 = np.concatenate(y, axis=0)
Image.fromarray(np.concatenate([images1, images2], axis=1).astype('uint8'))
| test_input_pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
st = pd.read_csv('nombres.csv')
st['Proyecto'] = np.random.permutation(['mc-area', 'leave-one-out-estimator', 'grid-robot']*6 + ['mc-area'])
st
| ejercicios-tareas-proyectos/proyectos-final/choosing-util/problem-selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Grade: 8 / 8
# All API's: http://developer.nytimes.com/
# Article search API: http://developer.nytimes.com/article_search_v2.json
# Best-seller API: http://developer.nytimes.com/books_api.json#/Documentation
# Test/build queries: http://developer.nytimes.com/
#
# Tip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.
import requests
# 1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
dates = ['2009-05-10', '2010-05-09', '2009-06-21', '2010-06-20']
for date in dates:
response = requests.get('https://api.nytimes.com/svc/books/v3/lists//.json?list-name=hardcover-fiction&published-date=' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
data = response.json()
print('On', date, 'this was the hardcover fiction NYT best-sellers list:')
for item in data['results']:
for book in item['book_details']:
print(book['title'])
print('')
# 2) What are all the different book categories the NYT ranked in June 6, 2009? How about June 6, 2015?
cat_dates = ['2009-06-06', '2015-06-06']
for date in cat_dates:
cat_response = requests.get('https://api.nytimes.com/svc/books/v3/lists/names.json?published-date=' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
cat_data = cat_response.json()
print('On', date + ', these were the different book categories the NYT ranked:')
categories = []
for result in cat_data['results']:
categories.append(result['list_name'])
print(', '.join(set(categories)))
print('')
# 3) <NAME>'s name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?
#
# Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
# +
gaddafis = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
for gaddafi in gaddafis:
g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + gaddafi + '+libya&api-key=1a25289d587a49b7ba8128badd7088a2')
g_data = g_response.json()
print('There are', g_data['response']['meta']['hits'], 'instances of the spelling', gaddafi + '.')
# +
# TA-COMMENT: As per usual, your commented code is excellent! I love how you're thinking through what might work.
# +
# #HELP try 1.
# #Doesn't show next pages.
# gaddafis = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
# for gaddafi in gaddafis:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + gaddafi + '+libya&page=0&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# print('There are', len(g_data['response']['docs']), 'instances of the spelling', gaddafi)
# +
# #HELP try 2. What I want to do next is
# #if the number of articles != 10 , stop
# #else, add 1 to the page number
# #Tell it to loop until the end result is not 10
# #but right now it keeps crashing
# #Maybe try by powers of 2.
# import time, sys
# pages = range(400)
# total_articles = 0
# for page in pages:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gaddafi+libya&page=' + str(page) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# total_articles = total_articles + articles_on_pg
# print(total_articles)
# time.sleep(0.6)
# +
#HELP try 3. Trying by powers of 2.
#OMG does 'hits' means the number of articles with this text?? If so, where could I find that in the README??
# numbers = range(10)
# pages = []
# for number in numbers:
# pages.append(2 ** number)
# #temp
# print(pages)
# import time, sys
# total_articles = 0
# for page in pages:
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gaddafi+libya&page=' + str(page) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# #temp
# meta_on_pg = g_data['response']['meta']
# print(page, articles_on_pg, meta_on_pg)
# time.sleep(1)
# +
# #HELP (troubleshooting the page number that returns a keyerror)
# #By trial and error, it seems like "101" breaks it. 100 is fine.
# g_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=gadafi+libya&page=101&api-key=1a25289d587a49b7ba8128badd7088a2')
# g_data = g_response.json()
# articles_on_pg = len(g_data['response']['docs'])
# print(articles_on_pg)
# -
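# The stopping condition sketched in "try 2" above (keep paging until a page returns fewer than 10 results) can be expressed without hitting the API at all; here a stubbed `fetch_page` with made-up page sizes stands in for the request:

```python
def fetch_page(page, pages={0: 10, 1: 10, 2: 10, 3: 4}):
    # stub standing in for the API call; returns the number of docs on a page
    return pages.get(page, 0)

total = 0
page = 0
while True:
    n = fetch_page(page)
    total += n
    if n < 10:   # a short page means we've reached the last page
        break
    page += 1
```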
# 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
hip_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&sort=oldest&api-key=1a25289d587a49b7ba8128badd7088a2')
hip_data = hip_response.json()
first_hipster = hip_data['response']['docs'][0]
print('The first hipster article of 1995 was titled', first_hipster['headline']['main'] + '.\nCheck it out:\n' + first_hipster['lead_paragraph'])
# 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?
#
# Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
#
# Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
# +
decade_range = range(5)
date_attributes = []
for decade in decade_range:
date_attributes.append('begin_date=' + str(1950 + decade*10) +'0101&end_date=' + str(1959 + decade*10) + '1231')
date_attributes.append('begin_date=20100101')
for date in date_attributes:
gm_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay+marriage"&' + date + '&api-key=1a25289d587a49b7ba8128badd7088a2')
gm_data = gm_response.json()
hits = gm_data['response']['meta']['hits']
print(hits)
# -
# 6) What section talks about motorcycles the most?
#
# Tip: You'll be using facets
# +
#I searched for motorcyle or motorcycles
# for motorcyles:
# {'count': 10, 'term': 'New York and Region'}
# {'count': 10, 'term': 'New York and Region'}
# {'count': 7, 'term': 'World'}
# {'count': 6, 'term': 'Arts'}
# {'count': 6, 'term': 'Business'}
# {'count': 5, 'term': 'U.S.'}
# for motorcycle:
# {'count': 24, 'term': 'Sports'}
# {'count': 24, 'term': 'Sports'}
# {'count': 20, 'term': 'New York and Region'}
# {'count': 16, 'term': 'U.S.'}
# {'count': 14, 'term': 'Arts'}
# {'count': 8, 'term': 'Business'}
moto_response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcyle+OR+motorcyles&facet_field=section_name&api-key=1a25289d587a49b7ba8128badd7088a2')
moto_data = moto_response.json()
# #temp. Answer: dict
# print(type(moto_data))
# #temp. Answer: ['status', 'copyright', 'response']
# print(moto_data.keys())
# #temp. Answer: dict
# print(type(moto_data['response']))
# #temp. Answer: ['docs', 'meta', 'facets']
# print(moto_data['response'].keys())
# #temp. Answer: dict
# print(type(moto_data['response']['facets']))
# #temp. Answer: 'section_name'
# print(moto_data['response']['facets'].keys())
# #temp. Answer: dict
# print(type(moto_data['response']['facets']['section_name']))
# #temp. Answer:'terms'
# print(moto_data['response']['facets']['section_name'].keys())
# #temp. Answer: list
# print(type(moto_data['response']['facets']['section_name']['terms']))
# #temp. It's a list of dictionaries, with a count and a section name for each one.
# print(moto_data['response']['facets']['section_name']['terms'][0])
sections = moto_data['response']['facets']['section_name']['terms']
the_most = 0
for section in sections:
if section['count'] > the_most:
the_most = section['count']
the_most_name = section['term']
print(the_most_name, 'talks about motorcycles the most, with', the_most, 'articles.')
# #Q: WHY DO SO FEW ARTICLES MENTION MOTORCYCLES?
# #A: MAYBE BECAUSE MANY ARTICLES AREN'T IN SECTIONS?
# #temp. Answer: {'hits': 312, 'offset': 0, 'time': 24}
# print(moto_data['response']['meta'])
# #temp. Answer: ['document_type', 'blog', 'multimedia', 'pub_date',
# #'news_desk', 'keywords', 'byline', '_id', 'headline', 'snippet',
# #'source', 'lead_paragraph', 'web_url', 'print_page', 'slideshow_credits',
# #'abstract', 'section_name', 'word_count', 'subsection_name', 'type_of_material']
# print(moto_data['response']['docs'][0].keys())
# #temp. Answer: Sports
# #print(moto_data['response']['docs'][0]['section_name'])
# #temp.
# # Sports
# # Sports
# # Sports
# # None
# # Multimedia/Photos
# # Multimedia/Photos
# # Multimedia/Photos
# # New York and Region
# # None
# # New York and Region
# # New York and Region
# for article in moto_data['response']['docs']:
# print(article['section_name'])
# #temp. 10. There are only 10 because only 10 show up in search results.
# print(len(moto_data['response']['docs']))
# -
# 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
# <p>Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
# +
offsets = range(3)
picks_by_group = []
for offset in offsets:
picks_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=' + str(offset * 20) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
picks_data = picks_response.json()
results = picks_data['results']
picks = 0
for result in results:
if result['critics_pick'] == 1:
picks = picks + 1
picks_by_group.append(picks)
print('In the most recent', offset * 20, 'to', offset * 20 + 20, 'movies, the critics liked', picks, 'movies.')
print('In the past', (offset + 1) * 20, 'reviews, the critics liked', sum(picks_by_group), 'movies.')
print('')
# +
# #temp. Answer: ['has_more', 'status', 'results', 'copyright', 'num_results']
# print(picks_data.keys())
# #temp. 20
# #not what we're looking for
# print(picks_data['num_results'])
# #temp. Answer: list
# print(type(picks_data['results']))
# #temp.
# print(picks_data['results'][0])
# #temp. Answer: ['display_title', 'headline', 'mpaa_rating', 'critics_pick',
# #'publication_date', 'link', 'summary_short', 'byline', 'opening_date', 'multimedia', 'date_updated']
# print(picks_data['results'][0].keys())
# -
# 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
# +
offsets = range(2)
bylines = []
for offset in offsets:
picks_response = requests.get('https://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=' + str(offset * 20) + '&api-key=1a25289d587a49b7ba8128badd7088a2')
picks_data = picks_response.json()
for result in picks_data['results']:
bylines.append(result['byline'])
print(bylines)
# +
# I tried Counter, but there were two most common results, and it only gave me one.
# from collections import Counter
# print(collections.Counter(bylines))
# print(Counter(bylines).most_common(1))
# +
sorted_bylines = (sorted(bylines))
numbers = range(40)
most_bylines = 0
for number in numbers:
if most_bylines < sorted_bylines.count(sorted_bylines[number]):
most_bylines = sorted_bylines.count(sorted_bylines[number])
for number in numbers:
if most_bylines == sorted_bylines.count(sorted_bylines[number]) and sorted_bylines[number] != sorted_bylines[number - 1]:
print(sorted_bylines[number], sorted_bylines.count(sorted_bylines[number]))
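# The tie issue noted above (Counter(...).most_common(1) returning only one of two tied bylines) can be handled by filtering on the top count directly; a sketch with made-up byline strings:

```python
from collections import Counter

bylines = ['A. B.', 'C. D.', 'A. B.', 'C. D.', 'E. F.']
counts = Counter(bylines)
top = max(counts.values())
# keep every byline tied at the top count, not just the first one
most_common = sorted(b for b, c in counts.items() if c == top)
```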
| 05/NYT-API_graded.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''base'': conda)'
# name: python3
# ---
# ## Bayesian Optimisation Verification
# +
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm
from scipy.interpolate import interp1d
from scipy import interpolate
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy import stats
from scipy.stats import norm
from sklearn.metrics.pairwise import euclidean_distances
from scipy.spatial.distance import cdist
from scipy.optimize import fsolve
import math
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
# -
# ## Trial on TiOx/SiOx
# Temperature vs. S10_HF
#import timestamp from data sheet (time:0~5000s)
address = 'data/degradation.xlsx'
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [0],names = None,nrows = 5000)
df_time = df.values.tolist()
# +
#import data sheet at 85 C (time:0~5000s)
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [3],names = None,nrows = 5000)
df_85 = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [3],names = None,nrows = 5000)
df_85s = df.values.tolist()
# +
#import data sheet at 120 C (time:0~5000s)
df = pd.read_excel(address,sheet_name = 'normal data',usecols = [3],names = None,nrows = 5000)
df_120 = df.values.tolist()
df = pd.read_excel(address,sheet_name = 'smooth data',usecols = [3],names = None,nrows = 5000)
df_120s = df.values.tolist()
# -
# select 7 sample points from the normal data (log-spaced indices chosen below)
x_normal = np.array(df_time).T
y_normal = np.array(df_85).T
x_normal = x_normal.reshape((5000))
y_normal = y_normal.reshape((5000))
def plot (X,X_,y_mean,y,y_cov,gp,kernel):
#plot function
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean - np.sqrt(np.diag(y_cov)),y_mean + np.sqrt(np.diag(y_cov)),alpha=0.5, color='k')
plt.scatter(X[:, 0], y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
# +
# Preparing training set
# For log scaled plot
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.5
length_scale_bounds_MIN = 1e-4
for length_scale_bounds_MAX in (0.3,0.5,0.7):
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.00000001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
plot (X,X_,y_mean,y,y_cov,gp,kernel)
# +
# Find the minimum value in the bound
# 5000 * 5000
# Find minimum value in the last row as the minimum value for the bound
def ucb(X , gp, dim, delta):
"""
Calculates the GP-UCB acquisition function values
Inputs: gp: The Gaussian process, also contains all data
x:The point at which to evaluate the acquisition function
Output: acq_value: The value of the aquisition function at point x
"""
mean, var = gp.predict(X[:, np.newaxis], return_cov=True)
#var.flags['WRITEABLE']=True
#var[var<1e-10]=0
mean = np.atleast_2d(mean).T
var = np.atleast_2d(var).T
beta = 2*np.log(np.power(5000,2.1)*np.square(math.pi)/(3*delta))
return mean - np.sqrt(beta)* np.sqrt(np.diag(var))
acp_value = ucb(X_, gp, 0.1, 5)
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
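# The acquisition that `ucb` minimises is a lower confidence bound, mean - sqrt(beta)*std. A minimal sketch of the same idea on toy 1-D data, with beta fixed for illustration rather than derived from the delta-dependent formula above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 1, 8)[:, None]
y = np.sin(6 * X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

X_grid = np.linspace(0, 1, 100)[:, None]
mean, std = gp.predict(X_grid, return_std=True)
beta = 4.0                        # fixed for illustration
lcb = mean - np.sqrt(beta) * std  # lower confidence bound
x_next = X_grid[np.argmin(lcb)]   # next point to sample
```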
# +
# Preparing training set
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.4
length_scale_bounds_MIN = 1e-4
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.0001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
print (min(ucb_y_min))
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.plot(X_, ucb_y_min, 'x', lw=3, zorder=9)
# plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
# +
acp_value = ucb(X_, gp, 0.1, 5)
X_min = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(np.argmin(acp_value[-1]))
print(min(acp_value[-1]))
# +
# Iterate i times with mins value point of each ucb bound
# Initiate with 7 data points, apply log transformation to them
x_loop = np.array([1,10,32,100,316,1000,3162])
X = x_normal[x_loop].reshape(x_loop.size)
Y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
MAX_x_value = np.log10(5000)
X_ = np.linspace(0,MAX_x_value, 5000)
# Kernel setting
length_scale_bounds_MAX = 0.5
length_scale_bounds_MIN = 1e-4
kernel = 1.0 * RBF(length_scale=20,length_scale_bounds=(length_scale_bounds_MIN, length_scale_bounds_MAX)) + WhiteKernel(noise_level=0.0001)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, Y)
y_mean, y_cov = gp.predict(X_[:, np.newaxis], return_cov=True)
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
plt.figure()
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], Y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.tight_layout()
# Change i to set extra data points
i=0
while i < 5 :
acp_value = ucb(X_, gp, 0.1, 5)
ucb_y_min = acp_value[-1]
index = np.argmin(acp_value[-1])
print(acp_value[-1,X_min])
print(min(acp_value[-1]))
# Protection to stop equal x value
while index in x_loop:
index = index - 50
x_loop = np.append(x_loop, index)
x_loop = np.sort(x_loop)
print (x_loop)
X = x_normal[x_loop].reshape(x_loop.size)
Y = y_normal[x_loop]
X = X.reshape(x_loop.size,1)
X = np.log10(X)
gp = GaussianProcessRegressor(kernel=kernel,alpha=0.0).fit(X, Y)
plt.plot(X_, y_mean, 'k', lw=3, zorder=9)
plt.fill_between(X_, y_mean, ucb_y_min,alpha=0.5, color='k')
plt.scatter(X[:, 0], Y, c='r', s=50, zorder=10, edgecolors=(0, 0, 0))
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.title('cycle %d'%(i), color = 'white')
plt.tight_layout()
plt.show()
i+=1
print('X:', X, '\nY:', Y)
s = interpolate.InterpolatedUnivariateSpline(x_loop,Y)
x_uni = np.arange(0,5000,1)
y_uni = s(x_uni)
# Plot figure
plt.plot(df_120s,'-',color = 'gray') # smoothed reference curve
plt.plot(x_uni,y_uni,'-',color = 'red')
plt.plot(x_loop, Y,'x',color = 'black')
plt.tick_params(axis='y', colors = 'white')
plt.tick_params(axis='x', colors = 'white')
plt.ylabel('Lifetime',color = 'white')
plt.xlabel('Time',color = 'white')
plt.title('cycle %d'%(i+1), color = 'white')
plt.show()
| BO_trials/BO_all_temp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.0
# language: julia
# name: julia-1.0
# ---
# # Knet RNN example
# **TODO**: Use the new RNN interface, add dropout?
using Pkg; haskey(Pkg.installed(),"Knet") || Pkg.add("Knet")
using Knet
True=true # so we can read the python params
include("common/params_lstm.py")
gpu()
println("OS: ", Sys.KERNEL)
println("Julia: ", VERSION)
println("Knet: ", Pkg.installed()["Knet"])
println("GPU: ", read(`nvidia-smi --query-gpu=name --format=csv,noheader`,String))
# define model
function initmodel()
rnnSpec,rnnWeights = rnninit(EMBEDSIZE,NUMHIDDEN; rnnType=:gru)
inputMatrix = KnetArray(xavier(Float32,EMBEDSIZE,MAXFEATURES))
outputMatrix = KnetArray(xavier(Float32,2,NUMHIDDEN))
return rnnSpec,(rnnWeights,inputMatrix,outputMatrix)
end;
# +
# define loss and its gradient
function predict(weights, inputs, rnnSpec)
rnnWeights, inputMatrix, outputMatrix = weights # (1,1,W), (X,V), (2,H)
indices = permutedims(hcat(inputs...)) # (B,T)
rnnInput = inputMatrix[:,indices] # (X,B,T)
rnnOutput = rnnforw(rnnSpec, rnnWeights, rnnInput)[1] # (H,B,T)
return outputMatrix * rnnOutput[:,:,end] # (2,H) * (H,B) = (2,B)
end
loss(w,x,y,r)=nll(predict(w,x,r),y)
lossgradient = grad(loss);
# -
# load data
include(Knet.dir("data","imdb.jl"))
@time (xtrn,ytrn,xtst,ytst,imdbdict)=imdb(maxlen=MAXLEN,maxval=MAXFEATURES)
for d in (xtrn,ytrn,xtst,ytst); println(summary(d)); end
imdbarray = Array{String}(undef,88584)
for (k,v) in imdbdict; imdbarray[v]=k; end
imdbarray[xtrn[1]]
# prepare for training
weights = nothing; Knet.gc(); # Reclaim memory from previous run
rnnSpec,weights = initmodel()
optim = optimizers(weights, Adam; lr=LR, beta1=BETA_1, beta2=BETA_2, eps=EPS);
# cold start
@time for (x,y) in minibatch(xtrn,ytrn,BATCHSIZE;shuffle=true)
grads = lossgradient(weights,x,y,rnnSpec)
update!(weights, grads, optim)
end
# prepare for training
weights = nothing; Knet.gc(); # Reclaim memory from previous run
rnnSpec,weights = initmodel()
optim = optimizers(weights, Adam; lr=LR, beta1=BETA_1, beta2=BETA_2, eps=EPS);
# 29s
@info("Training...")
@time for epoch in 1:EPOCHS
@time for (x,y) in minibatch(xtrn,ytrn,BATCHSIZE;shuffle=true)
grads = lossgradient(weights,x,y,rnnSpec)
update!(weights, grads, optim)
end
end
@info("Testing...")
@time accuracy(weights, minibatch(xtst,ytst,BATCHSIZE), (w,x)->predict(w,x,rnnSpec))
| examples/DeepLearningFrameworks/Knet_RNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "8d17eb95-472d-4780-832e-fef7af4bac82"}
# # Twitter Bot Detection
# ##### by <NAME> and <NAME>
# In this notebook, we are using a dataset from [Kaggle](https://www.kaggle.com/davidmartngutirrez/twitter-bots-accounts) and [Botometer](https://botometer.osome.iu.edu/bot-repository/datasets.html), containing in total 26K Twitter user IDs. Using the Tweepy Python library, we retrieve the user information we considered useful for determining the type of an account. We have numerical, categorical and also textual features, so we train 3 models: one for the numerical/categorical features, another for the textual features, and one more for the combination of the two.
# We also used different classifiers and different evaluation metrics in order to compare the results.
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "0caecced-08db-4c7b-820c-868e8c3b52b5"}
# #### Install Dependencies
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3d3a7794-0a3e-4739-b5cc-595b0e7f1cad"}
# !sudo pip install --upgrade pip
# !sudo pip install pyspark --upgrade
# !pip install nltk
# !python -m nltk.downloader all
# !pip install stopwords
# !pip install mlflow
# !pip install langdetect
# !pip install tweepy
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ebac9c8f-2f49-4216-85d4-a1f2b04f9f12"}
# #### Import Packages
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "476022d0-a369-4998-ae2b-70778d40a43e"}
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
import csv
import langdetect
import tweepy
from tweepy import OAuthHandler
import mlflow
import string
import pyspark
from pyspark.sql import *
from pyspark.sql.types import *
from pyspark.sql.functions import *
from pyspark.sql.session import SparkSession
from pyspark import SparkContext, SparkConf, SparkFiles
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.classification import FMClassifier
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import StringIndexer, VectorAssembler, StandardScaler, OneHotEncoder
from pyspark.ml.feature import StopWordsRemover, Word2Vec, RegexTokenizer, Tokenizer, MinMaxScaler
from pyspark.sql.functions import udf, col
import datetime
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import re
from IPython.display import HTML
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "6f53700d-ed62-494a-8159-62d4aca5eb6c"}
# #### Some globals
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "91afec8c-f557-4738-aabf-2e6e05fcb890"}
now = datetime.datetime.now()
nltk.download('stopwords')
stop_words = set(stopwords.words('english'))
RANDOM_SEED = 42 # For reproducibility
best_AUC = 0  # Tracks the best test-set AUC across all models
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4cad6980-cf32-444c-afe5-34a71073c5dc"}
# #### Test that everything is ok
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "a68c0c63-49c1-477a-b13c-cc076ab40ec8"}
spark
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "00d05cef-4cdf-42ff-ba06-cca3e10d1d15"}
# #### Import the Dataset
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "323de591-4fb6-46cb-8f75-53a86505fafb"}
bot_df = spark.read.format("csv").option("header","true").load("dbfs:/FileStore/shared_uploads/<EMAIL>1.it/twitter_human_bots_dataset_clean.csv")
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "cdb6e81d-cd3e-4a84-99de-6ee369e47609"}
# #### Remove Duplicates
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d77cc69f-41b4-42cc-8a81-4302dfd86d14"}
bot_df = bot_df.dropDuplicates(["id"])  # assign back: DataFrame transformations return a new DataFrame
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c7971459-9614-4182-bf66-176355c8f9f0"}
# ### Some UDFs
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "9de5fc67-b9aa-4e66-8994-c360b69c50f4"}
# Clean text and perform Porter stemming
def clean_text(text):
    porter = PorterStemmer()
    if not text:
        return ""
    row = text.lower().strip()
    row = re.sub(r'[^\w\s]', ' ', row)  # replace punctuation with spaces
    row = re.sub(r'\_', ' ', row)       # underscores match \w, so remove them separately
    # Stem each token and rebuild the string
    return " ".join(porter.stem(w) for w in row.split())
spark.udf.register("clean_text",clean_text)
# Get the number of days since the account was created
def to_days(then):
    created = datetime.datetime.strptime(then, '%Y-%m-%d %H:%M:%S').date()
    return (datetime.datetime.now().date() - created).days
to_days_UDF = spark.udf.register("to_daysUDF", to_days)
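The two UDFs can be exercised locally before registering them with Spark. The sketch below is a hypothetical pure-Python analogue (the stemming step is left out so it runs without NLTK, and `clean_text_sketch`/`to_days_sketch` are names introduced here, not part of the notebook):

```python
import datetime
import re

def clean_text_sketch(text):
    # Simplified version of the clean_text UDF, without Porter stemming
    if not text:
        return ""
    row = text.lower().strip()
    row = re.sub(r'[^\w\s]', ' ', row)  # punctuation -> space
    row = re.sub(r'_', ' ', row)        # underscores match \w, so strip them separately
    return " ".join(row.split())

def to_days_sketch(then, today=None):
    # Days elapsed since the 'created_at' timestamp
    created = datetime.datetime.strptime(then, '%Y-%m-%d %H:%M:%S').date()
    today = today or datetime.date.today()
    return (today - created).days

print(clean_text_sketch("Hello, Twitter_World!"))                         # hello twitter world
print(to_days_sketch('2020-01-01 00:00:00', datetime.date(2020, 1, 31)))  # 30
```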
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "eaae81df-42ce-4c6f-8771-52654492c865"}
# #### Split the dataframes into Numerical and Textual/Combined
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "6b6570b0-3329-40dd-a8ed-ce5822020ce4"}
# Get the days since account creation
bot_df = bot_df.withColumn("created_at", to_days_UDF(col("created_at")))
# The dataframe for both the Textual and Combined methods
bot_df_comb = bot_df.selectExpr(
    "account_type",
    "cast(follower_count as int) follower_count",
    "cast(friends_count as int) friends_count",
    "cast(listed_count as int) listed_count",
    "cast(statuses_count as int) statuses_count",
    "cast(retweets as float) retweets",
    "cast(with_url as float) with_url",
    "cast(with_mention as float) with_mention",
    "geo_enabled", "verified", "has_extended_profile", "default_profile", "default_profile_image",
    "cast(created_at as int) created_at",
    "cast(avg_cosine as float) avg_cosine",
    "clean_text(description) as description",
    "clean_text(tweet_text) as tweet_text")
# The dataframe for the Numerical Method
bot_df_num = bot_df.selectExpr(
    "account_type",
    "cast(follower_count as int) follower_count",
    "cast(friends_count as int) friends_count",
    "cast(listed_count as int) listed_count",
    "cast(statuses_count as int) statuses_count",
    "cast(retweets as float) retweets",
    "cast(with_url as float) with_url",
    "cast(with_mention as float) with_mention",
    "geo_enabled", "verified", "has_extended_profile", "default_profile", "default_profile_image",
    "cast(created_at as int) created_at",
    "cast(avg_cosine as float) avg_cosine")
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "0b185764-733c-4a13-9294-242237c76879"}
bot_df_num.show(10)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4fc897b0-657b-4a00-8b6b-d0ac05107e21"}
# #### Split features into categories
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2b270f15-ba77-4394-9e9e-d811bb21d19b"}
NUMERICAL_FEATURES = ["follower_count",
"friends_count",
"listed_count",
"statuses_count",
"retweets",
"with_url",
"with_mention",
"created_at",
"avg_cosine"
]
CATEGORICAL_FEATURES = ["geo_enabled",
"verified",
"has_extended_profile",
"default_profile",
"default_profile_image",
]
TEXTUAL_FEATURES = ["description",
"tweet_text"
]
TARGET_VARIABLE = "account_type"
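To make the later pipeline stages concrete, here is a rough pure-Python sketch of what `StringIndexer` and `OneHotEncoder` do to a categorical column. Spark assigns index 0 to the most frequent label, and its encoder drops the last category by default, which is ignored here for clarity; `string_index` and `one_hot` are names invented for this sketch.

```python
from collections import Counter

def string_index(values):
    # StringIndexer-like: labels ordered by descending frequency, most frequent -> 0
    order = [v for v, _ in Counter(values).most_common()]
    mapping = {v: i for i, v in enumerate(order)}
    return [mapping[v] for v in values], mapping

def one_hot(index, size):
    # Dense version of the one-hot vector (Spark stores it sparsely)
    return [1.0 if i == index else 0.0 for i in range(size)]

indexed, mapping = string_index(["human", "bot", "human", "human"])
print(indexed)                            # [0, 1, 0, 0]
print(one_hot(indexed[1], len(mapping)))  # [0.0, 1.0]
```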
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3d8a9488-1895-4138-b6a2-17ca3803ea95"}
# #### Correlation between features and target variable
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "a8ed15b0-e4a4-473b-bb55-4dba449ed058"}
# convert to pandas and to binary
df = bot_df_num.toPandas()
for c in ['geo_enabled', 'verified', 'has_extended_profile', 'default_profile', 'default_profile_image', 'account_type']:
    df[c] = df[c].astype('category').cat.codes
corr_matrix = df.corr()
print(corr_matrix[TARGET_VARIABLE].sort_values(ascending=False))
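pandas' `DataFrame.corr` computes Pearson correlation by default; the formula it applies per column pair can be sketched as:

```python
def pearson(x, y):
    # Pearson correlation: covariance / (std_x * std_y)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(pearson([1, 2, 3], [2, 4, 6]))   # 1.0 (perfectly correlated)
print(pearson([1, 2, 3], [3, 2, 1]))   # -1.0 (perfectly anti-correlated)
```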
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d7a8a519-4d1f-41ae-9812-2c919dddce28"}
# #### Split in Train/Test
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "13f7f153-a02b-4f39-bd53-d17b382a963d"}
bot_df.groupBy(TARGET_VARIABLE).count().show()
train_df_both, test_df_both = bot_df_comb.randomSplit([0.8, 0.2], seed=RANDOM_SEED)
train_df_num, test_df_num = bot_df_num.randomSplit([0.8, 0.2], seed=RANDOM_SEED)
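`randomSplit` draws an independent uniform number per row, so the 80/20 proportions are only approximate; fixing `seed` is what makes the split reproducible. A rough local analogue (the function name is invented for this sketch):

```python
import random

def random_split(rows, weight, seed):
    # Per-row Bernoulli draw, as in Spark's randomSplit (proportions are approximate)
    rng = random.Random(seed)
    train, test = [], []
    for r in rows:
        (train if rng.random() < weight else test).append(r)
    return train, test

a = random_split(list(range(1000)), 0.8, seed=42)
b = random_split(list(range(1000)), 0.8, seed=42)
print(a == b)   # True: same seed, same split
```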
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "63fd9065-e9b0-4294-9e55-cce6c9e5d4fe"}
# #### Evaluate the Testing Results
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "305cab76-f8c5-4c2a-a36d-537e21aba9ee"}
def evaluate_model(predictions, metric="areaUnderROC"):
from pyspark.ml.evaluation import BinaryClassificationEvaluator
evaluator = BinaryClassificationEvaluator(metricName=metric)
return evaluator.evaluate(predictions)
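`areaUnderROC` equals the probability that a randomly chosen positive example is ranked above a randomly chosen negative one (the Mann-Whitney statistic). A small pure-Python reference implementation, for intuition:

```python
def roc_auc(labels, scores):
    # Probability that a positive outranks a negative; ties count half
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))   # 0.75
```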
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f9145013-ca88-494c-90d0-ee9c580fb6a7"}
# #### Pipeline to create the train/test dataframes for both the combined and textual models
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "32c6c197-7088-43fd-bca8-103217c61e69"}
def pipeline_fitter(train, numerical_features, categorical_features, target_variable,with_std=True,with_mean=True):
    # Stages 1-3 compute the Word2Vec embedding of the tweet texts
stage_1 = RegexTokenizer(inputCol="tweet_text", outputCol="tokens", pattern="\\W")
stage_2 = StopWordsRemover(inputCol="tokens", outputCol="filtered_words")
stage_3 = Word2Vec(inputCol="filtered_words", outputCol="feature_vector_text", vectorSize=100)
    # Stages 4-6 compute the Word2Vec embedding of the description
stage_4 = RegexTokenizer(inputCol="description", outputCol="tokens_des", pattern="\\W")
stage_5 = StopWordsRemover(inputCol="tokens_des", outputCol="filtered_words_des")
stage_6 = Word2Vec(inputCol="filtered_words_des", outputCol="feature_vector_des", vectorSize=100)
indexers = [StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c), handleInvalid="keep") for c in categorical_features]
encoder = OneHotEncoder(inputCols=[indexer.getOutputCol() for indexer in indexers],
outputCols=["{0}_encoded".format(indexer.getOutputCol()) for indexer in indexers],
handleInvalid="keep")
label_indexer = StringIndexer(inputCol = target_variable, outputCol = "label")
    # Assemble tweet-text, description and numerical/categorical features for the combined method
assembler_comb = VectorAssembler(inputCols=encoder.getOutputCols() + numerical_features+['feature_vector_des',"feature_vector_text"], outputCol="features_comb")
    # Assemble tweet-text and description features for the textual method
assembler_text = VectorAssembler(inputCols=['feature_vector_des',"feature_vector_text"], outputCol="features_text")
scaler_comb = StandardScaler(inputCol=assembler_comb.getOutputCol(), outputCol="std_"+assembler_comb.getOutputCol(), withStd=with_std, withMean=with_mean)
scaler_text = StandardScaler(inputCol=assembler_text.getOutputCol(), outputCol="std_"+assembler_text.getOutputCol(), withStd=with_std, withMean=with_mean)
# Combine the stages
    stages = indexers + [encoder, label_indexer, stage_1, stage_2, stage_3, stage_4, stage_5, stage_6, assembler_comb, assembler_text, scaler_comb, scaler_text]
pipeline = Pipeline(stages=stages)
transformer = pipeline.fit(train)
df_transformed = transformer.transform(train)
return transformer, df_transformed
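Stages 1-2 (and 4-5) amount to lowercasing, splitting on runs of non-word characters, and dropping stop words. Roughly, with a toy stop-word list (Spark ships a much longer English list):

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "are"}   # toy list for illustration

def tokenize_and_filter(text):
    # RegexTokenizer(pattern="\\W") lowercases and splits on non-word runs;
    # StopWordsRemover then drops the stop words
    tokens = re.split(r'\W+', text.lower())
    return [t for t in tokens if t and t not in STOP_WORDS]

print(tokenize_and_filter("The bot is posting the same link!"))   # ['bot', 'posting', 'same', 'link']
```

Word2Vec (stages 3 and 6) then averages the learned per-token vectors into one fixed-size vector per document.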
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2be26641-f69b-4d45-aa80-9429b685551f"}
transformer,train_df_both = pipeline_fitter(train_df_both, NUMERICAL_FEATURES, CATEGORICAL_FEATURES, TARGET_VARIABLE)
test_df_both=transformer.transform(test_df_both)
# Get the train/test sets for both the Combined and Textual methods
train_df_comb = train_df_both.select(["std_features_comb", "label"])
test_df_comb = test_df_both.select(["std_features_comb", "label"])
train_df_text = train_df_both.select(["std_features_text", "label"])
test_df_text = test_df_both.select(["std_features_text", "label"])
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "cf301da1-8a1a-44e3-97b3-e5e03400d0fb"}
# ## Combined Textual and Numerical/Categorical Features
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ab59d197-10df-4aa6-85b9-dc5f5e8aa7e9"}
# ### 1. Logistic Regression
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "7c909b68-9aee-4beb-8181-a18215469043"}
log_reg = LogisticRegression(featuresCol = "std_features_comb", labelCol = "label", maxIter=100)
log_reg_model = log_reg.fit(train_df_comb)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "abc61db2-9685-497b-9d35-5a75f48a856e"}
test_predictions = log_reg_model.transform(test_df_comb)
test_predictions.select("std_features_comb", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "833e63a6-0705-40f2-bf7f-bc058930b470"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b5f5d258-7d1d-40c7-a6ed-33073f7443c6"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
print("***** Test Set *****")
best_model=log_reg_model
best_AUC=evaluate_model(test_predictions)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "0716b6b5-1af4-434d-9d30-da3484fed67f"}
# ### 2. Decision Tree
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3c51f7c5-82df-43f2-b8c2-e52929f60175"}
dt = DecisionTreeClassifier(featuresCol="std_features_comb", labelCol="label")
dt_model = dt.fit(train_df_comb)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "780600ae-eadc-4383-8f0c-48c65f93ea13"}
test_predictions = dt_model.transform(test_df_comb)
test_predictions.select("std_features_comb", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ff03c8a1-dfc1-4381-98d9-ba337b7f7610"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "9235ca7f-5d28-4e3f-a746-d75c6d3cf1f8"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
print("***** Test Set *****")
if evaluate_model(test_predictions)>best_AUC:
best_model=dt_model
best_AUC=evaluate_model(test_predictions)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "e6ea2038-d05d-4884-9c08-735c7b603c1e"}
# ### 3. Random Forest
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "994b9117-45c0-42aa-b84c-c3f51954311a"}
rf = RandomForestClassifier(featuresCol="std_features_comb", labelCol="label")
rf_model = rf.fit(train_df_comb)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ffaa230f-499b-4076-9458-f6c536e2c6c3"}
test_predictions = rf_model.transform(test_df_comb)
test_predictions.select("std_features_comb", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f22a3337-bb46-44a6-95d7-537f543259cb"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ba9131d6-59a2-4913-af42-4595f56bd205"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
print("***** Test Set *****")
if evaluate_model(test_predictions)>best_AUC:
best_model=rf_model
best_AUC=evaluate_model(test_predictions)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ecc4d9d6-c774-403b-a75c-adc5d3a0fb8c"}
# ### 4. Factorization machines classifier
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "7f41efd9-eddb-4786-bcf6-ca9579b22007"}
fm = FMClassifier(labelCol="label", featuresCol="std_features_comb")
fm_model = fm.fit(train_df_comb)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d6c634f9-b33d-4654-a808-282a7b4ebb94"}
test_predictions = fm_model.transform(test_df_comb)
test_predictions.select("std_features_comb", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4b8a746b-d6ab-4d49-ad2f-818c2e7ec04d"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "13971bef-07ad-4ee1-82cd-9fc38bfc8be8"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
print("***** Test Set *****")
if evaluate_model(test_predictions)>best_AUC:
best_model=fm_model
best_AUC=evaluate_model(test_predictions)
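The best-model bookkeeping repeated after each classifier above could be factored into a small helper. This is a hypothetical refactor (the name `maybe_update_best` is invented here), which also avoids evaluating the same predictions twice:

```python
def maybe_update_best(model, auc, best_model, best_auc):
    # Keep whichever (model, AUC) pair has the higher AUC
    if auc > best_auc:
        return model, auc
    return best_model, best_auc

m, a = maybe_update_best("rf", 0.91, "log_reg", 0.88)
print(m, a)   # rf 0.91
```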
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "83762b88-d95f-4049-893f-d89c9df48e2c"}
# ## Numerical/Categorical Features
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "a1f5b462-8560-47ba-a81d-7fee4f1bf1e6"}
# ### 1. Logistic Regression
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "31447d8a-7888-489b-ad19-446fbb34995d"}
def logistic_regression_pipeline(train,
numerical_features,
categorical_features,
target_variable,
with_std=True,
with_mean=True,
k_fold=5):
# 1.a Create a list of indexers, i.e., one for each categorical feature
indexers = [StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c), handleInvalid="keep") for c in categorical_features]
# 1.b Create the one-hot encoder for the list of features just indexed (this encoder will keep any unseen label in the future)
encoder = OneHotEncoder(inputCols=[indexer.getOutputCol() for indexer in indexers],
outputCols=["{0}_encoded".format(indexer.getOutputCol()) for indexer in indexers],
handleInvalid="keep")
# 1.c Indexing the target column (i.e., transform human/bot into 1/0) and rename it as "label"
label_indexer = StringIndexer(inputCol = target_variable, outputCol = "label")
# 1.d Assemble all the features (both one-hot-encoded categorical and numerical) into a single vector
assembler = VectorAssembler(inputCols=encoder.getOutputCols() + numerical_features, outputCol="features")
# 2.a Create the StandardScaler
scaler = StandardScaler(inputCol=assembler.getOutputCol(), outputCol="std_"+assembler.getOutputCol(), withStd=with_std, withMean=with_mean)
# ...
# 3 Populate the stages of the pipeline with all the preprocessing steps
stages = indexers + [encoder] + [label_indexer] + [assembler] + [scaler] #+ ...
# 4. Create the logistic regression transformer
    log_reg = LogisticRegression(featuresCol="std_features", labelCol="label", maxIter=100) # featuresCol is the scaler's output column
# 5. Add the logistic regression transformer to the pipeline stages (i.e., the last one)
stages += [log_reg]
# 6. Set up the pipeline
pipeline = Pipeline(stages=stages)
    # A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
    # We use a ParamGridBuilder to construct a grid of parameters to search over.
# With 3 values for log_reg.regParam ($\lambda$) and 3 values for log_reg.elasticNetParam ($\alpha$),
# this grid will have 3 x 3 = 9 parameter settings for CrossValidator to choose from.
param_grid = ParamGridBuilder()\
.addGrid(log_reg.regParam, [0.0, 0.05, 0.1]) \
.addGrid(log_reg.elasticNetParam, [0.0, 0.5, 1.0]) \
.build()
cross_val = CrossValidator(estimator=pipeline,
estimatorParamMaps=param_grid,
evaluator=BinaryClassificationEvaluator(metricName="areaUnderROC"), # default = "areaUnderROC", alternatively "areaUnderPR"
numFolds=k_fold,
collectSubModels=True # this flag allows us to store ALL the models trained during k-fold cross validation
)
# Run cross-validation, and choose the best set of parameters.
cv_model = cross_val.fit(train)
return cv_model
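ParamGridBuilder simply takes the Cartesian product of the value lists, so the grid here contains nine candidate settings; a plain-Python equivalent:

```python
from itertools import product

reg_params = [0.0, 0.05, 0.1]        # lambda
elastic_params = [0.0, 0.5, 1.0]     # alpha
grid = [{"regParam": r, "elasticNetParam": a}
        for r, a in product(reg_params, elastic_params)]
print(len(grid))   # 9 candidate settings
```

With `numFolds=5`, CrossValidator therefore trains 9 x 5 = 45 models before refitting the best setting on the full training set.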
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d28b5a1a-c8dd-48c3-aa94-55e9cb9bbbb5"}
# This function summarizes all the models trained during k-fold cross validation
def summarize_all_models(cv_models):
for k, models in enumerate(cv_models):
print("*************** Fold #{:d} ***************\n".format(k+1))
for i, m in enumerate(models):
print("--- Model #{:d} out of {:d} ---".format(i+1, len(models)))
print("\tParameters: lambda=[{:.3f}]; alpha=[{:.3f}] ".format(m.stages[-1]._java_obj.getRegParam(), m.stages[-1]._java_obj.getElasticNetParam()))
print("\tModel summary: {}\n".format(m.stages[-1]))
print("***************************************\n")
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c73f3357-908b-456a-881e-027a88099ef6"}
cv_model = logistic_regression_pipeline(train_df_num, NUMERICAL_FEATURES, CATEGORICAL_FEATURES, TARGET_VARIABLE)
summarize_all_models(cv_model.subModels)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "18674779-c37c-4dd0-b765-a4862b57a43c"}
for i, avg_roc_auc in enumerate(cv_model.avgMetrics):
print("Avg. ROC AUC computed across k-fold cross validation for model setting #{:d}: {:.3f}".format(i+1, avg_roc_auc))
print("Best model according to k-fold cross validation: lambda=[{:.3f}]; alpha=[{:.3f}]".
format(cv_model.bestModel.stages[-1]._java_obj.getRegParam(),
cv_model.bestModel.stages[-1]._java_obj.getElasticNetParam(),
)
)
print(cv_model.bestModel.stages[-1])
# `bestModel` is the best resulting model according to k-fold cross validation, which is also entirely retrained on the whole `train_df`
training_result = cv_model.bestModel.stages[-1].summary
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "5516e963-5aca-464d-827a-01386a26fa22"}
# Make predictions on the test set (`cv_model` contains the best model according to the result of k-fold cross validation)
# `test_df` will follow exactly the same pipeline defined above, and already fit to `train_df`
test_predictions = cv_model.transform(test_df_num)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "794a0b49-18a8-45c4-bf2a-1b265e48b7ec"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "7bdf67e5-c3be-4d2f-819b-8fc6cd1c3803"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
if evaluate_model(test_predictions)>best_AUC:
best_model=cv_model
best_AUC=evaluate_model(test_predictions)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2c2e90ef-3045-4e68-96ac-b0383c13278e"}
# #### Plots
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "276b7d35-502e-4268-a81a-15cd0d798fe0"}
plt.figure(figsize=(5,5))
# roc
#plt.subplot(2, 6, 1)
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(training_result.roc.select('FPR').collect(),
training_result.roc.select('TPR').collect())
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.show()
#plt.subplot(2, 6, 2)
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(training_result.pr.select('recall').collect(),
training_result.pr.select('precision').collect())
plt.xlabel('recall')
plt.ylabel('precision')
plt.show()
#plt.subplot(2, 6, 3)
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(training_result.precisionByThreshold.select('threshold').collect(),
training_result.precisionByThreshold.select('precision').collect())
plt.xlabel('threshold')
plt.ylabel('precision')
plt.show()
#plt.subplot(2, 6, 4)
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(training_result.recallByThreshold.select('threshold').collect(),
training_result.recallByThreshold.select('recall').collect())
plt.xlabel('threshold')
plt.ylabel('recall')
plt.show()
#plt.subplot(2, 6, 5)
plt.plot([0, 1], [0, 1], 'r--')
plt.plot(training_result.fMeasureByThreshold.select('threshold').collect(),
training_result.fMeasureByThreshold.select('F-Measure').collect())
plt.xlabel('threshold')
plt.ylabel('F-Measure')
plt.show()
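The by-threshold curves above come from sweeping a decision threshold over the predicted probabilities. For a single threshold, the quantities can be sketched as (the function name is invented for this illustration):

```python
def prf_at_threshold(labels, scores, t):
    # Precision, recall and F-measure when predicting 1 iff score >= t
    tp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 1)
    fp = sum(1 for y, s in zip(labels, scores) if s >= t and y == 0)
    fn = sum(1 for y, s in zip(labels, scores) if s < t and y == 1)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

print(prf_at_threshold([1, 0, 1, 0], [0.9, 0.8, 0.4, 0.1], 0.5))   # (0.5, 0.5, 0.5)
```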
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f0865f76-b2a7-446b-8d24-81ed2fb8fb2a"}
# ### 2. Decision Tree
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4481afa9-4259-4654-8ea7-a3b359afbc2e"}
# This function defines the general pipeline for the decision tree
def decision_tree_pipeline(train,
numerical_features,
categorical_features,
target_variable,
with_std=True,
with_mean=True,
k_fold=5):
indexers = [StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c), handleInvalid="keep") for c in categorical_features]
# Indexing the target column (i.e., transform human/bot into 1/0) and rename it as "label"
label_indexer = StringIndexer(inputCol = target_variable, outputCol = "label")
# Assemble all the features (both one-hot-encoded categorical and numerical) into a single vector
assembler = VectorAssembler(inputCols=[indexer.getOutputCol() for indexer in indexers] + numerical_features, outputCol="features")
# Populate the stages of the pipeline with all the preprocessing steps
stages = indexers + [label_indexer] + [assembler] # + ...
# Create the decision tree transformer
dt = DecisionTreeClassifier(featuresCol="features", labelCol="label") # change `featuresCol=std_features` if scaler is used
# 5. Add the decision tree transformer to the pipeline stages (i.e., the last one)
stages += [dt]
# 6. Set up the pipeline
pipeline = Pipeline(stages=stages)
    # A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
    # We use a ParamGridBuilder to construct a grid of parameters to search over.
    # With 3 values for dt.maxDepth and 2 values for dt.impurity,
    # this grid will have 3 x 2 = 6 parameter settings for CrossValidator to choose from.
param_grid = ParamGridBuilder()\
.addGrid(dt.maxDepth, [3, 5, 8]) \
.addGrid(dt.impurity, ["gini", "entropy"]) \
.build()
cross_val = CrossValidator(estimator=pipeline,
estimatorParamMaps=param_grid,
evaluator=BinaryClassificationEvaluator(metricName="areaUnderROC"), # default = "areaUnderROC", alternatively "areaUnderPR"
numFolds=k_fold,
collectSubModels=True # this flag allows us to store ALL the models trained during k-fold cross validation
)
# Run cross-validation, and choose the best set of parameters.
cv_model = cross_val.fit(train)
return cv_model
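The two `impurity` options in the grid measure node impurity differently; for a vector of class probabilities p they reduce to:

```python
import math

def gini(p):
    # Gini impurity: 1 - sum(p_i^2); 0 for a pure node
    return 1.0 - sum(x * x for x in p)

def entropy(p):
    # Shannon entropy in bits: -sum(p_i * log2(p_i)); 0 for a pure node
    return -sum(x * math.log2(x) for x in p if x > 0)

print(gini([0.5, 0.5]), entropy([0.5, 0.5]))   # 0.5 1.0
```

Both are maximized by a 50/50 split and zero for a pure node; in practice they tend to produce similar trees.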
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "76e9a951-a158-4bcb-a7f5-5b1b63b2df12"}
# This function summarizes all the models trained during k-fold cross validation
def summarize_all_models(cv_models):
for k, models in enumerate(cv_models):
print("*************** Fold #{:d} ***************\n".format(k+1))
for i, m in enumerate(models):
print("--- Model #{:d} out of {:d} ---".format(i+1, len(models)))
print("\tParameters: maxDepth=[{:d}]; impurity=[{:s}] ".format(m.stages[-1]._java_obj.getMaxDepth(), m.stages[-1]._java_obj.getImpurity()))
print("\tModel summary: {}\n".format(m.stages[-1]))
print("***************************************\n")
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "880f4b4f-9371-477b-894c-74e2dc3e3c68"}
cv_model = decision_tree_pipeline(train_df_num, NUMERICAL_FEATURES, CATEGORICAL_FEATURES, TARGET_VARIABLE)
summarize_all_models(cv_model.subModels)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "e408890b-e72b-4ea2-b6e7-77f73f7a166b"}
training_result = 0
for i, avg_roc_auc in enumerate(cv_model.avgMetrics):
print("Avg. ROC AUC computed across k-fold cross validation for model setting #{:d}: {:.3f}".format(i+1, avg_roc_auc))
if training_result < avg_roc_auc:
training_result = avg_roc_auc
print("Best model according to k-fold cross validation: maxDepth=[{:d}]; impurity=[{:s}]".
format(cv_model.bestModel.stages[-1]._java_obj.getMaxDepth(),
cv_model.bestModel.stages[-1]._java_obj.getImpurity(),
)
)
print(cv_model.bestModel.stages[-1])
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "27417cb0-5ea5-44fb-a85c-0ef7ff52df1e"}
# Make predictions on the test set (`cv_model` contains the best model according to the result of k-fold cross validation)
# `test_df` will follow exactly the same pipeline defined above, and already fit to `train_df`
test_predictions = cv_model.transform(test_df_num)
test_predictions.select("features", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "8c92abf5-d50b-44f4-a0fa-dd6be9769372"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2d70d0ae-51b6-465b-9f0b-45abb8e92c31"}
print("***** Test Set *****")
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(evaluate_model(test_predictions)))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
if evaluate_model(test_predictions)>best_AUC:
best_model=cv_model
best_AUC=evaluate_model(test_predictions)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b0d63dc3-f0e3-4d17-98cc-1c0275125f0f"}
# ### 3. Random Forests
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ea8f6648-329e-4b01-8c0f-daa60b7bc33a"}
# This function defines the general pipeline for the random forest
def random_forest_pipeline(train,
numerical_features,
categorical_features,
target_variable,
with_std=True,
with_mean=True,
k_fold=5):
# Configure a random forest pipeline, which consists of the following stages:
indexers = [StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c), handleInvalid="keep") for c in categorical_features]
# Indexing the target column (i.e., transform human/bot into 1/0) and rename it as "label"
label_indexer = StringIndexer(inputCol = target_variable, outputCol = "label")
# Assemble all the features (both one-hot-encoded categorical and numerical) into a single vector
assembler = VectorAssembler(inputCols=[indexer.getOutputCol() for indexer in indexers] + numerical_features, outputCol="features")
# Populate the stages of the pipeline with all the preprocessing steps
stages = indexers + [label_indexer] + [assembler] # + ...
# Create the random forest transformer
rf = RandomForestClassifier(featuresCol="features", labelCol="label") # change `featuresCol=std_features` if scaler is used
# 5. Add the random forest transformer to the pipeline stages (i.e., the last one)
stages += [rf]
# 6. Set up the pipeline
pipeline = Pipeline(stages=stages)
    # A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
    # We use a ParamGridBuilder to construct a grid of parameters to search over.
# With 3 values for rf.maxDepth and 3 values for rf.numTrees
# this grid will have 3 x 3 = 9 parameter settings for CrossValidator to choose from.
param_grid = ParamGridBuilder()\
.addGrid(rf.maxDepth, [3, 5, 8]) \
.addGrid(rf.numTrees, [10, 50, 100]) \
.build()
cross_val = CrossValidator(estimator=pipeline,
estimatorParamMaps=param_grid,
evaluator=BinaryClassificationEvaluator(metricName="areaUnderROC"), # default = "areaUnderROC", alternatively "areaUnderPR"
numFolds=k_fold,
collectSubModels=True # this flag allows us to store ALL the models trained during k-fold cross validation
)
# Run cross-validation, and choose the best set of parameters.
cv_model = cross_val.fit(train)
return cv_model
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b750eb1f-989a-4d75-9c68-d0fbc281590e"}
cv_model = random_forest_pipeline(train_df_num, NUMERICAL_FEATURES, CATEGORICAL_FEATURES, TARGET_VARIABLE)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "cdd8f12f-aec5-4e35-a0fa-3b0579e5c713"}
training_result = 0
for i, avg_roc_auc in enumerate(cv_model.avgMetrics):
print("Avg. ROC AUC computed across k-fold cross validation for model setting #{:d}: {:.3f}".format(i+1, avg_roc_auc))
if avg_roc_auc > training_result:
training_result = avg_roc_auc
print("Best model according to k-fold cross validation: maxDepth=[{:d}]".format(
cv_model.bestModel.stages[-1]._java_obj.getMaxDepth()))
print(cv_model.bestModel.stages[-1])
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "258887b1-779f-4bf7-82a7-6980c81e7d53"}
#training_result = cv_model.bestModel.stages[-1].summary
# Make predictions on the test set (`cv_model` contains the best model according to the result of k-fold cross validation)
# `test_df` will follow exactly the same pipeline defined above, and already fit to `train_df`
test_predictions = cv_model.transform(test_df_num)
test_predictions.select("features", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "269ec37c-47d3-499d-8983-4fb1a1cdf677"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "05432cff-2205-4e17-99c7-ccc4a6d5edf6"}
print("***** Test Set *****")
roc_auc = evaluate_model(test_predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = cv_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2cd622a9-9193-47d7-aa03-5640d6910233"}
# ###4. Factorization Machines
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "c644afa9-fbef-4387-8540-a1b17484079a"}
def fm_pipeline(train,
numerical_features,
categorical_features,
target_variable,
with_std=True,
with_mean=True,
k_fold=5):
# 1.a Create a list of indexers, i.e., one for each categorical feature
indexers = [StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c), handleInvalid="keep") for c in categorical_features]
# 1.b Create the one-hot encoder for the list of features just indexed (this encoder will keep any unseen label in the future)
encoder = OneHotEncoder(inputCols=[indexer.getOutputCol() for indexer in indexers],
outputCols=["{0}_encoded".format(indexer.getOutputCol()) for indexer in indexers],
handleInvalid="keep")
# Indexing the target column (i.e., transform human/bot into 1/0) and rename it as "label"
label_indexer = StringIndexer(inputCol=target_variable, outputCol="label")
# Assemble all the features (both one-hot-encoded categorical and numerical) into a single vector
assembler = VectorAssembler(inputCols=[indexer.getOutputCol() for indexer in indexers] + numerical_features, outputCol="features")
featureScaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
stages = indexers + [encoder] + [label_indexer] + [assembler] + [featureScaler]
fm = FMClassifier(labelCol="label", featuresCol="scaledFeatures")
# 5. Add the factorization machines classifier to the pipeline stages (i.e., the last one)
stages += [fm]
# 6. Set up the pipeline
pipeline = Pipeline(stages=stages)
# A CrossValidator requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
# We use a ParamGridBuilder to construct a grid of parameters to search over.
# With 4 values for fm.stepSize, this grid will have 4 parameter settings for CrossValidator to choose from.
param_grid = ParamGridBuilder()\
.addGrid(fm.stepSize, [0.001,0.002,0.005,0.01]) \
.build()
cross_val = CrossValidator(estimator=pipeline,
estimatorParamMaps=param_grid,
evaluator=BinaryClassificationEvaluator(metricName="areaUnderROC"), # default = "areaUnderROC", alternatively "areaUnderPR"
numFolds=k_fold,
collectSubModels=True # this flag allows us to store ALL the models trained during k-fold cross validation
)
# Run cross-validation, and choose the best set of parameters.
cv_model = cross_val.fit(train)
return cv_model
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "212cbb6e-ae42-4cd8-8bb2-b1ced8a8cb2e"}
cv_model = fm_pipeline(train_df_num, NUMERICAL_FEATURES, CATEGORICAL_FEATURES, TARGET_VARIABLE)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "b5e120ff-a8ae-43e3-8940-2b442bff3269"}
training_result = 0
for i, avg_roc_auc in enumerate(cv_model.avgMetrics):
print("Avg. ROC AUC computed across k-fold cross validation for model setting #{:d}: {:.3f}".format(i+1, avg_roc_auc))
if avg_roc_auc > training_result:
training_result = avg_roc_auc
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f43e3f24-ded0-4802-84b7-7d9635a9ae37"}
#training_result = cv_model.bestModel.stages[-1].summary
# Make predictions on the test set (`cv_model` contains the best model according to the result of k-fold cross validation)
# `test_df` will follow exactly the same pipeline defined above, and already fit to `train_df`
test_predictions = cv_model.transform(test_df_num)
test_predictions.select("features", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4063daab-7c3a-41aa-8773-6361d7ad690b"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "08bc6f00-5400-4d6b-8dd8-654054dbc3ff"}
print("***** Test Set *****")
roc_auc = evaluate_model(test_predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(test_predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = cv_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "853f0b92-55e5-4595-adfe-e382c8bd2d6b"}
# ## Textual Features
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d3a26773-209c-4991-954a-8b421575c32e"}
# ###1. Logistic Regression
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "5ed13b3d-f938-4a33-bab8-48e30c3f6511"}
log_reg = LogisticRegression(featuresCol = "std_features_text", labelCol = "label", maxIter=100)
log_reg_model = log_reg.fit(train_df_text)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "e80de337-75c9-43be-88f8-0dbc756241aa"}
predictions = log_reg_model.transform(test_df_text)
predictions.select("std_features_text", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "e3a06a15-23ae-4f61-9556-84a88aabe2c3"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "81cc0b0c-8f3a-4fc9-a5eb-5444663a170b"}
print("***** Test Set *****")
roc_auc = evaluate_model(predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = log_reg_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "dbe27e31-fd6a-4141-8d9f-fea7dab763c8"}
# ###2. Decision Tree
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "3659e21a-78cd-408f-9983-77b85a6b7113"}
dt = DecisionTreeClassifier(featuresCol="std_features_text", labelCol="label")
dt_model = dt.fit(train_df_text)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "a367fd2b-e7e8-4a98-b5ca-c7d2d79ff737"}
predictions = dt_model.transform(test_df_text)
predictions.select("std_features_text", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "267c9844-79e3-4fba-a2e3-b04c6b150147"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "7f13e2e4-54a0-4921-ab07-1ee04ae49187"}
print("***** Test Set *****")
roc_auc = evaluate_model(predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = dt_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "fcb0ab77-6fc8-422b-baca-a15f083bceb7"}
# ###3. Random Forests
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d8df5a26-a608-415f-a156-bc58b9760f32"}
rf = RandomForestClassifier(featuresCol="std_features_text", labelCol="label")
rf_model = rf.fit(train_df_text)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "bb6dd066-81a2-49a6-ab2e-6d8190a176ae"}
predictions = rf_model.transform(test_df_text)
predictions.select("std_features_text", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "2ec87334-3c5f-4d68-933c-0ed23a10d16f"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d4ae27b4-85c3-495b-a41c-def14818868b"}
print("***** Test Set *****")
roc_auc = evaluate_model(predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = rf_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "fcceecff-1906-446d-88a7-fe0c24bc8dae"}
# ###4. Factorization Machines
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "f6bdd6bd-086f-446b-9fbd-9e9e0a073bb8"}
fm = FMClassifier(labelCol="label", featuresCol="std_features_text")
fm_model = fm.fit(train_df_text)
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "d7ca33f6-df39-4063-a2d1-1e06d8d438f8"}
predictions = fm_model.transform(test_df_text)
predictions.select("std_features_text", "prediction", "label").show(5)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "0c69e817-2956-4319-905f-ad26a03b4ca3"}
# #### Evaluation
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "83d787a3-e5b4-4845-bda4-32a0a840c351"}
print("***** Test Set *****")
roc_auc = evaluate_model(predictions)
print("Area Under ROC Curve (ROC AUC): {:.3f}".format(roc_auc))
print("Area Under Precision-Recall Curve: {:.3f}".format(evaluate_model(predictions, metric="areaUnderPR")))
if roc_auc > best_AUC:
best_model = fm_model
best_AUC = roc_auc
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "1571f7f5-5f51-4527-a98c-4ee914d855d4"}
# ## Test the Results
# Enter a Twitter user's screen name in the widget above to find out whether the account is a bot or a human
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "4697c751-5a39-4b26-a481-edf03864ad89"}
def find_user(usr,model):
text_list = []
consumer_key = "<KEY>"
consumer_secret = "<KEY>"
access_token = "<KEY>"
access_token_secret = "<KEY>"
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)
try:
# count
retweets = 0
with_mention = 0
with_url = 0
text = ""
user = api.get_user(usr)
tweets = api.user_timeline(
screen_name=user.screen_name, count=130, include_rts=True, tweet_mode='extended')
# read 130 of user's tweets
for tweet in tweets:
try:
if tweet.retweeted_status:
retweets += 1
except AttributeError:
# combine all tweets into one big text
to_add = remove_emoji(tweet.full_text).replace("\n", " ")
to_add = clean(to_add)
try:
# keep only english texts
if langdetect.detect(to_add) != 'en':
continue
except langdetect.lang_detect_exception.LangDetectException:
continue
text_list.append(to_add)
text = text + " " + to_add
if tweet.entities['urls']:
with_url += 1
if tweet.entities['user_mentions']:
with_mention += 1
text = " ".join(text.split())
# find retweets,mentions and urls per tweet
if len(tweets) >= 1:
retweets = retweets / len(tweets)
with_mention = with_mention / len(tweets)
with_url = with_url/len(tweets)
else:
retweets = 0
with_url = 0
with_mention = 0
# clean description
description = " ".join((re.sub(r"(?:\@|http?\://|https?\://|www)\S+",
"", remove_emoji(user.description).replace("\n", " "))).split())
# create a new clean and complete csv file with the dataset
bot_df = spark.createDataFrame(
[
("human", user.followers_count, user.friends_count, user.listed_count, user.statuses_count, str(user.geo_enabled), str(user.verified),
str(user.created_at), str(user.has_extended_profile), str(user.default_profile), str(user.default_profile_image), retweets, with_url, with_mention, description, text),
],
['account_type', 'follower_count', 'friends_count', 'listed_count', 'statuses_count', 'geo_enabled', 'verified',
'created_at', 'has_extended_profile', 'default_profile', 'default_profile_image', 'retweets', 'with_url', 'with_mention', 'description', 'tweet_text'] # add your column names here
)
bot_df = bot_df.withColumn(
"created_at", to_days_UDF(col("created_at")))
bot_df = bot_df.selectExpr("account_type", "cast(follower_count as int) follower_count", "cast(friends_count as int) friends_count", "cast(listed_count as int) listed_count", "cast(statuses_count as int) statuses_count",
"cast(retweets as float) retweets", "cast(with_url as float) with_url", "cast(with_mention as float) with_mention", "geo_enabled", "verified", "has_extended_profile", "default_profile", "default_profile_image", "cast(created_at as int) created_at")
test_predictions = model.transform(bot_df)
# test_predictions.select("features", "prediction").show(1)
test_predictions = test_predictions.select("prediction")
bot_pdf = test_predictions.toPandas()
if bot_pdf["prediction"][0] == 0:
return "bot"
else:
return "human"
except tweepy.TweepError:
return "unknown"
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "ea05e9b5-5c0c-4b42-ac1d-50f0b1bc1d97"}
dbutils.widgets.text("name", "")
find_user(dbutils.widgets.get("name"),best_model)
# + [markdown] application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "29686702-e23d-4e53-a72d-cf5061fc9bb2"}
# ## A preview of our Website's frontend
# + application/vnd.databricks.v1+cell={"title": "", "showTitle": false, "inputWidgets": {}, "nuid": "9ef027ca-e8ac-47ad-b217-e8449d735837"}
s = """<!DOCTYPE html>
<html>
<head>
<title>Bot Detector Website</title>
<link rel= "stylesheet" type= "text/css" href= "{{
url_for('static',filename='styles/index.css') }}">
</head>
<body>
<div class="container">
<h1>Twitter Bot Detector</h1>
<p>Check if a user is human or bot.</p>
<form action="" method="post">
<input type="text" placeholder="e.g. @hello_kitty123" name="name">
<div class="bar"></div>
<div class="highlight"></div>
<button class="btn striped-shadow dark" type="submit"
value="submit"><span>Check</span></button>
</form>
</div>
</body>
<style>
body {
font-family: courier, arial, helvetica;
font-size: 150%;
color: #77bfa1;
background-color: #726da8;
}
.container {
text-align: center;
margin-top: 10%;
font-weight: bold;
}
@import "https://fonts.googleapis.com/css?family=Bungee+Shade";
*,
:after,
:before {
box-sizing: border-box;
}
:focus {
outline: none;
}
button {
overflow: visible;
border: 0;
padding: 0;
margin: 1.8rem;
}
.btn.striped-shadow span {
display: block;
position: relative;
z-index: 2;
border: 5px solid;
}
.btn.striped-shadow.dark span {
border-color: #393939;
background: #77bfa1;
color: #393939;
}
.btn {
height: 80px;
line-height: 65px;
display: inline-block;
letter-spacing: 1px;
position: relative;
font-size: 1.35rem;
transition: opacity 0.3s, z-index 0.3s step-end, -webkit-transform 0.3s;
transition: opacity 0.3s, z-index 0.3s step-end, transform 0.3s;
transition: opacity 0.3s, z-index 0.3s step-end, transform 0.3s,
-webkit-transform 0.3s;
z-index: 1;
background-color: transparent;
cursor: pointer;
}
.btn {
width: 155px;
height: 48px;
line-height: 38px;
}
button.btn.striped-shadow.dark:after,
button.btn.striped-shadow.dark:before {
background-image: linear-gradient(
135deg,
transparent 0,
transparent 5px,
#393939 5px,
#393939 10px,
transparent 10px
);
}
button.btn.striped-shadow:hover:before {
max-height: calc(100% - 10px);
}
button.btn.striped-shadow:after {
width: calc(100% - 4px);
height: 8px;
left: -10px;
bottom: -9px;
background-size: 15px 8px;
background-repeat: repeat-x;
}
button.btn.striped-shadow:after,
button.btn.striped-shadow:before {
content: "";
display: block;
position: absolute;
z-index: 1;
transition: max-height 0.3s, width 0.3s, -webkit-transform 0.3s;
transition: transform 0.3s, max-height 0.3s, width 0.3s;
transition: transform 0.3s, max-height 0.3s, width 0.3s,
-webkit-transform 0.3s;
}
.btn.striped-shadow:hover {
-webkit-transform: translate(-12px, 12px);
-ms-transform: translate(-12px, 12px);
transform: translate(-12px, 12px);
z-index: 3;
}
button.btn.striped-shadow:hover:after,
button.btn.striped-shadow:hover:before {
-webkit-transform: translate(12px, -12px);
-ms-transform: translate(12px, -12px);
transform: translate(12px, -12px);
}
button.btn.striped-shadow:before {
width: 8px;
max-height: calc(100% - 5px);
height: 100%;
left: -12px;
bottom: -5px;
background-size: 8px 15px;
background-repeat: repeat-y;
background-position: 0 100%;
}
.input {
margin: 5% 10%;
position: relative;
width: fit-content;
}
input {
padding: 10px 10px 10px 5px;
font-size: 18px;
width: 280px;
border: 1px solid;
border-color: transparent transparent gray;
background-color: transparent;
}
input:focus {
outline: none;
}
/*Label */
label {
position: absolute;
top: 30%;
font-size: 18px;
color: rgb(165, 165, 165);
left: 3%;
z-index: -1;
pointer-events: none;
transition: all 0.3s;
-webkit-transition: all 0.3s;
-moz-transition: all 0.3s;
-ms-transition: all 0.3s;
-o-transition: all 0.3s;
}
/* Activate State */
input:focus + label,
input:valid + label {
font-size: 12px;
color: rgb(148, 98, 255);
top: -1%;
transition: all 0.3s;
-webkit-transition: all 0.3s;
-moz-transition: all 0.3s;
-ms-transition: all 0.3s;
-o-transition: all 0.3s;
}
/*End Label */
/*Bar*/
.bar {
width: 100%;
height: 2px;
position: absolute;
background-color: rgb(148, 98, 255);
top: calc(100% - 2px);
left: 0;
transform: scaleX(0);
-webkit-transform: scaleX(0);
-moz-transform: scaleX(0);
-ms-transform: scaleX(0);
-o-transform: scaleX(0);
}
/*Activate State */
input:focus ~ .bar,
input:valid ~ .bar {
transform: scaleX(1);
-webkit-transform: scaleX(1);
-moz-transform: scaleX(1);
-ms-transform: scaleX(1);
-o-transform: scaleX(1);
transition: transform 0.3s;
-webkit-transition: transform 0.3s;
-moz-transition: transform 0.3s;
-ms-transition: transform 0.3s;
-o-transition: transform 0.3s;
}
/*End Bar */
/*Highlight */
.highlight {
width: 100%;
height: 85%;
position: absolute;
background-color: rgba(148, 98, 255, 0.2);
top: 15%;
left: 0;
visibility: hidden;
z-index: -1;
}
input:focus ~ .highlight {
width: 0;
visibility: visible;
transition: all 0.09s linear;
-webkit-transition: all 0.09s linear;
-moz-transition: all 0.09s linear;
-ms-transition: all 0.09s linear;
-o-transition: all 0.09s linear;
}
/*End highlight */
::placeholder {
color: #77bfa1;
font-size: 20px;
}
</style>
</html>"""
h = HTML(s)
display(h)
| src/Twitter_Bot_Detector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gaussian Mixture Model
#
# Original NB by <NAME>, modified by <NAME>
#
# !date
import numpy as np, pandas as pd, matplotlib.pyplot as plt, seaborn as sns
# %matplotlib inline
sns.set_context('paper')
sns.set_style('darkgrid')
import pymc3 as pm, theano.tensor as tt
# +
# simulate data from a known mixture distribution
np.random.seed(12345) # set random seed for reproducibility
k = 3
ndata = 500
spread = 5
centers = np.array([-spread, 0, spread])
# simulate data from mixture distribution
v = np.random.randint(0, k, ndata)
data = centers[v] + np.random.randn(ndata)
plt.hist(data);
# -
# setup model
model = pm.Model()
with model:
# cluster sizes
p = pm.Dirichlet('p', a=np.array([1., 1., 1.]), shape=k)
# ensure all clusters have some points
p_min_potential = pm.Potential('p_min_potential', tt.switch(tt.min(p) < .1, -np.inf, 0))
# cluster centers
means = pm.Normal('means', mu=[0, 0, 0], sd=15, shape=k)
# break symmetry
order_means_potential = pm.Potential('order_means_potential',
tt.switch(means[1]-means[0] < 0, -np.inf, 0)
+ tt.switch(means[2]-means[1] < 0, -np.inf, 0))
# measurement error
sd = pm.Uniform('sd', lower=0, upper=20)
# latent cluster of each observation
category = pm.Categorical('category',
p=p,
shape=ndata)
# likelihood for each observed value
points = pm.Normal('obs',
mu=means[category],
sd=sd,
observed=data)
# fit model
with model:
step1 = pm.Metropolis(vars=[p, sd, means])
step2 = pm.ElemwiseCategorical(vars=[category], values=[0, 1, 2])
tr = pm.sample(10000, step=[step1, step2])
# ## Full trace
pm.plots.traceplot(tr, ['p', 'sd', 'means']);
# ## After convergence
# take a look at traceplot for some model parameters
# (with some burn-in and thinning)
pm.plots.traceplot(tr[5000::5], ['p', 'sd', 'means']);
# I prefer autocorrelation plots for serious confirmation of MCMC convergence
pm.autocorrplot(tr[5000::5], varnames=['sd'])
# ## Sampling of cluster for individual data point
i=0
plt.plot(tr['category'][5000::5, i], drawstyle='steps-mid')
plt.axis(ymin=-.1, ymax=2.1)
def cluster_posterior(i=0):
print('true cluster:', v[i])
print(' data value:', np.round(data[i],2))
plt.hist(tr['category'][5000::5, i], bins=[-.5, .5, 1.5, 2.5], rwidth=.9)
plt.axis(xmin=-.5, xmax=2.5)
plt.xticks([0,1,2])
cluster_posterior(i)
| docs/source/notebooks/gaussian_mixture_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + active=""
# MIT License
#
# Copyright (c) 2019 <NAME>, https://orcid.org/0000-0001-9626-8615 (ORCID)
#
# Project Netherlands Offshore F3 Block - Complete
# https://terranubis.com/datainfo/Netherlands-Offshore-F3-Block-Complete
#
# Analyse a subset in VTK format:
# X Range: 615552 to 629576 (delta: 14023.9)
# Y Range: 6.07384e+06 to 6.08422e+06 (delta: 10380)
# Z Range: -1844 to -800 (delta: 1044)
# -
from matplotlib import cm, colors
import matplotlib.pyplot as plt
# %matplotlib inline
# +
from scipy.ndimage import gaussian_filter
from scipy.stats import linregress
# band filter
def raster_filter_range(raster0, g1, g2):
raster = raster0.copy()
raster.values = raster.values.astype(np.float32)
raster.values = gaussian_filter(raster.values,g1) - gaussian_filter(raster.values,g2)
return raster
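# A quick sanity check of the difference-of-Gaussians idea behind `raster_filter_range`
# (a sketch on a plain NumPy array rather than the xarray raster): subtracting two
# Gaussian blurs removes the constant (DC) component, keeping only a band of spatial frequencies.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A constant raster carries only a DC component, so a band filter should null it out.
flat = np.full((64, 64), 7.0)
band = gaussian_filter(flat, 2) - gaussian_filter(flat, 3)
assert np.allclose(band, 0)  # the difference of Gaussians suppresses the mean level
```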
# +
import xarray as xr
import numpy as np
from vtk import vtkStructuredGridReader
from vtk.util import numpy_support as VN
def vtk2da(filename):
reader = vtkStructuredGridReader()
reader.SetFileName(filename)
reader.ReadAllScalarsOn()
reader.Update()
data = reader.GetOutput()
dim = data.GetDimensions()
bnd = data.GetBounds()
#print (dim, bnd)
values = VN.vtk_to_numpy(data.GetPointData().GetArray('trace'))
values = values.reshape(dim,order='F')
da = xr.DataArray(values.transpose([2,1,0]),
coords=[np.linspace(bnd[4],bnd[5],dim[2]),
np.linspace(bnd[2],bnd[3],dim[1]),
np.linspace(bnd[0],bnd[1],dim[0])],
dims=['z','y','x'])
return da
# -
# ### Load dataset
da = vtk2da('Seismic_data_subset.vtk')
# TODO: fix Z axis values order
da.z.values = da.z.values[::-1]
da
np.diff(da.x)[0],np.diff(da.y)[0],np.diff(da.z)[0]
# +
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(12,5))
da.sel(z=da.z[0]).plot(ax=ax1)
da.sel(z=da.z[-1]).plot(ax=ax2)
fig.tight_layout(rect=[0.03, 0.0, 1, 0.9])
plt.suptitle('3D Seismic Cube Slices',fontsize=20)
plt.show()
# -
# ### Calculate spatial spectrum components
# +
gammas = np.arange(1,51)
#gammas = np.array([1,10,20,30,40])
powers = []
for g in gammas:
power1 = raster_filter_range(da.sel(z=da.z[0]), g-.5, g+.5).std()
power2 = raster_filter_range(da.sel(z=da.z[-1]), g-.5, g+.5).std()
powers.append(power1/power2)
da_power0 = xr.DataArray(np.array(powers),
coords=[25*gammas],
dims=['r'])
# +
gammas = np.arange(1,51)
#gammas = np.array([1,10,20,30,40])
zs = da.z.values
#zs = da.z.values[::10]
powers = []
for z in zs:
print(z, ". ", end='')
for g in gammas:
power = raster_filter_range(da.sel(z=z), g-.5, g+.5).std()
powers.append(power)
da_power = xr.DataArray(np.array(powers).reshape([len(zs),len(gammas)]),
coords=[zs,25*gammas],
dims=['z','r'])
# -
# ### Plot spatial spectrum components
# +
fig, ((ax1,ax2),(ax3,ax4),(ax5,ax6)) = plt.subplots(3,2,figsize=(12.5,15))
da_power0[1:].plot(ax=ax1)
ax1.set_title('Ratio top to bottom layers',fontsize=16)
ax1.set_xlabel('Wavelength, m',fontsize=14)
ax1.set_ylabel('Ratio',fontsize=14)
ax1.axhline(y=1, xmin=0, xmax=1, color = 'black', ls='--', alpha=1)
da_power.plot(ax=ax2,vmin=0,vmax=100)
ax2.set_title('Power (per depth)',fontsize=16)
ax2.set_xlabel('Wavelength, m',fontsize=14)
ax2.set_ylabel('Z, m',fontsize=14)
data1 = raster_filter_range(da.sel(z=da.z[0]),40,41)
data1.plot(ax=ax3,cmap='bwr',vmin=-100,vmax=100)
ax3.set_title('Z=-800m, wavelength 1000m',fontsize=16)
data2 = raster_filter_range(da.sel(z=da.z[-1]),40,41)
data2.plot(ax=ax4,cmap='bwr',vmin=-100,vmax=100)
ax4.set_title('Z=-1844m, wavelength 1000m',fontsize=16)
data1 = raster_filter_range(da.sel(z=da.z[0]),2,3)
data1.plot(ax=ax5,cmap='bwr',vmin=-4000,vmax=4000)
ax5.set_title('Z=-800m, wavelength 100m',fontsize=16)
data2 = raster_filter_range(da.sel(z=da.z[-1]),2,3)
data2.plot(ax=ax6,cmap='bwr',vmin=-4000,vmax=4000)
ax6.set_title('Z=-1844m, wavelength 100m',fontsize=16)
plt.suptitle('Spectral Components Analysis for 3D Seismic Data',fontsize=28)
fig.tight_layout(rect=[0.03, 0.0, 1, 0.95])
plt.savefig('Spectral Components Analysis for 3D Seismic Data.jpg', dpi=150)
plt.show()
| 3D Seismic Spectral Components Analysis/3D Seismic Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Using MiniBatch KMeans to handle more data
# +
import numpy as np
from sklearn.datasets import make_blobs
blobs, labels = make_blobs(int(1e6), 3)
from sklearn.cluster import KMeans, MiniBatchKMeans
kmeans = KMeans(n_clusters=3)
minibatch = MiniBatchKMeans(n_clusters=3)
# -
# %time kmeans.fit(blobs) #IPython Magic
# %time minibatch.fit(blobs)
kmeans.cluster_centers_
minibatch.cluster_centers_
from sklearn.metrics import pairwise
pairwise.pairwise_distances(kmeans.cluster_centers_[0].reshape(1, -1), minibatch.cluster_centers_[0].reshape(1, -1))
np.diag(pairwise.pairwise_distances(kmeans.cluster_centers_, minibatch.cluster_centers_))
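# Note: taking the diagonal above assumes both estimators happened to order their clusters
# identically, which KMeans does not guarantee. A hedged sketch (the helper
# `matched_center_distances` is ours, not scikit-learn's) that pairs nearest centers
# first with SciPy's Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def matched_center_distances(centers_a, centers_b):
    # Optimally pair each center in A with one in B, then report the paired distances.
    cost = cdist(centers_a, centers_b)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols]

# Identical centers listed in a different order still match up at distance ~0.
a = np.array([[0.0, 0.0], [10.0, 10.0], [-5.0, 5.0]])
b = a[::-1]
assert np.allclose(matched_center_distances(a, b), 0)
```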
minibatch = MiniBatchKMeans(batch_size=len(blobs))
# %time minibatch.fit(blobs)
| Chapter06/Using MiniBatch KMeans to handle more data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Modelling Stickiness based on outputs of Diffusion Limited Aggregation
# author: <NAME>
# ## Approach
# 1. We shall first visualise the outputs from DLA
# 2. Note down our observations around stickiness variation based on the visuals
# 3. Build a mathematical model to model variation of stickiness with any parameters that can be derived from DLA outputs
# 4. Conclude with the best possible model
# ## Assumptions
# 1. In the interest of time, I'll only try 1 parameter / metric that intuitively seems best at the moment.
# 2. Since DLA takes really long to run, I've only run it on an image of size 251x251 with total number of particles ranging from 4500 to 19500 (interval: 2500) and stickiness varying from 0.05 to 1.0 (interval: 0.1). It took ~48 hours to generate this data.
# 3. Images are stored as numpy arrays. It is assumed that numpy arrays can be stored as grayscale images (with .png or .jpeg formats) and can then be loaded using PIL and converted to the numpy array, if required.
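# Assumption 3 can be sketched with a `.npy` round trip (the temp dir and file name are
# illustrative); this mirrors the `np.load` call used when the dataframe is built below.

```python
import os
import tempfile
import numpy as np

# Stand-in for one DLA output: a sparse binary 251x251 occupancy grid.
dla = np.zeros((251, 251), dtype=np.uint8)
dla[125, 120:131] = 1

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "dla_run.npy")  # illustrative file name
    np.save(path, dla)      # write the array to disk
    loaded = np.load(path)  # the same call the notebook uses on the stored filepaths
assert np.array_equal(dla, loaded)  # lossless round trip
```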
'''Import required packages'''
import os
import numpy as np
import pandas as pd
# %matplotlib inline
from matplotlib import pyplot as plt
'''Load and prepare dataframe'''
curr_dir = os.path.dirname(os.path.abspath(""))
data_filepath = os.path.join(curr_dir, "output_data.csv")
data = pd.read_csv(data_filepath).drop(columns=["Unnamed: 0"])
data["images"] = [np.load(filepath) for filepath in data["filepath"]]
data = data.drop(columns=["filepath"])
'''Visualise all outputs from DLA'''
# Change inline to qt to visualise the images externally, in a larger resolution.
# %matplotlib inline
fig, axes = plt.subplots(7, 3)
for idx1, row in enumerate(axes):
for idx2, axis in enumerate(row):
axis.imshow(data["images"].iloc[(11*idx1)+(5*idx2)], cmap="Greys")
plt.show()
# ## Observations
#
# As the stickiness of the particles reduces:
# - Patterns seem to have fewer branches
# - Each branch becomes more dense
# - The total area that the pattern covers inside the image seems to reduce
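# The third observation can be quantified directly; a hedged sketch (the helper
# `fill_fraction` is ours, not part of the DLA code) measures the share of occupied pixels:

```python
import numpy as np

def fill_fraction(image):
    # Fraction of non-zero (particle) pixels in a DLA output array.
    return np.count_nonzero(image) / image.size

assert fill_fraction(np.array([[0, 1], [1, 1]])) == 0.75
```

# Applied per row of `data["images"]`, this would give a coverage metric to plot against stickiness.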
# ## Potential metrics to estimate stickiness
#
# To quantify the change in density we can try and analyse the following parameters,
# - Average number of neighbors-per-particle at a distance k (NN@k)
#
# Below is the implementation of the same.
# ## Visualising variation of Stickiness vs NN@k for observed data
'''NN@k - Number of neighbors at distance k'''
def computeNNK(image, k):
    # Average occupancy of the (2k+1)x(2k+1) window centred on each occupied pixel.
    # Note: the window includes the particle itself, and windows are truncated
    # at the image borders.
    nz_idxs = zip(*np.nonzero(image))
    min_row = min_col = 0
    max_row, max_col = image.shape[0] - 1, image.shape[1] - 1
    nnk = []
    for prow, pcol in nz_idxs:
        rmin, rmax = max(min_row, prow - k), min(max_row, prow + k)
        cmin, cmax = max(min_col, pcol - k), min(max_col, pcol + k)
        neighbors = image[rmin:rmax + 1, cmin:cmax + 1]
        nnk.append(np.sum(neighbors) / neighbors.size)
    return sum(nnk) / len(nnk)
# Compute NN@k for all images and store in the dataframe
data["nnk"] = [computeNNK(img, 1) for img in data.images]
# +
'''Visualise variation of stickiness with NNK for systems with different number of total particles'''
# %matplotlib inline
# Group data based on total number of particles in the system
groups = data.groupby(by=["num_particles"])
# Iterate over each group and plot the variation between
# stickiness and NN@k (k=1) for each group
for group_key in groups.groups:
group = groups.get_group(group_key)
plt.plot(group.nnk, group.stickiness, label=group_key)
plt.legend(title="Number of Particles")
plt.grid()
plt.xlabel("NN@k")
plt.ylabel("Stickiness")
plt.title("NNK (k=1) vs Stickiness")
# -
# ## Modelling stickiness based on NN@k and N (number of total particles)
#
# The plots above support our intuition that density increases as stickiness decreases. Two factors can be used to estimate stickiness:
# - NN@k
# - Number of total particles in the system
#
# This inverse relationship can probably be modelled with a low-order polynomial regression. An example formulation is given below, where S is stickiness, N is the number of particles, x is the input image, and NNK(x) is the average number of particles within a distance of k units of each particle.
# - $S = (A \times NNK(x)^m) + (B \times N^n) + C$
#
# In this case, we need to determine the parameters A, B, C, m, and n - to most accurately predict the stickiness value, given an input image.
#
# For simplicity, we cap the order of the NNK term at $m=3$, since the plot suggests an inverse squared-to-cubic relationship with $NNK(x)$. $N$ appears to have a larger effect at higher values of S, and its effect shrinks at lower S values; we model this by testing $n=1$ and $n=2$. So the estimation models that we'll try are:
# 1. $S = (A \times NNK(x)^2) + (B \times N^1) + C$
# 2. $S = (A \times NNK(x)^2) + (B \times N^2) + C$
# 3. $S = (A \times NNK(x)^2) + (B \times NNK(x)) + (C \times N) + D$
# 4. $S = (A \times NNK(x)^3) + (B \times NNK(x)^2) + (C \times NNK(x)) + (D \times N) + E$
'''Defining models'''
model1 = lambda image_params, A, B, C : (A*(image_params[0]**2)) + (B*image_params[1]) + C
model2 = lambda image_params, A, B, C : (A*(image_params[0]**2)) + (B*(image_params[1]**2)) + C
model3 = lambda image_params, A, B, C, D : (A*(image_params[0]**2)) + (B*image_params[0]) + (C*image_params[1]) + D
model4 = lambda image_params, A, B, C, D, E : (A*(image_params[0]**3)) + (B*image_params[0]**2) + (C*image_params[0])+ (D*image_params[1]) + E
from scipy.optimize import curve_fit
image_params = data[["nnk","num_particles"]].to_numpy().T
output_data = data["stickiness"].to_numpy()
popt1, pcov1 = curve_fit(model1, image_params, output_data)
popt2, pcov2 = curve_fit(model2, image_params, output_data)
popt3, pcov3 = curve_fit(model3, image_params, output_data)
popt4, pcov4 = curve_fit(model4, image_params, output_data)
# +
# %matplotlib inline
groups = data.groupby(by=["num_particles"])
for group_key in [4500]:
# Plot original
group = groups.get_group(group_key)
p = plt.plot(group.nnk, group.stickiness, label=group_key)
# Plot predictions from Model 1
image_params = group[["nnk", "num_particles"]].to_numpy()
predicted_stickiness1 = [model1(image_param, *popt1) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness1, label=f"{group_key} pred model 1")
# Plot predictions from Model 2
predicted_stickiness2 = [model2(image_param, *popt2) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness2, label=f"{group_key} pred model 2")
# Plot predictions from Model 3
predicted_stickiness3 = [model3(image_param, *popt3) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness3, label=f"{group_key} pred model 3")
# Plot predictions from Model 4
predicted_stickiness4 = [model4(image_param, *popt4) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness4, label=f"{group_key} pred model 4")
plt.legend(title="Number of Particles")
plt.grid()
plt.xlabel("NN@K")
plt.ylabel("Stickiness")
plt.title("NNK (k=1) vs Stickiness")
# -
# Models 3 and 4 clearly fit best in this case. Let's plot the outputs of both models for all values of N.
# ## Visualising predictions from Model 3
# +
'''Visualising outputs of Model 3 with original'''
# %matplotlib inline
for group_key in groups.groups:
# Plot original
group = groups.get_group(group_key)
p = plt.plot(group.nnk, group.stickiness, label=group_key)
    # Plot predictions from Model 3 (recompute the inputs for this group,
    # otherwise the leftover image_params from the previous cell is used)
    image_params = group[["nnk", "num_particles"]].to_numpy()
    predicted_stickiness3 = [model3(image_param, *popt3) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness3, label=f"{group_key} pred", ls="--", color=p[0].get_color())
#plt.legend(title="Number of Particles") # Uncomment when visualising plot using QT based renderer instead of inline
plt.grid()
plt.xlabel("NN@k")
plt.ylabel("Stickiness")
plt.title("NNK (k=1) vs Predicted Stickiness: Model 3")
# -
# ## Visualising predictions of Model 4
# +
'''Visualising outputs of Model 4 with original'''
# %matplotlib inline
for group_key in groups.groups:
# Plot original
group = groups.get_group(group_key)
p = plt.plot(group.nnk, group.stickiness, label=group_key)
    # Plot predictions from Model 4 (recompute the inputs for this group,
    # otherwise the leftover image_params from the previous cell is used)
    image_params = group[["nnk", "num_particles"]].to_numpy()
    predicted_stickiness4 = [model4(image_param, *popt4) for image_param in image_params]
plt.plot(group.nnk, predicted_stickiness4, label=f"{group_key} pred", ls="--", color=p[0].get_color())
#plt.legend(title="Number of Particles") # Uncomment when visualising plot using QT based renderer instead of inline
plt.grid()
plt.xlabel("NN@k")
plt.ylabel("Stickiness")
plt.title("NNK (k=1) vs Predicted Stickiness: Model 4")
# -
# ## Conclusion
#
# Model 4 does much better when the stickiness is high, while both Models 3 and 4 predict negative values when the stickiness is low. Overall, Model 4 estimates stickiness from NN@k (k=1) most accurately, so the current best model is Model 4, i.e.
# - $S = (A \times NNK(x)^3) + (B \times NNK(x)^2) + (C \times NNK(x)) + (D \times N) + E$
#
# where,
#
# $A = 38.56$\
# $B = -45.55$\
# $C = 11.47$\
# $D = 2.16 \times 10^{-5}$\
# $E = 0.98$
# ## What else could be done?
#
# 1. The model could be fit on a subset of DLA outputs and its accuracy estimated on unseen DLA simulations. This would tell us whether the model generalises well.
# 2. All models above predict negative values when the stickiness is low. This could be addressed by clipping the model output to a minimum such as 0.001, by adding more complexity to the model, and/or by constraining the model's outputs.
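The two follow-ups above can be sketched together. The snippet below is a standalone illustration on synthetic data (the arrays stand in for the notebook's dataframe; the true coefficients are taken from the conclusion): fit Model 4 on a held-out split, clip predictions to a small positive minimum, and score on unseen points.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model 4 from above: S = A*NNK^3 + B*NNK^2 + C*NNK + D*N + E
def model4(params, A, B, C, D, E):
    nnk, n = params
    return A * nnk**3 + B * nnk**2 + C * nnk + D * n + E

# Synthetic stand-in for the real DLA outputs
rng = np.random.default_rng(0)
nnk = rng.uniform(0.3, 0.9, 200)
n_particles = rng.choice([4500.0, 9500.0, 14500.0, 19500.0], 200)
stickiness = model4((nnk, n_particles), 38.56, -45.55, 11.47, 2.16e-5, 0.98)
stickiness = stickiness + rng.normal(0.0, 0.01, 200)

# 1. Hold out 25% of the simulations as an unseen test set
idx = rng.permutation(200)
train, test = idx[:150], idx[150:]
popt, _ = curve_fit(model4, (nnk[train], n_particles[train]), stickiness[train])

# 2. Clip predictions to a small positive minimum, then score on the held-out set
pred = np.clip(model4((nnk[test], n_particles[test]), *popt), 0.001, None)
mse = np.mean((pred - stickiness[test]) ** 2)
print(mse)
```

Since Model 4 is linear in its coefficients, `curve_fit` converges reliably here; on the real data the held-out MSE would be the generalisation estimate point 1 asks for.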
| notebooks/Stickiness Estimation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="ZRA_SLfa0KTa"
# # Big Signal Analysis of Reaction Networks
# -
# This notebook explores how to analyze the qualitative characteristics of a reaction network, such as:
# * Number of fixed points
# * Types of fixed points
# * Bifurcations
#
# The core challenge is to find the number of fixed points and explore the conditions under which their characteristics change.
# Finding the number of fixed points requires solving a quadratic system.
# Characterizing the fixed points requires calculating characteristic equations.
# The technical approach here relies primarily on symbolic algebra.
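As a minimal illustration of this symbolic workflow (a toy one-species system, not the notebook's model or its ODEModel class): find the fixed points with sympy, then evaluate the Jacobian at each to classify stability.

```python
import sympy

S, k1, k2 = sympy.symbols("S k1 k2", positive=True)
# Toy network: production at rate k1, pairwise loss at rate k2*S^2,
# so dS/dt = k1 - k2*S**2 (illustrative values only)
dS = k1 - k2 * S**2
fixed_points = [fp for fp in sympy.solve(dS, S) if fp.is_positive]
jac = sympy.diff(dS, S)                              # 1x1 Jacobian d(dS/dt)/dS
eigen = [sympy.simplify(jac.subs(S, fp)) for fp in fixed_points]
print(fixed_points, eigen)
```

The single positive fixed point is $S^* = \sqrt{k_1/k_2}$ with eigenvalue $-2\sqrt{k_1 k_2} < 0$, i.e. stable; the notebook applies the same idea to multi-species systems where the Jacobian is a matrix.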
# + [markdown] id="6DI1yNOd0PI5"
# # Preliminaries
# -
# ## Imports
# + executionInfo={"elapsed": 39768, "status": "ok", "timestamp": 1620740149477, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Ggr-yAwbfqFCOlFTHoKepUYJ9VjZuCGILW-YdHvUQ=s64", "userId": "07301174361489660166"}, "user_tz": 240} id="bYlTQg0n0S8d"
import tellurium as te
import sympy
import matplotlib.pyplot as plt
import numpy as np
from common_python.sympy import sympyUtil as su
from common_python.ODEModel.ODEModel import ODEModel
import matplotlib.pyplot as plt
# -
# ## Constants
su.addSymbols("S0 S1 S2 S3 S4 S5 S6 S7 S8 S9 k0 k1 k2 k3 k4 k5 k6 k7 k8 k9 k10 k11 k12 k13 k14 k15 k16 k17 k18 k19 k20")
su.addSymbols("J0 J1 J2 J3 J4 J5 J6 J7 J8 J9 J10 J11 J12 J13 J14 J15 J16 J17 J18 J19 J20")
FLUXES = [J0, J1, J2, J3, J4, J5, J6, J7, J8, J9, J10, J11, J12, J13, J14, J15]
SPECIES = [S0, S1, S2, S3, S4, S5, S6, S7, S8]
MODEL = """
J0: S0 -> S2; k0*S0
J1: S3 + S3 -> S0; k5*S3*S3
J2: S2 -> S3; k6*S2
J3: S3 ->; k9*S2*S3
J4: -> S3; k10*S0
k0 = 6+0.9011095014634776
k5 = 1.4823891153952284
k6 = -10+15.149868787476994
k9 = 91.19197034598812
k10 = 200
S0 = 1.0
S1 = 5.0
S2 = 9.0
S3 = 3.0
S4 = 10.0
"""
RR = te.loada(MODEL)
# + [markdown] id="UzypYn5RUEgj"
# # Helper Functions
# +
# Create dictionary relating reaction names to species
def mkStoichiometryExpressionDct(rr):
stoichiometryArr = rr.getFullStoichiometryMatrix()
reactionNames = [n[1:] if n[0] == "_" else n for n in stoichiometryArr.colnames]
stoichiometryArr.colnames = reactionNames
speciesNames = stoichiometryArr.rownames
dct = {}
for idx, species in enumerate(speciesNames):
sym = eval(species)
entry = ["%d*%s" % (stoichiometryArr[idx, n], reactionNames[n]) for n in range(len(reactionNames))]
expressionStr = " + ".join(entry)
dct[sym] = eval(expressionStr)
dct[sym] = sympy.simplify(dct[sym])
return dct
# Tests
stoichiometryDct = mkStoichiometryExpressionDct(RR)
stoichiometryDct
# +
# Do plot
def plotEigenInfo(rr, title="", k10Range=None, fixedPointIdx=1, **kwargs):
"""
Plots information about the dominant eigenvalue and fixed point for the
indicated fixed point.
Parameters
----------
rr: ExtendedRoadrunner
title: str
k10Range: range of k10
fixedPointIdx: int
index of the fixed point to study
kwargs: dict
fractional adjustment in value
"""
if k10Range is None:
k10Range = [150 + v for v in range(60)]
# Construct the data
subDct = {k0: rr.k0, k5: rr.k5, k6: rr.k6, k9: rr.k9, k10: rr.k10}
for key, value in kwargs.items():
if isinstance(key, str):
newKey = [s for s in subDct.keys() if s.name == key][0]
else:
newKey = key
subDct[newKey] = subDct[newKey] * value
xvs = []
reals = []
imags = []
fixedPointDcts = []
for c10 in k10Range:
subDct[k10] = c10
dominantReal = None
dominantImag = None
model = ODEModel(REDUCED_STATE_DCT, subs=subDct, isEigenvecs=False)
# Find the dominant eigenvalue for the fixed points
if len(model.fixedPoints) > fixedPointIdx:
fixedPointDcts.append(dict(model.fixedPoints[fixedPointIdx].valueDct))
# Find the dominant eigenvalue
for entry in model.fixedPoints[fixedPointIdx].eigenEntries:
value = entry.value
if isinstance(value, complex):
real, imag = su.asRealImag(value)
else:
real = value
imag = 0
if (dominantReal is None) or (real > dominantReal):
dominantReal = real
dominantImag = np.abs(imag)
xvs.append(c10)
reals.append(dominantReal)
imags.append(dominantImag)
# Plot the dominant eigenvalue
_, ax = plt.subplots(1)
ax.plot(xvs, reals, color="blue")
ax.plot(xvs, imags, color="brown")
ax.plot([xvs[0], xvs[-1]], [0, 0], linestyle="--", color="black")
ax.legend(["real", "imag"])
ax.set_title(title)
ax.set_xlabel("k10")
# Plot the indexed fixed point
states = list(fixedPointDcts[0].keys())
_, ax = plt.subplots(1)
COLORS = ["red", "green", "brown"]
for idx, state in enumerate(states):
yvs = [f[state] for f in fixedPointDcts]
ax.plot(xvs, yvs, color=COLORS[idx])
ax.legend(states)
ax.set_title("Fixed Points")
ax.set_xlabel("k10")
return fixedPointDcts
# Test
dcts = plotEigenInfo(RR, k10Range=[100 + 5*v for v in range(5)], k9=1, title="Dominant eigenvalue for 2nd fixed point.")
# +
def runSim(model=MODEL, endTime=100, startTime=0, **kwargs):
def findIdx(arr, time):
"""Finds the index of the time in the simulation results array."""
bestIdx = 0
diff = np.abs(arr[0, 0] - time)
for idx, value in enumerate(arr[:, 0]):
if np.abs(value - time) < diff:
diff = np.abs(value - time)
bestIdx = idx
return bestIdx
rr = te.loada(MODEL)
# Adjust the parameters
for key, value in kwargs.items():
if isinstance(key, sympy.core.symbol.Symbol):
            newKey = key.name
else:
newKey = key
rr[newKey] = rr[newKey] * value
#rr.plot(rr.simulate(startTime, endTime, 10*endTime))
arr = rr.simulate(0, endTime, 10*endTime)
_, ax = plt.subplots(1)
startIdx = findIdx(arr, startTime)
endIdx = findIdx(arr, endTime)
for idx in range(len(arr.colnames[1:])):
ax.plot(arr[startIdx:endIdx,0], arr[startIdx:endIdx, idx+1])
ax.legend(arr.colnames[1:])
# Tests
runSim(k9=0.5, startTime=900, endTime=910)
# -
# # Damped Model 2
# +
MODEL2 = """
var S0
var S1
var S2
var S3
var S4
var S5
var S6
var S7
ext S8
J0: S4 -> S7+S5; k0*S4
J1: S2 -> S4+S4; k1*S2
J2: S4 -> S3+S3; k2*S4
J3: S4 -> S2+S3; k3*S4
J4: S0 -> S5; k4*S0
J5: S5 + S4 -> S5; k5*S5*S4
J6: S5 -> S3; k6*S5
J7: S8 + S3 -> S0; k7*S8*S3
J8: S3 -> S6+S5; k8*S3
J9: S6 + S5 -> S4; k9*S6*S5
J10: S7 + S5 -> S0 + S2; k10*S7*S5
J11: S3 -> S5+S6; k11*S3
J12: S6 + S1 -> S5; k12*S6*S1
J13: S5 -> S5; k13*S5
J14: S1 + S7 -> S1 + S1; k14*S1*S7
k0 = 2.5920480618068815
k1 = 422.2728070204247
k2 = 28.978192374985912
k3 = 29.723263589242986
k4 = 21.04114996098882
k5 = 1.5111236529181926
k6 = 14.363185343334044
k7 = 0.8231126169112812
k8 = 54.27226867691914
k9 = 58.17954213283633
k10 = 10.682986014127339
k11 = 194.08273474192015
k12 = 15.989508525207631
k13 = 13.186614071108659
k14 = 35.67582901156382
S0 = 1.0
S1 = 5.0
S2 = 9.0
S3 = 3.0
S4 = 10.0
S5 = 3.0
S6 = 7.0
S7 = 1.0
S8 = 6.0
"""
rr = te.loada(MODEL2)
rr.plot(rr.simulate())
# -
mat = sympy.Matrix(rr.getFullStoichiometryMatrix())
mat
SPECIES_FLUX_DCT = mkStoichiometryExpressionDct(rr)
SPECIES_FLUX_DCT
nullspace = mat.nullspace()
# Kinetics dictionary
kineticDct = {
J0: k0*S4,
J1: k1*S2,
J2: k2*S4,
J3: k3*S4,
J4: k4*S0,
J5: k5*S5*S4,
J6: k6*S5,
J7: k7*S8*S3,
J8: k8*S3,
J9: k9*S6*S5,
J10: k10*S7*S5,
J11: k11*S3,
J12: k12*S6*S1,
J13: k13*S5,
J14: k14*S1*S7,
}
STATE_DCT = {s: SPECIES_FLUX_DCT[s].subs(kineticDct) for s in SPECIES_FLUX_DCT.keys() }
STATE_DCT
MODEL = ODEModel(STATE_DCT, isFixedPoints=False)
# Need to find a linear combination of values in the null space
# such that the kinetic equations hold.
# Have N reactions, M species. So, N - M constants to find.
su.addSymbols("c c_0 c_1 c_2 c_3 c_4 c_5 c_6")
c = sympy.Matrix([c_0, c_1, c_2, c_3, c_4, c_5, c_6])
mat = sympy.Matrix(nullspace).reshape(15, 7)
mat * c
105/7
# Solve for log(S*)
exprs = [ j - b for j, b in zip(kineticDct.values(), mat*c) ]
exprs = [e.subs({S2: 0, S3: 0}) for e in exprs]
sympy.solve(exprs, [ S5])
# **Approach**
# 1. $N$ = stoichiometry matrix
# 1. $M$ = nullspace of $N$
# 1. Substitute 0 for any state variable that must be zero for all vectors $M \star c$.
# 1. Solve for log of $x_n$ (state variable) in terms of log of $J_n$ (fluxes)
# 1. We know that the fluxes for the fixed points must be in $M \star c$, where $c$ is a vector.
# 1. Substitute $J_n$ value from previous (5) into (4) to give an expression for $x_n$ in terms of $c_n$.
#
# Issue: How do I find the $c_i$?
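Steps 2 and 5 of the approach can be illustrated standalone: any steady-state flux vector lies in the nullspace of the stoichiometry matrix, so it has the form $M \star c$. The matrix below is a toy example, not the model above.

```python
import sympy

# Toy stoichiometry matrix N (2 species x 3 reactions): rows are species,
# columns are reactions in a simple chain -> S0 -> S1 ->
N = sympy.Matrix([[1, -1, 0],
                  [0, 1, -1]])
basis = N.nullspace()          # steady-state flux directions (step 2)
c = sympy.symbols("c0")
flux = basis[0] * c            # any steady-state flux is M*c (step 5)
assert N * flux == sympy.zeros(2, 1)
print(basis[0].T)
```

Here the nullspace is one-dimensional, so at steady state all three fluxes are equal and a single constant $c_0$ parameterises them; in the model above the nullspace has dimension 7, which is exactly the "how do I find the $c_i$" issue.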
# **Approach 2**
# 1. Solve for $x_n$ in terms of $J$
exprs[1]
list(kineticDct.values())[1]
sympy.shape(mat)
exprs
sympy.solve(exprs, [S0, S1, S2, S3, S4, S5, S6, S7, S8])
# +
# Looks like I can manually solve for most species
SPECIES_FLUX_DCT = {
S0: J4/k4,
S1: (J12/k12) /((J9 / k9) / (J6 / k6)),
S2: J1 / k1,
S3: J8 / k8,
S4: J0 / k0,
S5: J6 / k6,
S6: (J9 / k9) / (J6 / k6),
S7: (J10 / k10) / (J6 / k6),
S8: (J7 / k7) / (J8 / k8),
}
# -
dstateDct = {s: SPECIES_FLUX_DCT[s].subs(kineticDct) for s in SPECIES_FLUX_DCT.keys()}
dstateDct
solnDct = sympy.solve(list(SPECIES_FLUX_DCT.values()), list(kineticDct.keys()))
solnDct
exprs = [solnDct[j].subs(kineticDct) - kineticDct[j] for j in solnDct.keys()]
exprs
# +
#sympy.solve(exprs, list(dstateDct.keys()))
# -
# # Reduced Model
su.addSymbols("S0 S1 S2 S3 S4 S5 S6")
su.addSymbols("k0 k1 k2 k3 k4 k5 k6 k7 k8 k9 k10 k11 k12 k13 k14 k15 k16 k17 k18 k19 k20")
su.addSymbols("J0 J1 J2 J3 J4 J5 J6 J7 J8 J9 J10 J11 J12 J13 J14 J15 J16 J17 J18 J19 J20")
REDUCED_FLUXES = [J0, J1, J2, J3, J4]
REDUCED_SPECIES = [S0, S1, S2, S3, S4]
MODEL = """
J0: S0 -> S2; k0*S0
J1: S3 + S3 -> S0; k5*S3*S3
J2: S2 -> S3; k6*S2
J3: S3 ->; k9*S2*S3
J4: -> S3; k10*S0
k0 = 6+0.9011095014634776
k5 = 1.4823891153952284
k6 = -10+15.149868787476994
k9 = 91.19197034598812
k10 = 200
S0 = 1.0
S1 = 5.0
S2 = 9.0
S3 = 3.0
S4 = 10.0
"""
# +
MODEL = """
J0: S0 -> S2; k0*S0
J1: S3 + S3 -> S0; k5*S3*S3
J2: S2 -> S3; k6*S2
J3: S3 ->; k9*S2*S3
J4: -> S3; k10*S0
k0 = (6+0.9011095014634776)
k5 = 1.4823891153952284
k6 = (-10+15.149868787476994)
k9 = 91.19197034598812 # At k9 * 0.5, use k10 = k155
k10 = 200 # between 164 and 165 there is a transition from damped to stable oscillations
S0 = 1.0
S2 = 9.0
S3 = 3.0
"""
rr = te.loada(MODEL)
rr.plot(rr.simulate(0, 100, 100))
# -
runSim(k9=0.5, k10=140, endTime=1000)
runSim(k9=0.5, k10=150, endTime=1000)
REDUCED_SPECIES_FLUX_DCT = mkStoichiometryExpressionDct(rr)
REDUCED_SPECIES_FLUX_DCT
# +
kineticDct = {
J0: k0 * S0,
J1: k5 * S3 * S3,
J2: k6 * S2,
J3: k9 * S2 * S3,
J4: k10*S0, # Is this really mass action?
}
# -
# State equation is wrong for S2. Should be - S2*k6
REDUCED_STATE_DCT = {s: REDUCED_SPECIES_FLUX_DCT[s].subs(kineticDct) for s in REDUCED_SPECIES_FLUX_DCT.keys()}
REDUCED_STATE_DCT
sympy.solve(list(REDUCED_STATE_DCT.values()), list(REDUCED_STATE_DCT.keys()))
reducedModel = ODEModel(REDUCED_STATE_DCT)
# Fixed points
[f.valueDct for f in reducedModel.fixedPoints]
# Verify that these are fixed points
for fp in reducedModel.fixedPoints:
print([sympy.simplify(e.subs(fp.valueDct)) for e in REDUCED_STATE_DCT.values()])
# Look at the eigenvectors
if False:
for entry in reducedModel.fixedPoints[1].eigenEntries:
for vector in entry.vectors:
print(vector)
eigenvalues = [e.value for e in reducedModel.fixedPoints[1].eigenEntries]
# **Approach**
# 1. Find the fixed points.
# 1. For non-zero fixed points:
# 1. Find the eigenvalues in terms of each constant in turn, setting the other constants to 1.
# 1. Search for values of constants that result in a positive but near zero real value and significant non-zero imaginary part
# **Issue**
# 1. Eigenvalues have no relationship to the system behavior
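Step 3 of the approach, numerically: classify a fixed point from the dominant eigenvalue of the Jacobian. The matrix below is an illustrative 2-D example, not one of the model's Jacobians.

```python
import numpy as np

# Jacobian at a fixed point with a weakly unstable spiral (illustrative values)
J = np.array([[0.1, -1.0],
              [1.0,  0.1]])
eigvals = np.linalg.eigvals(J)
dominant = eigvals[np.argmax(eigvals.real)]
# A small positive real part with a non-zero imaginary part is the signature
# searched for above: slowly growing oscillations near the fixed point.
print(dominant.real, abs(dominant.imag))
```

For this matrix the eigenvalues are $0.1 \pm i$, so the linearisation predicts oscillations at angular frequency 1; the "Issue" noted above is that this local prediction did not match the observed behaviour of the full nonlinear system.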
# ## Finding Parameter Values
# Given an ODEModel, find values of parameters that result in oscillations at different frequencies.
c0 = rr.k0
c5 = rr.k5
c6 = rr.k6
c9 = rr.k9
c10 = rr.k10
for c10 in [150 + n for n in range(50)]:
subDct = {k0: c0, k5: c5, k6: c6, k9: c9, k10: c10}
model = ODEModel(REDUCED_STATE_DCT, subs=subDct)
entries = model.fixedPoints[1].eigenEntries
print((c10, [e.value for e in entries]))
# ## Plots
dcts = plotEigenInfo(RR, k10Range=[100 + 5*v for v in range(25)], k9=1, title="Dominant eigenvalue for 2nd fixed point.")
runSim(k10=100/200, startTime=0, endTime=10)
runSim(k10=160/200, startTime=0, endTime=10)
runSim(k9=1, k10=160/200, startTime=990, endTime=1000)
runSim(k9=1, k10=170/200, startTime=990, endTime=1000)
runSim(k9=1, k10=200/200, startTime=990, endTime=1000)
4.5 / (2*np.pi)
# +
def plot1(base=150, **kwargs):
k10Range=[base + 10*v for v in range(10)]
title = ""
for key, value in kwargs.items():
title += " %s: %3.2f " % (str(key), value)
    plotEigenInfo(rr, k10Range=k10Range, title=title, **kwargs)
plot1(k0=1, k5=1, k6=1, k9=1)
# -
plot1(k0=0.01, k5=0.1, k6=0.1, k9=0.1, base=100)
plot1(k0=1, k5=1, k6=1, k9=0.5, base=100)
runSim(k0=1, k5=1, k6=1, k9=0.5, k10=100, endTime=1000)
# Am I excluding the dominant eigenvalue? Do the plots for all eigenvalues.
| notebooks/Big Signal Analysis of Reaction Networks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
url_dataset = "./data/buddymove_holidayiq.csv"
data = pd.read_csv(url_dataset)
data
# Mean (expected-value) vector of the numeric features.
# Note: data.describe().mean() would average the summary statistics
# (count, std, quartiles, ...) rather than the raw columns.
mean_data = data.mean(numeric_only=True)
mean_data
# convert data to np.array
data = data.to_numpy()
# Centre each numeric column by subtracting the mean vector
# (column 0 is the user ID, so it is skipped)
for i in range(len(data)):
    for j in range(1, data.shape[1]):
        data[i][j] = data[i][j] - mean_data.iloc[j - 1]
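Centring is the first step of PCA; the rest of the decomposition can be sketched on synthetic data (the random array below stands in for the BuddyMove feature columns).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))             # stand-in for the numeric feature columns
Xc = X - X.mean(axis=0)                   # centring step, as in the loop above
cov = np.cov(Xc, rowvar=False)            # 4x4 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]         # sort descending by explained variance
components = eigvecs[:, order[:2]]        # top-2 principal directions
Z = Xc @ components                       # projected data, shape (100, 2)
print(Z.shape)
```

`eigh` is the right choice here because the covariance matrix is symmetric; the columns of `components` are orthonormal, so `Z` preserves as much variance as any 2-D linear projection can.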
| Class/PCA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM
from sklearn.model_selection import train_test_split
#Import data
df = pd.read_csv('df_temporal.csv',index_col=0)
df.set_index(['ID','time','t'],drop=True,inplace=True)
df.reset_index(drop=False,inplace=True)
df.fillna(0,inplace=True)
ids = df.ID.unique()
window = 5
t_test = []
t_pred = []
# -
for i in ids:
X = df[df.ID.values == i].iloc[:,range(3,len(df.columns))]
last = int(len(df[df.ID.values == i])/5)
X_train = X[:-last]
X_test = X[-last-window:]
in_tr = []
out_tr = []
for j in range(window,len(X_train)):
in_tr.append(np.array(X_train.iloc[j-window:j,:]))
out_tr.append(np.array(X_train.iloc[j,0]))
in_tr, out_tr = np.array(in_tr), np.array(out_tr)
in_te = []
out_te = []
for j in range(window,len(X_test)):
in_te.append(np.array(X_test.iloc[j-window:j,:]))
out_te.append(np.array(X_test.iloc[j,0]))
in_te, out_te = np.array(in_te), np.array(out_te)
    model = Sequential()
    model.add(LSTM(128, input_shape=in_tr.shape[1:], activation='relu', return_sequences=True))
    model.add(Dropout(0.2))
    # input_shape is only needed on the first layer; Keras infers it for the rest
    model.add(LSTM(128, activation='relu', return_sequences=False))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mse',optimizer='adam')
model.fit(in_tr, out_tr, epochs=100,verbose=0,batch_size=1)
t_test.extend(out_te.tolist())
predictions = model.predict(in_te)
for pred in range(len(predictions)):
t_pred.append(predictions.tolist()[pred][0])
from sklearn.metrics import mean_squared_error
MSE=mean_squared_error(t_test,t_pred)
print(MSE)
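The nested window-building loops above can be expressed without Python loops using numpy's `sliding_window_view` (numpy >= 1.20). The toy series below stands in for one ID's feature matrix, with the same window of 5.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

window = 5
series = np.arange(20, dtype=float).reshape(-1, 1)   # stand-in for one ID's features
# windows[j] holds rows j .. j+window-1; the target is the row right after it
windows = sliding_window_view(series, window, axis=0)[:-1]  # drop last: no target after it
windows = windows.transpose(0, 2, 1)                 # -> (samples, window, features)
targets = series[window:, 0]
print(windows.shape, targets.shape)
```

This yields the same `(samples, timesteps, features)` layout the LSTM expects, and avoids copying each window into a Python list.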
# t_pred = model.predict(in_te)
# plt.figure()
# plt.plot(t_pred,':',label='LSTM')
# plt.plot(t_test,'--',label='Actual')
# plt.legend()
| DMT A1 RNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: JuliaPro_v1.4.2-1 1.4.2
# language: julia
# name: juliapro_v1.4.2-1-1.4
# ---
using CSV
using Plots
using Random
using DataFrames
# +
function Measure_Eval1(system,qubo)
m = size(qubo)[2]
k = m^1
born_prob = system.*system
fitness = zeros(k)
probs = zeros(k)
bitstring = repeat(transpose(born_prob[2,:]),k) .> rand(k,m)
for i in 1:k
fitness[i] = transpose(bitstring[i,:])*qubo*bitstring[i,:]
sol_prob = 1
for j in 1:size(bitstring)[2]
sol_prob = sol_prob*born_prob[bitstring[i,j]+1,j]
end
probs[i] = sol_prob
end
return transpose(fitness)*probs/sum(probs)
end
function Measure_Eval2(system,qubo)
m = size(qubo)[2]
k = m^1
born_prob = system.*system
fitness = zeros(eltype(system))
probs = zeros(eltype(system))
bitstring = repeat(transpose(born_prob[2,:]),k) .> rand(k,m)
for i in 1:k
fitness[i] = transpose(bitstring[i,:])*qubo*bitstring[i,:]
sol_prob = 1
for j in 1:size(bitstring)[2]
sol_prob = sol_prob*born_prob[bitstring[i,j]+1,j]
end
probs[i] = sol_prob
end
return transpose(fitness)*probs/sum(probs)
end
function Measure_Eval3(system,qubo)
m = size(qubo)[2]
k = m^1
born_prob = system.*system
#fitness = zeros(k)
#probs = zeros(k)
fitness = 0.0
wft = 0.0
probs = 0.0
bitstring = repeat(transpose(born_prob[2,:]),k) .> rand(k,m)
for i in 1:k
fitness = transpose(bitstring[i,:])*qubo*bitstring[i,:]
sol_prob = 1
for j in 1:size(bitstring)[2]
sol_prob = sol_prob*born_prob[bitstring[i,j]+1,j]
end
wft += fitness*sol_prob
probs += sol_prob
end
#return transpose(fitness)*probs/sum(probs)
return wft/probs
end
function Rotate(T,system)
noq = size(system)[2]
new_system = zeros(2,noq)
for i in 1:noq
if rand() > 0.5
angle = T/100
else
angle = -T/100
end
U = [cos(angle) -sin(angle);sin(angle) cos(angle)]
new_system[:,i] = U*system[:,i]
end
return new_system
end
function NOT(system)
noq = size(system)[2]
new_system = zeros(2,noq)
for i in 1:noq
if rand() > 0.5
angle = pi
U = [cos(angle) -sin(angle);sin(angle) cos(angle)]
new_system[:,i] = U*system[:,i]
else
new_system[:,i] = system[:,i]
end
#U = [cos(angle) -sin(angle);sin(angle) cos(angle)]
#new_system[:,i] = U*system[:,i]
end
return new_system
end
function Final_measure(system)
born_prob = system.*system
solution_state = Int.(round.(born_prob[2,:]))
sol_prob = 1
for j in 1:size(solution_state)[1]
sol_prob = sol_prob*born_prob[solution_state[j]+1,j]
end
#return solution_state,round(sol_prob*100,digits=3)
return solution_state, sol_prob
end
function Replacement(current_systems,rotated_systems,fit_current,fit_rotated,T,rep_count)
delta_fit = fit_rotated - fit_current
probability = 1 ./ (1 .+ exp.(delta_fit))
randoms = transpose(rand(length(probability)))
for i in 1:length(probability)
if probability[i] > 0.5 && probability[i] > randoms[i]
#if probability[i] > 0.5
current_systems[i] = rotated_systems[i]
#println("System $i has been replaced")
rep_count += 1
end
end
return current_systems, delta_fit, randoms, probability, rep_count
#return current_systems, replacement_count, delta_fit, randoms, probability
end
function Mutation(replaced_systems,mutated_systems,fit_replaced,fit_mutated,T,mut_count)
delta_fit = fit_mutated - fit_replaced
probability = 1 ./ (1 .+ exp.(delta_fit))
randoms = transpose(rand(length(probability)))
for i in 1:length(probability)
#if probability[i] > 0.9 && probability[i] > randoms[i]
if probability[i] > 0.5
replaced_systems[i] = mutated_systems[i]
#println("System $i has been mutated")
mut_count += 1
end
end
return replaced_systems, delta_fit, randoms, probability, mut_count
#return current_systems, replacement_count, delta_fit, randoms, probability
end
function Migration(systems,fits,T,mig_count)
min_fit_index = argmin(fits)[2]
best_system = systems[min_fit_index]
diff_fit = transpose(repeat([minimum(fits)],length(fits))) - fits
probability = 1 ./ (1 .+ exp.(diff_fit))
randoms = transpose(rand(length(probability)))
for i in 1:length(probability)
if probability[i] > 0.5 && probability[i] > randoms[i]
#if probability[i] > 0.5
systems[i] = best_system
#println("System $min_fit_index is migrated to system $i ")
mig_count += 1
end
end
return systems, diff_fit, randoms, probability, mig_count
#return systems,migration_count, diff_fit, randoms, probability
end
function Best(results,qubo)
X = results[argmax(results)][1]
prob = results[argmax(results)][2]
E = transpose(X)*qubo*X
return join(X),E,prob
end
# -
qubo = CSV.read("/home/aniruddha/My_Code/Workstation/Projects/Github Projects/QGA_SQA/qubos/iris_qubo.csv")
qubo = convert(Array,qubo[:,2:size(qubo)[2]])
# +
#Random.seed!(123) ## using random seed
print("Algorithm is on progress. Please wait...\n\n")
log_df = DataFrame(Bitstring=[],Energy=[],Probability=[],Time=[],Iteration=[],Replacement=[],Mutation=[],Migration=[])
for run in 1:20
n = size(qubo)[2]
q = ones(2,n)/sqrt(2)
current_systems = repeat([q],n)
iteration,T = 0,100
iteration_count = [iteration]
temps =[]
Fits = zeros(1,n)
rep_count,mut_count,mig_count = 0,0,0
beta_squared = []
detail = zeros(1,n)
fitness_detail = zeros(1,n)
replacement_records, migration_records,mutation_records = [],[],[]
sys_prob_log = zeros(n)
all_sys_prob_log = zeros(1,n)
Start_time = time()
while any(sys_prob_log .< 0.995)==true && T > 1
#println("\n\n############################################################## Iteration no ",iteration)
#println("Current temperature is-----------------------------------------",T)
fit_current = transpose([Measure_Eval1(system,qubo) for system in current_systems])
#println("fit current:",fit_current)
rotated_systems = [Rotate(T,system) for system in current_systems]
fit_rotated = transpose([Measure_Eval1(system,qubo) for system in rotated_systems])
replaced_systems,del_fit_rep,ran_rep,prob_rep,rep_count = Replacement(current_systems,rotated_systems,fit_current,fit_rotated,T,rep_count)
fit_replaced = transpose([Measure_Eval1(system,qubo) for system in replaced_systems])
mutated_systems = [NOT(system) for system in current_systems]
fit_mutated = transpose([Measure_Eval1(system,qubo) for system in mutated_systems])
adapted_systems,del_fit_adp,ran_adp,prob_adp,mut_count = Mutation(replaced_systems,mutated_systems,fit_replaced,fit_mutated,T,mut_count)
fit_adapted = transpose([Measure_Eval1(system,qubo) for system in adapted_systems])
migrated_systems,diff_fit_mig,ran_mig,prob_mig,mig_count = Migration(adapted_systems,fit_adapted,T,mig_count)
fit_migrated = transpose([Measure_Eval1(system,qubo) for system in migrated_systems])
sys_prob_log = [Final_measure(system)[2] for system in migrated_systems]
#all_sys_prob_log = [all_sys_prob_log;transpose(sys_prob_log)]
#println("system probabilities:",sys_prob_log)
#println("fit Migrated:",fit_migrated)
#Fits = [Fits;fit_migrated]
###########################################
#fitness_detail = [fitness_detail;transpose(repeat([iteration],n));fit_current;fit_rotated;fit_replaced;fit_migrated]
#detail = [detail;transpose(repeat([iteration],n));del_fit_rep;ran_rep;prob_rep;diff_fit_mig;ran_mig;prob_mig]
###########################################
current_systems = migrated_systems
append!(temps,T)
T = T*0.99
iteration += 1
end
End_time = time()
results = [Final_measure(system) for system in current_systems]
best_system,best_energy,best_prob = Best(results,qubo)
push!(log_df,[best_system,best_energy,round(best_prob*100,digits=2),End_time-Start_time,iteration,rep_count,mut_count,mig_count])
end
println("Number of unique results:",length(unique(log_df.Bitstring)),"\n\n")
println(log_df)
Dict([(i,count(x->x==i,log_df.Bitstring)) for i in unique(log_df.Bitstring)])
# -
| Julia qga_qsa/QSA_multi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/HiteshAI/Stock-Price-Prediction/blob/master/StockPrediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="6lBJnvzJpb5N" colab_type="code" colab={}
from pandas_datareader import data
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
import urllib.request, json
import os
import numpy as np
import tensorflow as tf # This code has been tested with TensorFlow 1.6
from sklearn.preprocessing import MinMaxScaler
# + id="s4gvCW4ppb5z" colab_type="code" colab={}
# !pip install pandas_datareader
import pandas as pd
# + id="-QyzZvLWpb6C" colab_type="code" colab={}
from pandas_datareader import data
# + id="YCvNu6zamJVp" colab_type="code" colab={}
# !cd /root/
# + id="8npDgOH_pb6h" colab_type="code" colab={}
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
import urllib.request, json
import os
import numpy as np
import tensorflow as tf # This code has been tested with TensorFlow 1.6
from sklearn.preprocessing import MinMaxScaler
# + id="bgsiy8aPpb6r" colab_type="code" outputId="0375dd34-f09a-4320-b59a-5bc6a2969db8" colab={"base_uri": "https://localhost:8080/", "height": 34}
df = pd.read_csv(os.path.join('/root/hpq.us.txt'),delimiter=',',usecols=['Date','Open','High','Low','Close'])
print('Loaded data from the Kaggle repository')
# + id="V5Ad082gpb66" colab_type="code" colab={}
df = df.sort_values('Date')
# + id="noKJWZcppb7H" colab_type="code" outputId="2eabc65d-eea6-40c2-f67d-cb4bc42d5b7a" colab={"base_uri": "https://localhost:8080/", "height": 204}
df.head()
# + id="wJim3ayQpb7e" colab_type="code" outputId="87af1cdf-d479-49c0-9e00-dc9b44fcb899" colab={"base_uri": "https://localhost:8080/", "height": 597}
plt.figure(figsize = (18,9))
plt.plot(range(df.shape[0]),(df['Low']+df['High'])/2.0)
plt.xticks(range(0,df.shape[0],500),df['Date'].loc[::500],rotation=45)
plt.xlabel('Date',fontsize=18)
plt.ylabel('Mid Price',fontsize=18)
plt.show()
# + id="MDP09YJHpb79" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 85} outputId="5eb67655-ca90-43c7-f73f-18f6ecb2056a"
high_prices = df.loc[:,'High'].to_numpy()
low_prices = df.loc[:,'Low'].to_numpy()
mid_prices = (high_prices+low_prices)/2.0
# + id="k65yGvbopb8U" colab_type="code" colab={}
train_data = mid_prices[:11000]
test_data = mid_prices[11000:]
# + id="O5_qVX6lpb8k" colab_type="code" colab={}
scaler = MinMaxScaler()
train_data = train_data.reshape(-1,1)
test_data = test_data.reshape(-1,1)
# + id="rB_Fjl3cpb8z" colab_type="code" colab={}
smoothing_window_size = 2500
for di in range(0,10000,smoothing_window_size):
scaler.fit(train_data[di:di+smoothing_window_size,:])
train_data[di:di+smoothing_window_size,:] = scaler.transform(train_data[di:di+smoothing_window_size,:])
# You normalize the last bit of remaining data
scaler.fit(train_data[di+smoothing_window_size:,:])
train_data[di+smoothing_window_size:,:] = scaler.transform(train_data[di+smoothing_window_size:,:])
# + id="4KWZh2iTpb8-" colab_type="code" colab={}
# Reshape both train and test data
train_data = train_data.reshape(-1)
# Normalize test data
test_data = scaler.transform(test_data).reshape(-1)
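The per-window fitting above is the key trick: each 2500-point window is scaled to [0, 1] on its own min/max, so early low-priced data is not flattened by later highs. A minimal NumPy sketch of the same idea (the array and window size here are made-up examples, not the notebook's data):

```python
import numpy as np

def windowed_minmax(x, window):
    """Scale each consecutive window of `x` to [0, 1] independently."""
    out = x.astype(float).copy()
    for start in range(0, len(x), window):
        seg = out[start:start + window]
        lo, hi = seg.min(), seg.max()
        # Guard against a constant window, which would divide by zero
        out[start:start + window] = (seg - lo) / (hi - lo) if hi > lo else 0.0
    return out

prices = np.array([10., 20., 30., 5., 15., 25.])
scaled = windowed_minmax(prices, window=3)  # each half scaled on its own min/max
```

Note that each window hits both 0 and 1, regardless of the overall price level.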
# + id="EtZWlMJepb9L" colab_type="code" colab={}
EMA = 0.0
gamma = 0.1
for ti in range(11000):
EMA = gamma*train_data[ti] + (1-gamma)*EMA
train_data[ti] = EMA
# Used for visualization and test purposes
all_mid_data = np.concatenate([train_data,test_data],axis=0)
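The loop above is first-order exponential smoothing: EMA_t = gamma * x_t + (1 - gamma) * EMA_{t-1}. A standalone sketch of that recurrence (the series and gamma below are illustrative):

```python
def exponential_smooth(series, gamma=0.1):
    """Exponentially smooth a series: ema_t = gamma*x_t + (1-gamma)*ema_{t-1}, ema_0 = 0."""
    ema, out = 0.0, []
    for x in series:
        ema = gamma * x + (1 - gamma) * ema
        out.append(ema)
    return out

# With gamma = 0.5 and a constant series, the EMA converges toward the series value
smoothed = exponential_smooth([10.0, 10.0, 10.0], gamma=0.5)  # [5.0, 7.5, 8.75]
```

Smaller gamma means heavier smoothing and a slower response to new values.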
# + id="eNII6S7epb9R" colab_type="code" outputId="5e9cdeb7-ba04-4e6a-ec4c-73d7d9f9bd1d" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Running average (EMA) prediction
window_size = 100
N = train_data.size
run_avg_predictions = []
run_avg_x = []
mse_errors = []
running_mean = 0.0
run_avg_predictions.append(running_mean)
decay = 0.5
for pred_idx in range(1,N):
date = df.loc[pred_idx,'Date']
running_mean = running_mean*decay + (1.0-decay)*train_data[pred_idx-1]
run_avg_predictions.append(running_mean)
mse_errors.append((run_avg_predictions[-1]-train_data[pred_idx])**2)
run_avg_x.append(date)
print('MSE error for EMA averaging: %.5f'%(0.5*np.mean(mse_errors)))
# + id="97vlUe0xpb9h" colab_type="code" outputId="11a14833-9c28-46c7-e55f-f91b701153b1" colab={"base_uri": "https://localhost:8080/", "height": 551}
plt.figure(figsize = (18,9))
plt.plot(range(df.shape[0]),all_mid_data,color='b',label='True')
plt.plot(range(0,N),run_avg_predictions,color='orange', label='Prediction')
#plt.xticks(range(0,df.shape[0],50),df['Date'].loc[::50],rotation=45)
plt.xlabel('Date')
plt.ylabel('Mid Price')
plt.legend(fontsize=18)
plt.show()
# + id="w6xYsYjMpb9z" colab_type="code" outputId="58bad84d-bc66-47a5-b3d0-de1d504b62d1" colab={"base_uri": "https://localhost:8080/", "height": 68}
print(train_data[0:5])
print(train_data[5:10])
print(train_data[10:15])
# + id="3O_WWTBQpb-O" colab_type="code" colab={}
class DataGeneratorSeq(object):
def __init__(self,prices,batch_size,num_unroll):
self._prices = prices
print(prices[0:20])
self._prices_length = len(self._prices) - num_unroll
self._batch_size = batch_size
self._num_unroll = num_unroll
self._segments = self._prices_length //self._batch_size
self._cursor = [offset * self._segments for offset in range(self._batch_size)]
def next_batch(self):
batch_data = np.zeros((self._batch_size),dtype=np.float32)
batch_labels = np.zeros((self._batch_size),dtype=np.float32)
# print("Started sending one batch............")
for b in range(self._batch_size):
if self._cursor[b]+1>=self._prices_length:
#self._cursor[b] = b * self._segments
self._cursor[b] = np.random.randint(0,(b+1)*self._segments)
batch_data[b] = self._prices[self._cursor[b]]
batch_labels[b]= self._prices[self._cursor[b]+np.random.randint(0,5)]
# print(self._cursor[b]," ",self._prices[self._cursor[b]])
self._cursor[b] = (self._cursor[b]+1)%self._prices_length
# print("Completed sending one batch............")
return batch_data,batch_labels
def unroll_batches(self):
unroll_data,unroll_labels = [],[]
init_data, init_label = None,None
for ui in range(self._num_unroll):
data, labels = self.next_batch()
unroll_data.append(data)
unroll_labels.append(labels)
return unroll_data, unroll_labels
def reset_indices(self):
for b in range(self._batch_size):
self._cursor[b] = np.random.randint(0,min((b+1)*self._segments,self._prices_length-1))
# + id="XJjTm77Opb-e" colab_type="code" outputId="8faea4b0-4e24-4255-c850-94a5331e1242" colab={"base_uri": "https://localhost:8080/", "height": 595}
dg = DataGeneratorSeq(train_data,5,5)
u_data, u_labels = dg.unroll_batches()
for ui,(dat,lbl) in enumerate(zip(u_data,u_labels)):
print('\n\nUnrolled index %d'%ui)
dat_ind = dat
lbl_ind = lbl
print('\tInputs: ',dat )
print('\n\tOutput:',lbl)
# + id="ckejuQ5mpb_A" colab_type="code" colab={}
D = 1 # Dimensionality of the data. Since your data is 1-D this would be 1
num_unrollings = 50 # Number of time steps you look into the future.
batch_size = 500 # Number of samples in a batch
num_nodes = [200,200,150] # Number of hidden nodes in each layer of the deep LSTM stack we're using
n_layers = len(num_nodes) # number of layers
dropout = 0.2 # dropout amount
# + id="Q8NwEr1Ypb_L" colab_type="code" colab={}
# You unroll the input over time defining placeholders for each time step
train_inputs, train_outputs = [],[]
for ui in range(num_unrollings):
train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size,D],name='train_inputs_%d'%ui))
train_outputs.append(tf.placeholder(tf.float32, shape=[batch_size,1], name = 'train_outputs_%d'%ui))
# + id="uc-ZeNHwpb_Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 275} outputId="69c5f5cb-26a8-461b-daf3-95abf03021d3"
lstm_cells = [
tf.contrib.rnn.LSTMCell(num_units=num_nodes[li],
state_is_tuple=True,
initializer= tf.contrib.layers.xavier_initializer()
)
for li in range(n_layers)]
drop_lstm_cells = [tf.contrib.rnn.DropoutWrapper(
lstm, input_keep_prob=1.0,output_keep_prob=1.0-dropout, state_keep_prob=1.0-dropout
) for lstm in lstm_cells]
drop_multi_cell = tf.contrib.rnn.MultiRNNCell(drop_lstm_cells)
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
w = tf.get_variable('w',shape=[num_nodes[-1], 1], initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('b',initializer=tf.random_uniform([1],-0.1,0.1))
# + id="hf7VwZiDpb_c" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 190} outputId="026cbe1a-c994-4577-b9f3-452256ed4c0f"
# Create cell state and hidden state variables to maintain the state of the LSTM
c, h = [],[]
initial_state = []
for li in range(n_layers):
c.append(tf.Variable(tf.zeros([batch_size, num_nodes[li]]), trainable=False))
h.append(tf.Variable(tf.zeros([batch_size, num_nodes[li]]), trainable=False))
initial_state.append(tf.contrib.rnn.LSTMStateTuple(c[li], h[li]))
# Do several tensor transformations, because the function dynamic_rnn requires the input to be
# in a specific format. Read more at: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
all_inputs = tf.concat([tf.expand_dims(t,0) for t in train_inputs],axis=0)
# all_outputs is [seq_length, batch_size, num_nodes]
all_lstm_outputs, state = tf.nn.dynamic_rnn(
drop_multi_cell, all_inputs, initial_state=tuple(initial_state),
time_major = True, dtype=tf.float32)
all_lstm_outputs = tf.reshape(all_lstm_outputs, [batch_size*num_unrollings,num_nodes[-1]])
all_outputs = tf.nn.xw_plus_b(all_lstm_outputs,w,b)
split_outputs = tf.split(all_outputs,num_unrollings,axis=0)
# + id="_Sv_P_1cpb_k" colab_type="code" outputId="7adcd520-c0e4-4e08-e7e6-c391eaa39d1d" colab={"base_uri": "https://localhost:8080/", "height": 156}
print('Defining training Loss')
loss = 0.0
with tf.control_dependencies([tf.assign(c[li], state[li][0]) for li in range(n_layers)]+
[tf.assign(h[li], state[li][1]) for li in range(n_layers)]):
for ui in range(num_unrollings):
loss += tf.reduce_mean(0.5*(split_outputs[ui]-train_outputs[ui])**2)
print('Learning rate decay operations')
global_step = tf.Variable(0, trainable=False)
inc_gstep = tf.assign(global_step,global_step + 1)
tf_learning_rate = tf.placeholder(shape=None,dtype=tf.float32)
tf_min_learning_rate = tf.placeholder(shape=None,dtype=tf.float32)
learning_rate = tf.maximum(
tf.train.exponential_decay(tf_learning_rate, global_step, decay_steps=1, decay_rate=0.5, staircase=True),
tf_min_learning_rate)
# Optimizer.
print('TF Optimization operations')
optimizer = tf.train.AdamOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimizer = optimizer.apply_gradients(
zip(gradients, v))
print('\tAll done')
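The schedule above amounts to lr = max(lr0 * 0.5**global_step, lr_min), where global_step is only incremented when validation loss stalls. A framework-free sketch of that rule (the rates mirror the values fed in later, but are assumptions here):

```python
def decayed_learning_rate(base_lr, min_lr, global_step, decay_rate=0.5):
    """Staircase exponential decay with a floor, as in tf.train.exponential_decay + tf.maximum."""
    return max(base_lr * decay_rate ** global_step, min_lr)

lr_start = decayed_learning_rate(0.0001, 0.000001, 0)    # no decay steps yet
lr_later = decayed_learning_rate(0.0001, 0.000001, 10)   # clipped at the floor
```

After roughly seven halvings the rate hits the floor and stays there.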
# + id="ozHyl7ySpb_1" colab_type="code" outputId="875710ce-80a9-4fb2-a8e6-29660544b9ae" colab={"base_uri": "https://localhost:8080/", "height": 51}
print('Defining prediction related TF functions')
sample_inputs = tf.placeholder(tf.float32, shape=[1,D])
# Maintaining LSTM state for prediction stage
sample_c, sample_h, initial_sample_state = [],[],[]
for li in range(n_layers):
sample_c.append(tf.Variable(tf.zeros([1, num_nodes[li]]), trainable=False))
sample_h.append(tf.Variable(tf.zeros([1, num_nodes[li]]), trainable=False))
initial_sample_state.append(tf.contrib.rnn.LSTMStateTuple(sample_c[li],sample_h[li]))
reset_sample_states = tf.group(*[tf.assign(sample_c[li],tf.zeros([1, num_nodes[li]])) for li in range(n_layers)],
*[tf.assign(sample_h[li],tf.zeros([1, num_nodes[li]])) for li in range(n_layers)])
sample_outputs, sample_state = tf.nn.dynamic_rnn(multi_cell, tf.expand_dims(sample_inputs,0),
initial_state=tuple(initial_sample_state),
time_major = True,
dtype=tf.float32)
with tf.control_dependencies([tf.assign(sample_c[li],sample_state[li][0]) for li in range(n_layers)]+
[tf.assign(sample_h[li],sample_state[li][1]) for li in range(n_layers)]):
sample_prediction = tf.nn.xw_plus_b(tf.reshape(sample_outputs,[1,-1]), w, b)
print('\tAll done')
# + id="awaiMKC9pcAR" colab_type="code" outputId="b03aeb88-5e12-4d8c-9209-af4427e8a21c" colab={"base_uri": "https://localhost:8080/", "height": 918}
epochs = 15
valid_summary = 1 # Interval (in epochs) at which you make test predictions
n_predict_once = 50 # Number of steps you continuously predict for
train_seq_length = train_data.size # Full length of the training data
train_mse_ot = [] # Accumulate Train losses
test_mse_ot = [] # Accumulate Test loss
predictions_over_time = [] # Accumulate predictions
session = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Used for decaying learning rate
loss_nondecrease_count = 0
loss_nondecrease_threshold = 2 # If the test error hasn't decreased for this many consecutive checks, decrease the learning rate
print('Initialized')
average_loss = 0
# Define data generator
data_gen = DataGeneratorSeq(train_data,batch_size,num_unrollings)
x_axis_seq = []
# Points you start your test predictions from
test_points_seq = np.arange(11000,12000,50).tolist()
for ep in range(epochs):
# ========================= Training =====================================
for step in range(train_seq_length//batch_size):
u_data, u_labels = data_gen.unroll_batches()
feed_dict = {}
for ui,(dat,lbl) in enumerate(zip(u_data,u_labels)):
feed_dict[train_inputs[ui]] = dat.reshape(-1,1)
feed_dict[train_outputs[ui]] = lbl.reshape(-1,1)
feed_dict.update({tf_learning_rate: 0.0001, tf_min_learning_rate:0.000001})
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
# ============================ Validation ==============================
if (ep+1) % valid_summary == 0:
average_loss = average_loss/(valid_summary*(train_seq_length//batch_size))
# The average loss
if (ep+1)%valid_summary==0:
print('Average loss at step %d: %f' % (ep+1, average_loss))
train_mse_ot.append(average_loss)
average_loss = 0 # reset loss
predictions_seq = []
mse_test_loss_seq = []
# ===================== Updating State and Making Predictions ========================
for w_i in test_points_seq:
mse_test_loss = 0.0
our_predictions = []
if (ep+1)-valid_summary==0:
# Only calculate x_axis values in the first validation epoch
x_axis=[]
# Feed in the recent past behavior of stock prices
# to make predictions from that point onwards
for tr_i in range(w_i-num_unrollings+1,w_i-1):
current_price = all_mid_data[tr_i]
feed_dict[sample_inputs] = np.array(current_price).reshape(1,1)
_ = session.run(sample_prediction,feed_dict=feed_dict)
feed_dict = {}
current_price = all_mid_data[w_i-1]
feed_dict[sample_inputs] = np.array(current_price).reshape(1,1)
# Make predictions for this many steps
# Each prediction uses the previous prediction as its current input
for pred_i in range(n_predict_once):
pred = session.run(sample_prediction,feed_dict=feed_dict)
our_predictions.append(np.asscalar(pred))
feed_dict[sample_inputs] = np.asarray(pred).reshape(-1,1)
if (ep+1)-valid_summary==0:
# Only calculate x_axis values in the first validation epoch
x_axis.append(w_i+pred_i)
mse_test_loss += 0.5*(pred-all_mid_data[w_i+pred_i])**2
session.run(reset_sample_states)
predictions_seq.append(np.array(our_predictions))
mse_test_loss /= n_predict_once
mse_test_loss_seq.append(mse_test_loss)
if (ep+1)-valid_summary==0:
x_axis_seq.append(x_axis)
current_test_mse = np.mean(mse_test_loss_seq)
# Learning rate decay logic
if len(test_mse_ot)>0 and current_test_mse > min(test_mse_ot):
loss_nondecrease_count += 1
else:
loss_nondecrease_count = 0
if loss_nondecrease_count > loss_nondecrease_threshold :
session.run(inc_gstep)
loss_nondecrease_count = 0
print('\tDecreasing learning rate by 0.5')
test_mse_ot.append(current_test_mse)
print('\tTest MSE: %.5f'%np.mean(mse_test_loss_seq))
predictions_over_time.append(predictions_seq)
print('\tFinished Predictions')
# + id="CD3mIIklpcCP" colab_type="code" outputId="7f947c01-c004-4778-9acf-645b25d42f20" colab={"base_uri": "https://localhost:8080/", "height": 1000}
best_prediction_epoch = 14 # replace this with the epoch that you got the best results when running the plotting code
plt.figure(figsize = (18,18))
plt.subplot(2,1,1)
plt.plot(range(df.shape[0]),all_mid_data,color='b')
# Plotting how the predictions change over time
# Plot older predictions with low alpha and newer predictions with high alpha
start_alpha = 0.25
alpha = np.arange(start_alpha,1.1,(1.0-start_alpha)/len(predictions_over_time[::3]))
for p_i,p in enumerate(predictions_over_time[::3]):
for xval,yval in zip(x_axis_seq,p):
plt.plot(xval,yval,color='r',alpha=alpha[p_i])
plt.title('Evolution of Test Predictions Over Time',fontsize=18)
plt.xlabel('Date',fontsize=18)
plt.ylabel('Mid Price',fontsize=18)
plt.xlim(11000,12500)
plt.subplot(2,1,2)
# Predicting the best test prediction you got
plt.plot(range(df.shape[0]),all_mid_data,color='b')
for xval,yval in zip(x_axis_seq,predictions_over_time[best_prediction_epoch]):
plt.plot(xval,yval,color='r')
plt.title('Best Test Predictions Over Time',fontsize=18)
plt.xlabel('Date',fontsize=18)
plt.ylabel('Mid Price',fontsize=18)
plt.xlim(11000,12500)
plt.show()
# + id="ArQGuHsPpcCT" colab_type="code" colab={}
| StockPrediction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Object Oriented Programming Challenge - Solution
#
# For this challenge, create a bank account class that has two attributes:
#
# * owner
# * balance
#
# and two methods:
#
# * deposit
# * withdraw
#
# As an added requirement, withdrawals may not exceed the available balance.
#
# Instantiate your class, make several deposits and withdrawals, and test to make sure the account can't be overdrawn.
class Account:
def __init__(self,owner,balance=0):
self.owner = owner
self.balance = balance
def __str__(self):
return f'Account owner: {self.owner}\n Account balance: ${self.balance}'
def deposit(self,dep_amt):
self.balance += dep_amt
print('Deposit Accepted')
def withdraw(self,wd_amt):
if self.balance >= wd_amt:
self.balance -= wd_amt
print('Withdrawal Accepted')
else:
print('Funds Unavailable!')
# 1. Instantiate the class
acct1 = Account('Jose',100)
# 2. Print the object
print(acct1)
# 3. Show the account owner attribute
acct1.owner
# 4. Show the account balance attribute
acct1.balance
# 5. Make a series of deposits and withdrawals
acct1.deposit(50)
acct1.withdraw(75)
# 6. Make a withdrawal that exceeds the available balance
acct1.withdraw(500)
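The manual checks above can be turned into automatic assertions on the balance. A minimal sketch (this variant returns booleans from `withdraw` instead of printing, purely to make it testable — an addition here, not part of the challenge solution):

```python
class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, dep_amt):
        self.balance += dep_amt

    def withdraw(self, wd_amt):
        # Reject withdrawals that would overdraw the account
        if self.balance >= wd_amt:
            self.balance -= wd_amt
            return True
        return False

acct = Account('Jose', 100)
acct.deposit(50)              # balance: 150
ok = acct.withdraw(75)        # accepted, balance: 75
blocked = acct.withdraw(500)  # rejected, balance unchanged
```

The final balance proves the overdraw was blocked rather than applied.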
# ## Good job!
| 05-Object Oriented Programming/05-OOP Challenge - Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import math
class vector:
def __init__(self, x=0, y=0, z=0):
self.x = x
self.y = y
self.z = z
def __mul__(self, k):
return vector(self.x * k, self.y * k, self.z * k)
def __add__(self, other):
s, o = self, other
return vector(s.x + o.x, s.y + o.y, s.z + o.z)
def __sub__(self, other):
return self + other * -1
# dot product
def __mod__(self, other):
s, o = self, other
return (s.x * o.x) + (s.y * o.y) + (s.z * o.z)
# modulus
def __abs__(self):
return math.sqrt(self % self)
# cross product
def __xor__(self, other):
s, o = self, other
x = (s.y * o.z) - (s.z * o.y)
y = (s.z * o.x) - (s.x * o.z)
z = (s.x * o.y) - (s.y * o.x)
return vector(x, y, z)
def read_vector():
return vector(*map(float, input().split()))
A = read_vector()
B = read_vector()
C = read_vector()
D = read_vector()
X = (B - A) ^ (C - B)
Y = (C - B) ^ (D - C)
phi = math.degrees(math.acos(X % Y / math.sqrt((X % X) * (Y % Y))))
print('%.2f' % phi)
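The same computation is more compact with NumPy's built-in cross and dot products; the sample coordinates below are invented to yield a known 135° angle:

```python
import math
import numpy as np

def torsional_angle(a, b, c, d):
    """Angle between plane ABC and plane BCD, in degrees."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    x = np.cross(b - a, c - b)  # normal to plane ABC
    y = np.cross(c - b, d - c)  # normal to plane BCD
    cos_phi = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return math.degrees(math.acos(cos_phi))

phi = torsional_angle((0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 1))
```

Here the two plane normals are (0, 0, 1) and (1, 0, −1), whose angle is 135°.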
# +
# Enter your code here. Read input from STDIN. Print output to STDOUT
import math
class coords:
def __init__(self,A):
self.x = A[0]
self.y = A[1]
self.z = A[2]
def __sub__(self,B):
AB_x = self.x - B.x
AB_y = self.y - B.y
AB_z = self.z - B.z
AB = [AB_x,AB_y,AB_z]
return coords(AB)
def cross(A,B):
AB_x = A.y * B.z - A.z * B.y
AB_y = A.z * B.x - A.x * B.z
AB_z = A.x * B.y - A.y * B.x
AB = [AB_x,AB_y,AB_z]
return coords(AB)
def dot(A,B):
return A.x * B.x + A.y * B.y + A.z * B.z
def mod(A):
return math.sqrt(A.x **2 + A.y **2 + A.z **2)
def printc(A):
print("{} {} {}".format(A.x, A.y, A.z))
A = coords(list(map(float,input().split())))
B = coords(list(map(float,input().split())))
C = coords(list(map(float,input().split())))
D = coords(list(map(float,input().split())))
AB = A - B
BC = B - C
CD = C - D
X = cross(AB,BC)
Y = cross(BC,CD)
if mod(X) and mod(Y):
    phi = math.acos(dot(X,Y)/(mod(X)*mod(Y)))
    print("{:0.2f}".format(phi*180/math.pi))
else:
    print("90.00")
| notebooks/Find the Torsional Angle.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploring Quantum Classification Library
# This notebook offers a high-level walk-through of solving classification problems using the [quantum machine learning library](https://docs.microsoft.com/azure/quantum/user-guide/libraries/machine-learning/) that is part of the Microsoft Quantum Development Kit.
# It does not require any familiarity with the basics of quantum computing to follow.
#
# The companion Q# notebook [Inside Quantum Classifiers](./InsideQuantumClassifiers.ipynb) offers a deep dive in the internals of a simple quantum classifier and several exercises on implementing it from scratch.
#
# > <font color="red">This notebook contains some heavy computations, and might take some time to execute.
# Precomputed cell outputs are included - you might want to study these before you opt to re-run the cells.</font>
# ## Setup
#
# To start with, execute this cell using Ctrl+Enter (or ⌘+Enter on a Mac). This is necessary to prepare the environment, import the Q# libraries and operations we'll use later in the tutorial, and configure the plotting routines. If any Python packages are reported as missing, install them.
# +
import math
import random
from typing import List
import numpy as np
from matplotlib import pyplot
pyplot.style.use('ggplot')
import warnings
warnings.simplefilter('ignore')
# %matplotlib inline
# Plotting configuration
cases = [(0, 0), (0, 1), (1, 1), (1, 0)]
markers = [
'.' if actual == classified else 'X'
for (actual, classified) in cases
]
colors = ['blue', 'blue', 'red', 'red']
# Q# configuration and necessary imports
import qsharp
import Microsoft.Quantum.Kata.QuantumClassification as QuantumClassification
print()
print("Setup complete!")
# -
# ## The Data
# The first step of solving a classification problem is preparing the training and validation datasets.
#
# > In the first part of the tutorial we will use artificially generated data, in which the two classes can be separated using two lines that go through the (0, 0) point.
# This mirrors the data used in the [deep dive tutorial](./InsideQuantumClassifiers.ipynb).
# A real classification problem would load real data instead, but this choice of artificial data allows us to construct a simple quantum classifier by hand, which will be helpful for a deep dive into the classifier structure.
# +
def generate_data (samples_number : int, separation_angles : List[float]):
"""Generates data with 2 features and 2 classes separable by a line that goes through the origin"""
features = []
labels = []
for i in range(samples_number):
sample = [random.random(), random.random()]
angle = math.atan2(sample[1], sample[0])
features.append(sample)
labels.append(0 if angle < separation_angles[0] or angle > separation_angles[1] else 1)
data = { 'Features' : features, 'Labels' : labels }
return data
# generate training and validation data using the same pair of separation angles
separation_angles = [math.pi / 6, math.pi / 3]
training_data = generate_data(150, separation_angles)
validation_data = generate_data(50, separation_angles)
print("Training and validation data generated")
# +
def plot_data (features : list, actual_labels : list, classified_labels : list = None, extra_lines : list = None):
"""Plots the data, labeling it with actual labels if there are no classification results provided,
and with the classification results (indicating their correctness) if they are provided.
"""
samples = np.array(features)
pyplot.figure(figsize=(8, 8))
for (idx_case, ((actual, classified), marker, color)) in enumerate(zip(cases, markers, colors)):
mask = np.logical_and(np.equal(actual_labels, actual),
np.equal(actual if classified_labels is None else classified_labels, classified))
if not np.any(mask): continue
pyplot.scatter(
samples[mask, 0], samples[mask, 1],
label = f"Class {actual}" if classified_labels is None else f"Was {actual}, classified {classified}",
marker = marker, s = 300, c = [color],
)
# Add the lines to show the true classes boundaries, if provided
if extra_lines is not None:
for line in extra_lines:
pyplot.plot(line[0], line[1], color = 'gray')
pyplot.legend()
def separation_endpoint (angle : float) -> (float, float):
if (angle < math.pi / 4):
return (1, math.tan(angle))
return (1/math.tan(angle), 1)
# Set up lines that show class separation
separation_lines = list(zip([(0,0), (0,0)], list(map(separation_endpoint, separation_angles))))
extra_lines = []
for line in separation_lines:
extra_lines.append([[line[0][0], line[1][0]], [line[0][1], line[1][1]]])
plot_data(training_data['Features'], training_data['Labels'], extra_lines = extra_lines)
# -
# ## Training
#
# Now that the data is ready, we can get to the interesting part: training the model!
#
# > This code calls Q# operation `TrainLinearlySeparableModel` defined in the `Backend.qs` file.
# This operation is a wrapper for the library operation [TrainSequentialClassifier](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.machinelearning.trainsequentialclassifier).
# It provides all "quantum" details, such as the model structure.
# We will take a closer look at these details in the [deep dive tutorial](./InsideQuantumClassifiers.ipynb).
(parameters, bias) = QuantumClassification.TrainLinearlySeparableModel.simulate(
trainingVectors = training_data['Features'],
trainingLabels = training_data['Labels'],
initialParameters = [[1.0], [2.0]]
)
# ## Validation
#
# Let's validate our training results on a different data set, generated with the same distribution.
#
# > This code calls Q# operation `ClassifyLinearlySeparableModel` defined in the `Backend.qs` file.
# This operation is a wrapper for the library operations [EstimateClassificationProbabilities](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.machinelearning.estimateclassificationprobabilities)
# and [InferredLabels](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.machinelearning.inferredlabels).
# Again, we will take a closer look at the details of what's going on in the [deep dive tutorial](./InsideQuantumClassifiers.ipynb).
# +
# Validation parameters
tolerance = 0.0005
nMeasurements = 10_000
# Classify validation data set using training results
classified_labels = QuantumClassification.ClassifyLinearlySeparableModel.simulate(
samples = validation_data['Features'],
parameters = parameters, bias = bias,
tolerance = tolerance, nMeasurements = nMeasurements
)
# Calculate miss rate
mask = np.not_equal(validation_data['Labels'], classified_labels)
miss_count = np.array(classified_labels)[np.where(mask)].size
miss_rate = miss_count / len(classified_labels)
print(f"Miss rate: {miss_rate:0.2%}")
# -
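The miss rate computed above is just the fraction of mismatched labels, i.e. 1 − accuracy. With NumPy it reduces to a single comparison (the label arrays below are made-up stand-ins for the validation labels):

```python
import numpy as np

actual = np.array([0, 1, 0, 1, 1, 0])
predicted = np.array([0, 1, 1, 1, 0, 0])

# Mean of the boolean mismatch mask = fraction of misclassified samples
miss_rate = np.mean(actual != predicted)
```

Two of the six labels disagree, so the miss rate is 1/3.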
plot_data(validation_data['Features'], validation_data['Labels'], classified_labels, extra_lines)
# ## Under the Hood
#
# So far everything we've seen looked perfectly normal, there was no noticeable difference between using a quantum classification library and a traditional machine learning library.
# Let's take the same data and see what is going on under the hood: what does a quantum classifier model look like, what are the parameters it uses and how it can be trained.
#
# **Go on to the [deep dive tutorial](./InsideQuantumClassifiers.ipynb)**.
#
# ## What's Next?
#
# This tutorial covered classifying artificial data, taking advantage of its simple structure. Classifying real data will require more complex models - same as in traditional machine learning.
#
# * Check out [introduction to quantum machine learning](https://docs.microsoft.com/azure/quantum/user-guide/libraries/machine-learning/) at Microsoft Quantum Development Kit documentation, which features a more interesting example - classifying half-moons dataset.
# * [Quantum machine learning samples](https://github.com/microsoft/Quantum/tree/main/samples/machine-learning) offer examples of classifying several more datasets.
| tutorials/QuantumClassification/ExploringQuantumClassificationLibrary.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %cd ..
import os
try:
os.mkdir('./data/raw')
except:
pass
try:
os.mkdir('./data/raw/USPTO_original')
except:
pass
# download the original USPTO datasets
# !wget https://ndownloader.figshare.com/articles/5104873/versions/1
# move it to data/raw
# !mv 1 data/raw
# unzip the datasets
# !unzip data/raw/1 -d data/raw
# remove the original file
# !rm data/raw/1
# !pip install pyunpack
# !pip install patool
from pyunpack import Archive
Archive('data/raw/1976_Sep2016_USPTOgrants_smiles.7z').extractall("data/raw/USPTO_original/")
Archive('data/raw/2001_Sep2016_USPTOapplications_smiles.7z').extractall("data/raw/USPTO_original/")
Archive('data/raw/1976_Sep2016_USPTOgrants_cml.7z').extractall("data/raw/USPTO_original/")
import pandas as pd
df3 = pd.read_csv('data/raw/USPTO_original/1976_Sep2016_USPTOgrants_smiles.rsmi',sep='\t')
df = pd.read_csv('data/raw/USPTO_original/1976_Sep2016_USPTOgrants_smiles.rsmi',sep='\t')
df2 = pd.read_csv('data/raw/USPTO_original/2001_Sep2016_USPTOapplications_smiles.rsmi',sep='\t')
df = pd.concat([df,df2])
df.head()
df_50k = pd.read_csv('data/USPTO_50k_MHN_prepro.csv.gz')
df_50k['id']
df_cd = pd.merge(df_50k, df, how='inner', left_on = 'id', right_on='PatentNumber')
df_cd.sample(5)
# map each PatentNumber to its (maximum) Year
pn2year = df.groupby('PatentNumber')['Year'].max().to_dict()
(df.groupby('PatentNumber')['Year'].var()>0).sum() # 0 means all rows for a patent agree on the year
rem = 3
pn2year_fuzzy = {k[2:-rem]:v for k,v in pn2year.items()}
df_50k['Year'] = df_50k['id'].apply(lambda k: pn2year.get(k, pn2year_fuzzy.get(str(k[:-rem]),None)))
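The `.apply` above performs a two-level lookup: try the exact patent id first, then fall back to a fuzzy key with the last `rem` characters dropped. A standalone sketch of that fallback pattern (the ids, keys, and years here are invented and do not reproduce the real USPTO key construction exactly):

```python
def lookup_year(pid, exact, fuzzy, rem=3):
    """Exact match first, then a fuzzy match on the id with the last `rem` chars dropped."""
    return exact.get(pid, fuzzy.get(pid[:-rem], None))

exact = {'US4200000': 1980}
fuzzy = {'US42000': 1976}  # hypothetical truncated keys

y1 = lookup_year('US4200000', exact, fuzzy)   # exact hit
y2 = lookup_year('US42000A12', exact, fuzzy)  # fuzzy hit on the truncated id
y3 = lookup_year('US9999999', exact, fuzzy)   # no match at either level
```

The fallback recovers years for ids whose suffix (e.g. a kind code) differs between datasets.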
import numpy as np
df_50k['Year'].isna().mean()
df_50k['Year'].isna().sum()
# +
# couldn't find the year for 1.2% or 19 samples
# -
df_50k[df_50k['Year'].isna()]['id'].apply(lambda k: k[2:-2][-7:])
pn2year_fuzzy2 = {k[2:-2][-7:]:v for k,v in pn2year.items()}
df_50k[df_50k['Year'].isna()]['id']
df.groupby('Year')['PatentNumber'].apply(lambda k: set([i[:5] for i in k]))
df_50k[df_50k['Year'].isna()]['id'].apply(lambda k: pn2year_fuzzy2.get(k[2:-2][-7:], None))
df_50k['year'] = df_50k['Year'].apply(lambda k: k if pd.isna(k) else int(k))
df_50k = df_50k.sort_values('Year')
train_frac, val_frac, test_frac = .8, .1, .1
N = len(df_50k)
train_end = int((1.0 - val_frac - test_frac) * N)
val_end = int((1.0 - test_frac) * N)
df_50k.loc[df_50k.index[:train_end], 'time_split'] = 'train'
df_50k.loc[df_50k.index[train_end:val_end], 'time_split'] = 'valid'
df_50k.loc[df_50k.index[val_end:], 'time_split'] = 'test'
import matplotlib.pyplot as plt
plt.plot(df_50k.Year.values)
df_50k.groupby('year').count()['id'].cumsum()/len(df_50k)
df_50k.loc[df_50k['year']<=2012, 'time_split_years'] = 'train'
df_50k.loc[df_50k['year']==2013, 'time_split_years'] = 'valid'
df_50k.loc[df_50k['year']>2013, 'time_split_years'] = 'test'
df_50k.loc[df_50k['year'].isna(), 'time_split_years'] = 'nan'
df_50k.groupby('time_split_years').count()['id']/len(df_50k)
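The threshold assignments above can also be expressed as a single labeling function, which makes the year boundaries explicit (thresholds copied from above; the sample years are invented):

```python
import pandas as pd

def year_split(year, train_until=2012, valid_year=2013):
    """Label a sample train/valid/test by its year, or 'nan' when the year is unknown."""
    if pd.isna(year):
        return 'nan'
    if year <= train_until:
        return 'train'
    if year == valid_year:
        return 'valid'
    return 'test'

df = pd.DataFrame({'year': [2010.0, 2012.0, 2013.0, 2015.0, None]})
df['time_split_years'] = df['year'].apply(year_split)
```

Everything up to 2012 trains, 2013 validates, later years test, and unknown years are set aside.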
# +
# gater also
# -
# all the relevant data is now in here
df_rel = df_50k[['id','class','prod_smiles','reactants_can','split', 'reaction_smarts', 'label', 'time_split','year','time_split_years']]
df_rel = df_rel.sort_index()
df_rel.to_csv('./data/USPTO_50k_MHN_prepro_recre_time.csv.gz')
df_rel
| notebooks/04_prepro_time_split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tensorflow_gpu
# language: python
# name: tensorflow_gpu
# ---
# # Import Necessary Library
# +
import time
from haversine_script import *
import numpy as np
import tensorflow as tf
import random
import pandas as p
import math
import matplotlib.pyplot as plt
import os
# +
from tensorflow.keras import backend as K
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout,Activation,BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from tensorflow.keras.callbacks import Callback, TensorBoard, ModelCheckpoint, EarlyStopping
from tensorflow.keras import regularizers
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn.decomposition import PCA
# -
config = tf.compat.v1.ConfigProto( device_count = {'GPU': 1 } )
sess = tf.compat.v1.Session(config=config)
tf.compat.v1.keras.backend.set_session(sess)
tf.debugging.set_log_device_placement(True)
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
# # Dataset Preprocessing Functions
def get_exponential_distance(x,minimum,a=60):
positive_x= x-minimum
numerator = np.exp(positive_x.div(a))
denominator = np.exp(-minimum/a)
exponential_x = numerator/denominator
exponential_x = exponential_x * 1000 #facilitating calculations
final_x = exponential_x
return final_x
def get_powed_distance(x,minimum,b=1.1):
positive_x= x-minimum
numerator = positive_x.pow(b)
denominator = (-minimum)**(b)
powed_x = numerator/denominator
final_x = powed_x
return final_x
# # Python Random Seeding for experiment reproducibility
os.environ['PYTHONHASHSEED'] = "42"
np.random.seed(42)
tf.random.set_seed(42)
random.seed(42)
trial_name="MLP_withPCA=8"
components=8 # keep the top 8 principal components
# +
#mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0"])
# -
# # Loading Dataset
# reading the data
file = p.read_csv('lorawan_antwerp_2019_dataset.csv')
columns = file.columns
# x = file[columns[0:68]]
# y = file[columns[71:]]
x = file[columns[0:72]]
x = x.join(file[columns[73]])
y = file[columns[72:]]
# Dataset Preprocessing
x = x.replace(-200,200)
minimum = x.min().min() - 1
x = x.replace(200,minimum)
print('minimum')
print(minimum)
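# The replace dance above swaps the -200 "not received" sentinel for a value one dB below the weakest genuine RSSI. A minimal sketch on a hypothetical two-gateway table:

```python
import pandas as pd

# Hypothetical RSSI table: -200 marks "gateway did not receive the packet"
x_demo = pd.DataFrame({'gw1': [-110.0, -200.0], 'gw2': [-95.0, -120.0]})

x_demo = x_demo.replace(-200, 200)          # park the sentinel above every real RSSI
minimum_demo = x_demo.min().min() - 1        # weakest genuine RSSI minus 1 dB
x_demo = x_demo.replace(200, minimum_demo)   # sentinel becomes "just below weakest"
print(minimum_demo)  # -121.0
```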
# RSSI Data representation using Powed Function
#
final_x = get_powed_distance(x,minimum)
random_state = 42
x_train, x_test_val, y_train, y_test_val = train_test_split(final_x.values, y.values, test_size=0.3, random_state=random_state)
x_val, x_test, y_val, y_test = train_test_split(x_test_val, y_test_val, test_size=0.5, random_state=random_state)
print(x_train.shape)
print(x_val.shape)
print(x_test.shape)
# + active=""
# Dataset Normalization [0,1]
# +
scaler = preprocessing.MinMaxScaler().fit(x_train)
x_train = scaler.transform(x_train)
x_val = scaler.transform(x_val)
x_test = scaler.transform(x_test)
scaler_y = preprocessing.MinMaxScaler().fit(y_train)
y_train = scaler_y.transform(y_train)
y_val = scaler_y.transform(y_val)
y_test = scaler_y.transform(y_test)
# -
# PCA Application
# +
pca = PCA(n_components =components)
x_train = pca.fit_transform(x_train)
x_val = pca.transform(x_val)
x_test = pca.transform(x_test)
explained_variance = pca.explained_variance_ratio_
# -
print(x_train.shape)
print(x_val.shape)
print(x_test.shape)
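# `explained_variance` is computed above but never inspected. A sketch of checking how much variance the retained components keep, on a synthetic stand-in matrix (not the real fingerprint data):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the scaled fingerprint matrix
rng = np.random.RandomState(42)
demo = rng.rand(200, 20)

pca_demo = PCA(n_components=8).fit(demo)
cumulative = np.cumsum(pca_demo.explained_variance_ratio_)
print(cumulative[-1])  # fraction of variance retained by 8 components
```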
n_of_features = x_train.shape[1]
# # Network HyperParameters
dropout = 0.15
l2 = 0.00
lr = 0.0005
epochs = 10000
batch_size= 512
patience = 300
# # Define the MLP Network
# +
#with mirrored_strategy.scope():
model = Sequential()
model.add(Dense(units=1024, input_dim=n_of_features, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(dropout, seed=random_state))
model.add(Dense(units=1024, input_dim=n_of_features, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(dropout, seed=random_state))
model.add(Dense(units=1024, input_dim=n_of_features, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(dropout, seed=random_state))
model.add(Dense(units=256, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(dropout, seed=random_state))
model.add(Dense(units=128, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(dropout, seed=random_state))
model.add(Dense(units=128, kernel_regularizer=regularizers.l2(l2)))
model.add(BatchNormalization())
model.add(Activation('relu'))
# model.add(Dropout(dropout))
model.add(Dense(units=2))
model.compile(loss='mean_absolute_error',optimizer=Adam(lr=lr))
cb =[EarlyStopping(monitor='val_loss', patience=patience, verbose =1, restore_best_weights=True)]
history = model.fit(x_train, y_train,validation_data=(x_val, y_val),epochs=epochs, batch_size=batch_size, verbose=1, callbacks= cb)
# + active=""
# Training Configuration
# + active=""
# Plot Training Loss Function
# -
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig('training_curves.png')
plt.show()
# # Testing
# + active=""
# Predict Position
# -
y_predict = model.predict(x_test, batch_size=batch_size)
y_predict_in_val = model.predict(x_val, batch_size=batch_size)
y_predict_in_train = model.predict(x_train, batch_size=batch_size)
# Revert the Representation from normalize to lat-long coordinates
y_predict = scaler_y.inverse_transform(y_predict)
y_predict_in_train = scaler_y.inverse_transform(y_predict_in_train)
y_predict_in_val = scaler_y.inverse_transform(y_predict_in_val)
y_train = scaler_y.inverse_transform(y_train)
y_val = scaler_y.inverse_transform(y_val)
y_test = scaler_y.inverse_transform(y_test)
# Calculate Haversine Error
print("Train set mean error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_train, y_train,'mean')))
print("Train set median error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_train, y_train,'median')))
print("Train set 75th perc. error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_train, y_train,'percentile',75)))
print("Val set mean error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_val, y_val,'mean')))
print("Val set median error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_val, y_val,'median')))
print("Val set 75th perc. error: {:.2f}".format(my_custom_haversine_error_stats(y_predict_in_val, y_val,'percentile',75)))
print("Test set mean error: {:.2f}".format(my_custom_haversine_error_stats(y_predict, y_test,'mean')))
print("Test set median error: {:.2f}".format(my_custom_haversine_error_stats(y_predict, y_test,'median')))
print("Test set 75th perc. error: {:.2f}".format(my_custom_haversine_error_stats(y_predict, y_test,'percentile',75)))
test_error_list = calculate_pairwise_error_list(y_predict,y_test)
p.DataFrame(test_error_list).to_csv(trial_name+".csv")
print("Experiment completed!!!")
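# The error statistics come from the external `haversine_script` module, which is not shown here. For reference, a minimal great-circle (haversine) distance in kilometres, assuming a 6371 km Earth radius, can be sketched as:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * radius * np.arcsin(np.sqrt(a))

# One degree of latitude is roughly 111 km
print(haversine_km(51.0, 4.0, 52.0, 4.0))
```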
# +
# keras library import for Saving and loading model and weights
from tensorflow.keras.models import model_from_json
from tensorflow.keras.models import load_model
# serialize model to JSON
# the keras model which is trained is defined as 'model' in this example
model_json = model.to_json()
with open(trial_name+".json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights(trial_name+".h5")
| MLP_withPCA=8.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
for fizzbuzz in range(1, 51):
    if fizzbuzz % 3 == 0 and fizzbuzz % 5 == 0:
        print("fizzbuzz")
    elif fizzbuzz % 3 == 0:
        print("fizz")
    elif fizzbuzz % 5 == 0:
        print("buzz")
    else:
        print(fizzbuzz)
| Baumkataster_Visualisierung/Fizz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from os import path
import numpy as np
import time
name_title_processed_path = '../../data/imbd_data/name_title_processed.csv'
nodes_df_path = '../../data/imbd_data/nodes.csv'
edges_df_path = '../../data/imbd_data/edges.csv'
def write_to_csv(df,filepath):
    '''
    input: df - a pandas DataFrame
           filepath - an output filepath as a string
    writes df to the csv file at filepath,
    appending without headers if the file already exists
    returns: nothing
    '''
# if no csv exists
if not path.exists(filepath):
df.to_csv(filepath,index=False)
else:
df.to_csv(filepath, mode='a', header=False,index=False)
def load_dataset(filepath):
df = pd.read_csv(filepath)
return df
df = load_dataset(name_title_processed_path)
print(df.head(1))
print(len(df))
def create_nodes(df, filepath):
'''
takes a pandas dataframe
and extracts names
uses filepath to write output
'''
df = df[['nconst']].drop_duplicates()
write_to_csv(df,filepath)
# def create_edges(df, filepath):
#     '''
#     takes a pandas dataframe
#     and for each year for each film
#     uses filepath to write output
#     '''
#     films = df.tconst.unique()
#     for film in films[0:1]:
#         start = time.time()
#         temp = df[df.tconst == film].reset_index()
#         t_year = temp.film_year.unique()
#         t_names = temp['nconst']
#         pairs = np.stack([i.ravel() for i in np.meshgrid(t_names, t_names)]).T
#         end = time.time()
#         print(end - start)
#         #temp = pd.DataFrame(pairs, columns=['source', 'target'])
#         #temp['film_year'] = int(t_year)
#         #print(temp.drop_duplicates())
#         #write_to_csv(temp,filepath)
def create_edges_csv(df,filepath):
'''
Inputs: df - pandas dataframe
filepath - python string
this will take the nominees per film title
and create link between every nominee in that movie
saves output as dataframe with columns source target
Returns: nothing
'''
films = df['title'].unique()
for film in films:
nominees = df[df['title'] == film]['name']
nominees = nominees.reset_index(drop=True)
# only include films with more than one nominees in it
if len(nominees) != 1:
for i in range(0, len(nominees)):
for j in range(i+1, len(nominees)):
pairs = [[nominees[i], nominees[j]]]
df1 = pd.DataFrame(pairs, columns = ['source', 'target'])
write_to_csv(df1, filepath)
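# The nested i/j loop above enumerates every unordered pair of nominees; `itertools.combinations` expresses the same idea directly (sketch with hypothetical names):

```python
from itertools import combinations

nominees = ['alice', 'bob', 'carol']  # hypothetical nominees for one film
pairs = list(combinations(nominees, 2))
print(pairs)  # [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```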
# CREATE EDGES
start = time.time()
create_edges_csv(df, edges_df_path)
# CREATE NODES
create_nodes(df[0:100], nodes_df_path)
# +
array_1 = np.array([1,2,3,4])
array_2 = np.array([1,2,3,4])
start = time.time()
mesh = np.array(np.meshgrid(array_1, array_2))
combinations = mesh.T.reshape(-1, 2)
print(combinations)
end = time.time()
print(end - start)
# -
#2.404025077819824
films = df.tconst.unique()
print(len(films))
temp = df[df.tconst == 'tt0025164']
print(temp[['nconst','tconst','film_year']].head(10))
| code/clean_imbd_data_scripts/create_node_edges.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using multiple microphones to infer on-axis source levels
# In the previous notebook we saw how the use of beam-shape patterns helped us infer on-axis source levels even in calls which were right censored. In this notebook we will investigate the use of multiple microphones and beam-shape modelling in the role of improving on-axis source level estimation.
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(100)
import pymc3 as pm
import pandas as pd
import sys
sys.path.append('../../../research-repos/bat_beamshapes/')
import beamshape_predictions as b_p
db = lambda X: 20*np.log10(X)
# ### The situation: 2 mics and a bat
# Imagine a case where you have two microphones on a line, and bat calls are recorded. For the sake of simplicity we assume the bat is relatively far away, that you somehow magically know its position, and that you can therefore calculate the apparent source level at 1 metre (a standard distance in echolocation). In the cartoon drawing below you can see the bat's head, its beam-shape, and two mics in a line.
#
# 
# As can be expected, this might help us figure out the $\theta$ and on-axis level of a call. In our imaginary case above, for instance, the upper mic is off-axis while the lower mic is mostly on-axis. This specific difference between the apparent source levels calculated at the two mics can now be recreated by a much smaller set of parameter combinations than in a single-mic scenario. Let's proceed to see how much this helps us.
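# A toy illustration of the idea: with an assumed on-axis level and a hypothetical stand-in beam function (a simple cosine lobe, not the `vibrating_cap_of_sphere` model used below), two mics at different angles read different apparent levels, and that difference constrains the (angle, on-axis level) pair.

```python
import numpy as np

db = lambda X: 20 * np.log10(X)

def toy_beam(theta, sharpness=4.0):
    """Hypothetical stand-in beam: a cosine lobe raised to a power."""
    return np.maximum(np.cos(theta), 1e-6) ** sharpness

on_axis_db = 120.0                 # assumed on-axis source level, dB
theta_mics = np.array([0.1, 0.6])  # off-axis angles of the two mics, radians

apparent_db = on_axis_db + db(toy_beam(theta_mics))
print(apparent_db)  # the more off-axis mic reads several dB lower
```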
def vib_spherecap(thetas,kvalue,Rvalue,theta0):
'''The array version of vibrating_cap_of_sphere for multiple theta values
'''
kwargs = {'R':Rvalue,'theta_0':theta0}
outputs = [ b_p.vibrating_cap_of_sphere(each_theta,kvalue,**kwargs) for each_theta in thetas]
return np.array(outputs)
theta_values = np.linspace(0,np.pi,200)
v_sound = 330 # m/s
freq = 90*10**3 # Hz
k_value = 2*np.pi/(v_sound/freq)
ball_rad = 0.2
ball_diam = 0.005
cap_theta = ball_diam/ball_rad
offaxis = db(vib_spherecap(theta_values,k_value,ball_rad,cap_theta))
plt.figure()
a0 = plt.subplot(111,projection='polar')
a0.set_theta_zero_location("N")
plt.plot(theta_values, offaxis)
help(b_p.vibrating_cap_of_sphere)
| paramsims/source-level-estim2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf1]
# language: python
# name: conda-env-tf1-py
# ---
# +
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Lambda
from keras.layers import Conv2D, MaxPooling2D, Activation
from keras import backend as K
# input image dimensions
img_rows, img_cols = 28, 28
# +
from PIL import Image
import glob
import random
import numpy as np
from scipy.misc import imresize
def rgb2gray(rgb):
r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
gray = 0.2989 * r + 0.5870 * g + 0.1140 * b
return gray
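# A quick sanity check of the ITU-R 601 luma weights used above (the helper is restated so the snippet is self-contained): a white pixel maps to roughly 1 and a pure-green pixel to 0.587.

```python
import numpy as np

def rgb2gray(rgb):
    # same ITU-R 601 luma weights as the helper above
    r, g, b = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

white = np.ones((1, 1, 3))
green = np.zeros((1, 1, 3)); green[:, :, 1] = 1.0
print(rgb2gray(white)[0, 0])  # ~0.9999: the weights sum to almost 1
print(rgb2gray(green)[0, 0])  # 0.587: green dominates perceived luminance
```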
xs = []
ys = []
for filename in glob.glob('gauges/*.png'):  # assuming png
im=Image.open(filename)
# pull apart filename
ys.append(float(filename.split('_')[1].split('.')[0])/360.0)
# resize image
#im.thumbnail((img_rows,img_cols), Image.ANTIALIAS)
im=np.array(im)
im=rgb2gray(im)
im=imresize(im,(img_rows,img_cols))
xs.append(im)
c = list(zip(xs, ys))
random.shuffle(c)
xs, ys = zip(*c)
xs=np.asarray(xs)
ys=np.asarray(ys)
# -
import matplotlib.pyplot as plt
# %matplotlib inline
plt.imshow(xs[0],cmap='gray')
plt.show()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(xs, ys, test_size=0.1, random_state=42)
# +
print(x_train.shape)
#if K.image_data_format() == 'channels_first':
#x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
#x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
#input_shape = (1, img_rows, img_cols)
#else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# +
batch_size = 128
epochs = 34
model = Sequential()
model.add(Lambda(lambda x: x/127.5 - 1., input_shape=input_shape, output_shape=input_shape))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss = 'mse', optimizer = 'Adam')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
# -
for index in range(20):
angle = float(model.predict(x_test[index][None, :, :, :], batch_size=1))
print('====')
print(angle*360)
print(y_test[index]*360)
| gaugenist.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="vzbGdfFYWdiO" outputId="406789a8-6d06-4019-81d7-75f856e05908"
# for colab import
# #!pip install -q xlrd
# #!git clone https://github.com/onimaru/CursoBio.git
# + colab={"base_uri": "https://localhost:8080/", "height": 69} colab_type="code" id="yYtdadB0YG9L" outputId="4166a0f5-63ce-4b37-e597-c035fce67bb8"
# #!ls CursoBio/datasets/newHIV-1_data/
# + colab={} colab_type="code" id="spsJhOq0Wwt3"
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# + colab={} colab_type="code" id="BaVG1f3oW0Jm"
data_746_raw = pd.read_csv(r".\datasets\newHIV-1_data\746Data.txt", header=None)
data_1625_raw = pd.read_csv(r".\datasets\newHIV-1_data\1625Data.txt", header=None)
data_impens_raw = pd.read_csv(r".\datasets\newHIV-1_data\impensData.txt", header=None)
data_schilling_raw = pd.read_csv(r".\datasets\newHIV-1_data\schillingData.txt", header=None)
# + colab={} colab_type="code" id="XoWKT2nUW0VG"
data_7 = data_746_raw.copy()
# + colab={} colab_type="code" id="4aMbZ3iRYWYC"
data_1 = data_1625_raw.copy()
# + colab={} colab_type="code" id="zDuD1zuLYW1A"
data_i = data_impens_raw.copy()
# + colab={} colab_type="code" id="QsYRRTWWYW-T"
data_s = data_schilling_raw.copy()
# + colab={"base_uri": "https://localhost:8080/", "height": 202} colab_type="code" id="1NwKmzv8W0X1" outputId="a5a6b20a-17b5-4460-8824-2e61b60d6502"
data_s.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 219} colab_type="code" id="uEDLNInbW0al" outputId="4c1f193f-fc06-49e1-cd39-89425467120c"
data_s.columns = ['Peptide','Cleavage']
print(data_s.shape)
data_s.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 156} colab_type="code" id="J-HRvdVtW0d3" outputId="f350163e-90a1-4091-b214-53a1995d7a4f"
data_s['Prot_sum'] = data_s['Peptide'].apply(len)
print('Maximum length:', max(data_s['Prot_sum']))
print('Minimum length:', min(data_s['Prot_sum']))
print(data_s.head())
# + colab={"base_uri": "https://localhost:8080/", "height": 202} colab_type="code" id="7XV7KXH9W0gm" outputId="eebd7614-e737-4bac-d2b0-06889bf77ccb"
# Split the string in Peptide into one column per residue
n = max(data_s['Peptide'].apply(len))
for i in range(n):
data_s['Pep0'+str(i)] = data_s['Peptide'].str[i]
data_s.head()
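# The position-by-position loop above can also be written in one shot; a sketch on hypothetical 4-mers:

```python
import pandas as pd

demo = pd.DataFrame({'Peptide': ['SLNQ', 'AAKF']})  # hypothetical 4-mers
# one column per residue position, built in a single DataFrame call
expanded = pd.DataFrame(demo['Peptide'].apply(list).tolist())
expanded.columns = ['Pep0' + str(i) for i in range(expanded.shape[1])]
print(expanded)
```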
# + colab={"base_uri": "https://localhost:8080/", "height": 219} colab_type="code" id="bWOcKo4xW0iw" outputId="df438925-9e11-4a1d-8a45-97d3a7e3a937"
# target feature
y_s = pd.DataFrame(data_s['Cleavage'])
y_s = y_s.replace(-1,0)
print(y_s.shape)
y_s.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 219} colab_type="code" id="09xwK8xvW0mC" outputId="e0b18d8e-17f9-4c8c-ade9-b86835b578de"
# X features
X_s = data_s.drop(['Peptide','Cleavage','Prot_sum'],axis=1)
print(X_s.shape)
X_s.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 269} colab_type="code" id="P8V0rIpkW0ox" outputId="d1e1f978-9b36-48c6-f5f1-63175c2fbdf6"
X_s_enc = pd.get_dummies(X_s)
print(X_s_enc.shape)
X_s_enc.head()
# + colab={} colab_type="code" id="HsrI9H3lW0q_"
features = X_s_enc.columns
# + colab={"base_uri": "https://localhost:8080/", "height": 286} colab_type="code" id="WiRjo9u4qfQT" outputId="ec6e55cc-3786-4ea4-a919-86542d37e106"
# preparing other datasets
data_7.columns = ['Peptide','Cleavage']
n = max(data_7['Peptide'].apply(len))
for i in range(n):
data_7['Pep0'+str(i)] = data_7['Peptide'].str[i]
y_7 = pd.DataFrame(data_7['Cleavage'])
y_7 = y_7.replace(-1,0)
X_7 = data_7.drop(['Peptide','Cleavage'],axis=1)
X_7_enc = pd.get_dummies(X_7)
print(X_7_enc.shape)
print(y_7.shape)
X_7_enc.head()
# + colab={} colab_type="code" id="Rj071_HgsFxF"
X = X_s_enc.values
y = y_s.values
# + colab={} colab_type="code" id="ONHVGzZBrzkO"
X_test = X_7_enc.values
y_test = y_7.values
# + colab={} colab_type="code" id="t2rtg4wlr6EM"
# save files
X_s_enc.to_csv(r'.\datasets\newHIV-1_data\X_s.csv')
y_s.to_csv(r'.\datasets\newHIV-1_data\y_s.csv')
X_7_enc.to_csv(r'.\datasets\newHIV-1_data\X_7.csv')
y_7.to_csv(r'.\datasets\newHIV-1_data\y_7.csv')
# + [markdown] colab_type="text" id="Y6DlNKnsa_ME"
# ***Training phase***
# + colab={} colab_type="code" id="Qqv2tPE8W0we"
# machine learning libraries
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_predict, GridSearchCV, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import KFold
# metrics
from sklearn.metrics import accuracy_score, log_loss, make_scorer, confusion_matrix, f1_score, precision_score,\
recall_score, precision_recall_curve, roc_curve, roc_auc_score
# additional libraries
import time
import warnings
warnings.filterwarnings('ignore')
# + colab={} colab_type="code" id="3oGiqt5MkkC0"
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# + colab={} colab_type="code" id="yOOa0XFPkpBe"
def classifier_scores(true_labels,prediction_labels):
f1 = f1_score(true_labels,prediction_labels)
pre = precision_score(true_labels,prediction_labels)
rec = recall_score(true_labels,prediction_labels)
acc = accuracy_score(true_labels,prediction_labels)
auc = roc_auc_score(true_labels,prediction_labels)
report = pd.DataFrame({'AUC':np.around([auc],3),'Precision':np.around([pre],3), 'Recall':np.around([rec],3),'F1':np.around([f1],3),'Accuracy':np.around([acc],3)})
print(report)
# + colab={} colab_type="code" id="Ja0-CZdMciyf"
# list of classifiers
classifiers = [
KNeighborsClassifier(5),
SVC(probability=True),
DecisionTreeClassifier(),
RandomForestClassifier(),
AdaBoostClassifier(),
GradientBoostingClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis(),
SGDClassifier(),
LogisticRegression()]
# + colab={} colab_type="code" id="u6r9ibV0W0yO"
# graph to compare different regressors with the same chosen metric
# classifiers = list of classifiers we want to use
# cv = number of desired folds
def scoringGraph(classifiers,cv,X,y):
init = time.time()
log_cols = ["Classifier", "F1_score"]
log = pd.DataFrame(columns=log_cols)
splits = 5
sss = StratifiedShuffleSplit(n_splits=splits, test_size=0.1, random_state=0)
acc_dict = {}
for train_index, test_index in sss.split(X, y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
for clf in classifiers:
name = clf.__class__.__name__
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
acc = f1_score(y_test, y_pred)
if name in acc_dict:
acc_dict[name] += acc
else:
acc_dict[name] = acc
for clf in acc_dict:
acc_dict[clf] = acc_dict[clf] / splits
log_entry = pd.DataFrame([[clf, acc_dict[clf]]], columns=log_cols)
log = log.append(log_entry)
plt.xlabel('F1_score')
plt.title('Classifier F1_score')
sns.set_color_codes("muted")
sns.barplot(x='F1_score', y='Classifier', data=log, color="b");
    print('Total time: {:.2f}'.format(time.time()-init))
# + colab={"base_uri": "https://localhost:8080/", "height": 312} colab_type="code" id="4Ni6LuEXd-YN" outputId="45c38440-6900-4f7d-8060-150653889720"
scoringGraph(classifiers,4,X,y)
# + [markdown] colab_type="text" id="sgS__exSfLZa"
# ***Improving the models***
# + colab={} colab_type="code" id="xpKiCtTNfKGT"
clf = DecisionTreeClassifier()
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="N9B0zZmofKJC" outputId="adb2bae6-e46e-47a3-d6c9-d131f1fd7c93"
scorer = make_scorer(roc_auc_score)
score = cross_val_score(clf,X,y,cv=5,scoring=scorer)
score
# + colab={"base_uri": "https://localhost:8080/", "height": 52} colab_type="code" id="mtibNKa6fKN_" outputId="27d9c3a0-7c9d-4074-af1e-1b6430391d2b"
y_pred = cross_val_predict(clf,X,y,cv=5)
classifier_scores(y,y_pred)
# + colab={"base_uri": "https://localhost:8080/", "height": 329} colab_type="code" id="EMc-1RWnfKWA" outputId="ffc01507-9eb1-40c4-d627-19c941de7a9a"
cm = confusion_matrix(y, y_pred)
plot_confusion_matrix(cm, classes=['Negative cleavage','Positive cleavage'], normalize=False, title='Not-Normalized confusion matrix')
# + colab={} colab_type="code" id="TvDXdCdWjBYg"
y_scores = cross_val_predict(clf,X, y,cv=5)
# + colab={} colab_type="code" id="x6sI6CrhjBbx"
precisions, recalls, thresholds = precision_recall_curve(y,y_pred)
# + colab={"base_uri": "https://localhost:8080/", "height": 285} colab_type="code" id="6V-6pBD_jBeh" outputId="11a546c8-6539-4c8e-f261-46cf86c77b6a"
#Plot precision and recall as functions of the threshold value
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
# + colab={"base_uri": "https://localhost:8080/", "height": 302} colab_type="code" id="6QYAc6dPjBg3" outputId="a17178c4-2701-4383-8d96-ce1f6be4e1dc"
plt.plot(precisions,recalls)
plt.xlabel("Precision", fontsize=16)
plt.ylabel("Recall", fontsize=16)
# + colab={"base_uri": "https://localhost:8080/", "height": 415} colab_type="code" id="YpAfKfKGjBkJ" outputId="d0828eb4-bd6c-4413-8dd1-1693139b949f"
fpr, tpr, thresholds = roc_curve(y,y_scores)
print('AUC: {:.2f}'.format(roc_auc_score(y,y_scores)))
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
# + [markdown] colab_type="text" id="9SljK_Wljaw9"
# ## GridSearchCV
# + [markdown] colab_type="text" id="k_Bh7cduslNn"
# **DecisionTreeClassifier**
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="A7Gn4LOfjBoX" outputId="7e0dbae5-9182-4193-d1a4-c95ca10419d9"
classifier = DecisionTreeClassifier()
# choose some parameters for the search
parameters = {'criterion':['gini','entropy'],'min_samples_split':[3,7,10],'max_features':['auto','sqrt','log2',None]}
# choose a metric
scorer = make_scorer(roc_auc_score)
# run the grid search on the training set
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
# set the classifier to the best estimator found
classifier = grid_obj.best_estimator_
best_dt = grid_obj.best_estimator_
# and then train the algorithm with this combination
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
# additional info
feat = classifier.feature_importances_
print('Max feature importance: {:.2f}'.format(max(feat)))
# + [markdown] colab_type="text" id="stfZCpHQswxu"
# ***RandomForestClassifier***
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" id="UeyHDWnejBt6" outputId="77da1bcc-5772-43ba-d7b5-8a380d34a070"
classifier = RandomForestClassifier()
# choose some parameters for the search
parameters = {'n_estimators':[10,15,20],'criterion':['gini','entropy'],'min_samples_split':[3,7,10],'max_features':['auto','sqrt','log2',None]}
scorer = make_scorer(roc_auc_score)
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
classifier = grid_obj.best_estimator_
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
# additional info
feat = classifier.feature_importances_
print('Max feature importance: {:.2f}'.format(max(feat)))
# + [markdown] colab_type="text" id="i2ngaRKPIiV7"
# **SVC**
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="DZb53RmRIilO" outputId="0728d0d0-3ecc-4bc0-9c47-78f67455ca4e"
classifier = SVC()
# choose some parameters for the search
parameters = {'C':[0.001,0.01,1.,10.],'kernel':['linear','poly','rbf','sigmoid'],'degree':[1,2,3,4]}
scorer = make_scorer(roc_auc_score)
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
classifier = grid_obj.best_estimator_
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
# + [markdown] colab_type="text" id="qzbng2oAtIi4"
# **AdaBoostClassifier**
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="80xVvbxIjBmn" outputId="f19fd7da-88a0-4e71-e5c1-79217be2223f"
classifier = AdaBoostClassifier()
# choose some parameters for the search
parameters = {'n_estimators':[30,50,70,90],'algorithm':['SAMME', 'SAMME.R']}
scorer = make_scorer(roc_auc_score)
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
classifier = grid_obj.best_estimator_
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
# additional info
# + [markdown] colab_type="text" id="5VojV16AtLDG"
# **GradientBoostingClassifier**
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="8iqYypX8tF3z" outputId="6a203137-2566-4284-b992-2b426fb9fb57"
classifier = GradientBoostingClassifier()
# choose some parameters for the search
parameters = {'loss':['deviance', 'exponential'],'learning_rate':[0.01,0.1,1.],'min_samples_split':[3,7,10],'max_features':['auto','sqrt','log2',None]}
scorer = make_scorer(roc_auc_score)
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
classifier = grid_obj.best_estimator_
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
# additional info
# + [markdown] colab_type="text" id="hHiVPQq-uGnk"
# **SGDClassifier**
# + colab={"base_uri": "https://localhost:8080/", "height": 173} colab_type="code" id="-3FMgWQztF9R" outputId="5e6503cf-49a6-4c86-c01e-56edc600672c"
classifier = SGDClassifier()
# choose some parameters for the search
parameters = {'eta0':[0.0001],'loss':['hinge','log','modified_huber','squared_hinge','perceptron'],'penalty':[None,'l2','l1','elasticnet'],'max_iter':[5,10,15],'learning_rate':['constant','optimal','invscaling']}
scorer = make_scorer(roc_auc_score)
grid_obj = GridSearchCV(classifier, parameters, scoring=scorer)
grid_obj = grid_obj.fit(X, y)
classifier = grid_obj.best_estimator_
classifier.fit(X, y)
print(classifier.__class__.__name__)
print('Train set results:')
y_pred_train = classifier.predict(X)
classifier_scores(y,y_pred_train)
print('')
print('Test set results:')
y_pred_test = classifier.predict(X_test)
classifier_scores(y_test,y_pred_test)
print('')
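# The cells above all repeat the same tune-then-evaluate pattern with `GridSearchCV`. As a minimal illustration of what that search does under the hood, here is a pure-Python exhaustive sweep over a parameter grid; the `toy_score` function and its optimum are made up for the example, not a real model score.

```python
from itertools import product

# Exhaustively try every parameter combination and keep the best score
# (the essence of the GridSearchCV pattern used above).
def manual_grid_search(score_fn, param_grid):
    best_params, best_score = None, float("-inf")
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Hypothetical score surface: peaks at learning_rate=0.1, max_iter=15
toy_score = lambda learning_rate, max_iter: max_iter - abs(learning_rate - 0.1)

best, score = manual_grid_search(
    toy_score,
    {"learning_rate": [0.01, 0.1, 1.0], "max_iter": [5, 10, 15]},
)
print(best)  # {'learning_rate': 0.1, 'max_iter': 15}
```

The real `GridSearchCV` additionally cross-validates each candidate instead of scoring it once.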
| HIV_cleavage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os, sys
cur_path = os.path.abspath(os.path.dirname('__file__'))
basic_path = cur_path.replace('classify', 'basic')
sys.path.append(basic_path)
# -
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
# +
digits = datasets.load_digits()
X = digits.data
# Without copy(), y and digits.target would share the same reference, so modifying y would modify digits
y = digits.target.copy()
# Simulate class imbalance: treat only digit 9 as the positive class
y[digits.target == 9] = 1
y[digits.target != 9] = 0
# -
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=666)
# +
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
decision_scores = log_reg.decision_function(X_test)
# -
from metrics import FPR, TPR
fprs = []
tprs = []
thresholds = np.arange(np.min(decision_scores), np.max(decision_scores), 0.1)
for threshold in thresholds:
y_predict = np.array(decision_scores >= threshold, dtype=int)
fprs.append(FPR(y_test, y_predict))
tprs.append(TPR(y_test, y_predict))
plt.plot(fprs, tprs)
plt.show()
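# The `FPR` and `TPR` helpers used in the loop above come from a local `metrics` module; assuming the standard definitions of the false-positive and true-positive rates, they can be sketched directly with numpy:

```python
import numpy as np

# Assumed definitions behind the local `metrics` module.
def FPR(y_true, y_predict):
    fp = np.sum((y_predict == 1) & (y_true == 0))
    tn = np.sum((y_predict == 0) & (y_true == 0))
    return fp / (fp + tn) if (fp + tn) > 0 else 0.0

def TPR(y_true, y_predict):
    tp = np.sum((y_predict == 1) & (y_true == 1))
    fn = np.sum((y_predict == 0) & (y_true == 1))
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 1])
print(FPR(y_true, y_pred), TPR(y_true, y_pred))  # 0.5 1.0
```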
# ### ROC curve in scikit-learn
# +
from sklearn.metrics import roc_curve
fprs, tprs, thresholds = roc_curve(y_test, decision_scores)
# -
plt.plot(fprs, tprs)
plt.show()
# +
from sklearn.metrics import roc_auc_score
# The larger the area under the ROC curve, the better the classifier
# (smaller FPR together with larger TPR gives a larger area)
roc_auc_score(y_test, decision_scores)
# -
| ml/classify/roc-curve.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Plot model diagram
# ## Import modules
# +
import cv2
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import default_rng
import pandas as pd
from scipy.spatial.distance import squareform
from sklearn.manifold import TSNE
# %matplotlib inline
# -
# ## Load data
tips = pd.read_csv(
"../results/builds/natural/natural_sample_1_with_90_vpm_sliding/tip_attributes_with_weighted_distances.tsv",
sep="\t"
)
df = tips.query("timepoint == '1996-10-01' | timepoint == '1997-10-01'").loc[
:,
[
"strain",
"timepoint",
"raw_date",
"numdate",
"frequency",
"aa_sequence"
]
].copy()
df.head()
rng = default_rng()
df["random"] = rng.uniform(size=df.shape[0])
df.shape
timepoints = df["timepoint"].unique()
timepoints
plt.plot(
df["numdate"],
df["random"],
"o"
)
df = df[df["frequency"] > 0.001].copy()
df.head()
# ## Calculate earth mover's distance (EMD) between strains in adjacent timepoints
df_i = df.query(f"timepoint == '{timepoints[0]}'")
df_j = df.query(f"timepoint == '{timepoints[1]}'")
df_i.shape
df_j.shape
# +
emd_distances = np.zeros((df_i.shape[0], df_j.shape[0]))
for i, (index_i, row_i) in enumerate(df_i.iterrows()):
sequence_i = np.frombuffer(row_i["aa_sequence"].encode(), "S1")
for j, (index_j, row_j) in enumerate(df_j.iterrows()):
sequence_j = np.frombuffer(row_j["aa_sequence"].encode(), "S1")
distance = (sequence_i != sequence_j).sum()
emd_distances[i, j] = distance
# -
emd_distances = emd_distances.astype(np.float32)
emd_distances.shape
emd_distances
strains_i = df_i["strain"].values
strains_j = df_j["strain"].values
frequencies_i = df_i["frequency"].values.astype(np.float32)
frequencies_j = df_j["frequency"].values.astype(np.float32)
frequencies_i
frequencies_j
emd, _, flow = cv2.EMD(
frequencies_i,
frequencies_j,
cv2.DIST_USER,
cost=emd_distances
)
emd
flow = np.round(flow, 3)
flow.shape
nonzero_flow_pairs = np.nonzero(flow)
nonzero_flow_pairs
nonzero_flow_pairs[0].shape
nonzero_flow = flow[nonzero_flow_pairs]
nonzero_flow.shape
# +
flow_records = []
for i, (index_i, index_j) in enumerate(np.transpose(nonzero_flow_pairs)):
flow_records.append({
"strain": strains_i[index_i],
"other_strain": strains_j[index_j],
"flow": nonzero_flow[i]
})
# -
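# A quick way to read the flow matrix: it is a transport plan, so its row sums recover the source frequencies, its column sums recover the target frequencies, and (when total masses match) the EMD equals the total transport cost. A toy example with made-up matrices, not the `cv2.EMD` output above:

```python
import numpy as np

# Made-up cost and flow matrices for two source and two target strains.
cost = np.array([[0.0, 2.0],
                 [2.0, 0.0]])
flow = np.array([[0.3, 0.1],
                 [0.0, 0.6]])  # moves 0.1 unit of mass at cost 2

source = flow.sum(axis=1)   # mass leaving each source strain
target = flow.sum(axis=0)   # mass arriving at each target strain
emd = (flow * cost).sum()   # total transport cost
print(source, target, emd)  # [0.4 0.6] [0.3 0.7] 0.2
```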
flow_df = pd.DataFrame(flow_records)
flow_df.head()
# ## Calculate t-SNE position of strains in one dimension
df_records = df.to_dict("records")
# +
distances = []
for i in range(len(df_records)):
sequence_i = np.frombuffer(df_records[i]["aa_sequence"].encode(), "S1")
for j in range(i + 1, len(df_records)):
sequence_j = np.frombuffer(df_records[j]["aa_sequence"].encode(), "S1")
distance = (sequence_i != sequence_j).sum()
distances.append(distance)
# -
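# The nested loop above computes pairwise Hamming distances one pair at a time; assuming equal-length sequences, the same matrix can be obtained in one vectorized numpy broadcast (the toy sequences below are made up):

```python
import numpy as np

# Broadcast-compare every pair of byte arrays at once: result[i, j]
# counts the positions where sequence i and sequence j differ.
seqs = np.array([list(s.encode()) for s in ["ACDE", "ACDF", "GCDE"]],
                dtype=np.uint8)
pairwise = (seqs[:, None, :] != seqs[None, :, :]).sum(axis=2)
print(pairwise)
# [[0 1 1]
#  [1 0 2]
#  [1 2 0]]
```

The condensed form expected by `squareform` is just the upper triangle of this matrix, row by row.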
distances = np.array(distances)
distances.shape
squareform(distances).shape
distance_matrix = squareform(distances)
tsne = TSNE(n_components=1, learning_rate=500, metric="precomputed", random_state=314)
X_embedded_1d = tsne.fit_transform(distance_matrix)
X_embedded_1d.shape
df["tsne_1d"] = X_embedded_1d
df.head()
plt.plot(
df["numdate"],
df["random"],
"o"
)
plt.plot(
df["numdate"],
df["tsne_1d"],
"o"
)
minimal_df = df.drop(columns=["aa_sequence"])
minimal_df = minimal_df.sort_values(["timepoint", "frequency"])
minimal_df["timepoint_occurrence"] = minimal_df.groupby("timepoint")["strain"].cumcount()
counts_by_timepoint = minimal_df.groupby("timepoint")["strain"].count().reset_index().rename(columns={"strain": "count"})
counts_by_timepoint
minimal_df = minimal_df.merge(
counts_by_timepoint,
on="timepoint"
)
minimal_df["y_position"] = (minimal_df["timepoint_occurrence"]) / minimal_df["count"]
plt.plot(
minimal_df["timepoint"],
(minimal_df["timepoint_occurrence"]) / minimal_df["count"],
"o",
alpha=0.6
)
minimal_df = minimal_df.drop(columns=["count"])
# ## Join minimal data frame with flow pairs
minimal_df.head()
flow_df.head()
paired_df = minimal_df.merge(
flow_df,
on="strain",
how="left"
)
paired_df.head()
full_df = paired_df.merge(
minimal_df,
left_on="other_strain",
right_on="strain",
suffixes=["", "_other"],
how="left"
)
full_df = np.round(full_df, 4)
full_df["strain_occurrence"] = full_df.groupby("strain")["strain"].cumcount()
full_df.head()
full_df[full_df["flow"] == 0]
full_df.to_csv(
"../results/emd_example.csv",
sep=",",
index=False,
header=True
)
| analyses/2020-11-24-earth-movers-distance-diagram.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Recursive least squares
#
# Recursive least squares is an expanding window version of ordinary least squares. In addition to making the recursively computed regression coefficients available, the recursively computed residuals allow the construction of statistics to investigate parameter instability.
#
# The `RecursiveLS` class allows computation of recursive residuals and computes CUSUM and CUSUM of squares statistics. Plotting these statistics along with reference lines denoting statistically significant deviations from the null hypothesis of stable parameters allows an easy visual indication of parameter stability.
#
# Finally, the `RecursiveLS` model allows imposing linear restrictions on the parameter vectors, and can be constructed using the formula interface.
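# Before turning to `RecursiveLS`, here is a minimal numpy sketch of the recursion itself (not statsmodels' implementation): each observation updates the coefficient estimate and a scaled inverse-Gram matrix via the Sherman-Morrison identity, and with a diffuse initialization the final estimate matches OLS up to initialization effects. The data are synthetic.

```python
import numpy as np

# Synthetic regression problem with known coefficients.
rng = np.random.default_rng(0)
n_obs, k = 200, 3
X = rng.normal(size=(n_obs, k))
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + 0.01 * rng.normal(size=n_obs)

beta = np.zeros(k)
P = np.eye(k) * 1e6  # diffuse initialization of the scaled covariance
for x_t, y_t in zip(X, y):
    Px = P @ x_t
    gain = Px / (1.0 + x_t @ Px)        # Kalman-style gain
    beta = beta + gain * (y_t - x_t @ beta)
    P = P - np.outer(gain, Px)          # rank-one covariance downdate

print(np.round(beta, 2))  # close to true_beta [ 1.  -2.   0.5]
```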
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from pandas_datareader.data import DataReader
np.set_printoptions(suppress=True)
# -
# ## Example 1: Copper
#
# We first consider parameter stability in the copper dataset (description below).
# +
print(sm.datasets.copper.DESCRLONG)
dta = sm.datasets.copper.load_pandas().data
dta.index = pd.date_range("1951-01-01", "1975-01-01", freq="AS")
endog = dta["WORLDCONSUMPTION"]
# To the regressors in the dataset, we add a column of ones for an intercept
exog = sm.add_constant(
dta[["COPPERPRICE", "INCOMEINDEX", "ALUMPRICE", "INVENTORYINDEX"]]
)
# -
# First, construct and fit the model, and print a summary. The `RecursiveLS` model computes the regression parameters recursively, so there are as many estimates as there are datapoints; the summary table, however, only presents the regression parameters estimated on the entire sample. Except for small effects from the initialization of the recursions, these estimates are equivalent to OLS estimates.
# +
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
# -
# The recursive coefficients are available in the `recursive_coefficients` attribute. Alternatively, plots can be generated using the `plot_recursive_coefficient` method.
print(res.recursive_coefficients.filtered[0])
res.plot_recursive_coefficient(range(mod.k_exog), alpha=None, figsize=(10, 6))
# The CUSUM statistic is available in the `cusum` attribute, but usually it is more convenient to visually check for parameter stability using the `plot_cusum` method. In the plot below, the CUSUM statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
print(res.cusum)
fig = res.plot_cusum()
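# For intuition, the CUSUM statistic is just the cumulative sum of standardized recursive residuals. A sketch with simulated residuals (white noise, i.e. the null of stable parameters) and an approximate version of the 5% significance lines; the exact band formula used by statsmodels may differ slightly.

```python
import numpy as np

# `w` stands in for res.recursive_residuals; here it is simulated
# white noise, which is what we expect under stable parameters.
rng = np.random.default_rng(42)
w = rng.normal(size=100)
sigma_hat = w.std(ddof=1)
cusum = np.cumsum(w) / sigma_hat

# Approximate 5% significance lines: straight lines that widen with t
# (constant 0.948 from Brown, Durbin and Evans, 1975).
n = len(w)
t = np.arange(1, n + 1)
bound = 0.948 * np.sqrt(n) * (1 + 2 * t / n)
```

Under stable parameters the `cusum` path should stay between `-bound` and `+bound` about 95% of the time; `plot_cusum` draws exactly this kind of picture.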
# Another related statistic is the CUSUM of squares. It is available in the `cusum_squares` attribute, but it is similarly more convenient to check it visually, using the `plot_cusum_squares` method. In the plot below, the CUSUM of squares statistic does not move outside of the 5% significance bands, so we fail to reject the null hypothesis of stable parameters at the 5% level.
res.plot_cusum_squares()
# ## Example 2: Quantity theory of money
#
# The quantity theory of money suggests that "a given change in the rate of change in the quantity of money induces ... an equal change in the rate of price inflation" (Lucas, 1980). Following Lucas, we examine the relationship between double-sided exponentially weighted moving averages of money growth and CPI inflation. Although Lucas found the relationship between these variables to be stable, more recently it appears that the relationship is unstable; see e.g. Sargent and Surico (2010).
start = "1959-12-01"
end = "2015-01-01"
m2 = DataReader("M2SL", "fred", start=start, end=end)
cpi = DataReader("CPIAUCSL", "fred", start=start, end=end)
# +
def ewma(series, beta, n_window):
nobs = len(series)
scalar = (1 - beta) / (1 + beta)
ma = []
k = np.arange(n_window, 0, -1)
weights = np.r_[beta ** k, 1, beta ** k[::-1]]
for t in range(n_window, nobs - n_window):
window = series.iloc[t - n_window : t + n_window + 1].values
ma.append(scalar * np.sum(weights * window))
return pd.Series(ma, name=series.name, index=series.iloc[n_window:-n_window].index)
m2_ewma = ewma(np.log(m2["M2SL"].resample("QS").mean()).diff().iloc[1:], 0.95, 10 * 4)
cpi_ewma = ewma(
np.log(cpi["CPIAUCSL"].resample("QS").mean()).diff().iloc[1:], 0.95, 10 * 4
)
# -
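# A quick sanity check on the normalization in `ewma` above: the two-sided weights $\beta^{|k|}$ sum to $(1+\beta)/(1-\beta)$ as the window grows, so the factor `scalar = (1 - beta) / (1 + beta)` makes the weights sum to approximately one.

```python
import numpy as np

# Reproduce the weight construction from ewma() with a very wide
# window, which approximates the infinite two-sided sum.
beta = 0.95
n_window = 2000
k = np.arange(n_window, 0, -1)
weights = np.r_[beta ** k, 1, beta ** k[::-1]]
scalar = (1 - beta) / (1 + beta)
print(scalar * weights.sum())  # ~1.0
```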
# After constructing the moving averages using the $\beta = 0.95$ filter of Lucas (with a window of 10 years on either side), we plot each of the series below. Although they appear to move together for part of the sample, after 1990 they appear to diverge.
# +
fig, ax = plt.subplots(figsize=(13, 3))
ax.plot(m2_ewma, label="M2 Growth (EWMA)")
ax.plot(cpi_ewma, label="CPI Inflation (EWMA)")
ax.legend()
# +
endog = cpi_ewma
exog = sm.add_constant(m2_ewma)
exog.columns = ["const", "M2"]
mod = sm.RecursiveLS(endog, exog)
res = mod.fit()
print(res.summary())
# -
res.plot_recursive_coefficient(1, alpha=None)
# The CUSUM plot now shows substantial deviation at the 5% level, suggesting a rejection of the null hypothesis of parameter stability.
res.plot_cusum()
# Similarly, the CUSUM of squares shows substantial deviation at the 5% level, also suggesting a rejection of the null hypothesis of parameter stability.
res.plot_cusum_squares()
# ## Example 3: Linear restrictions and formulas
# ## Linear restrictions
#
# It is not hard to implement linear restrictions, using the `constraints` parameter in constructing the model.
# +
endog = dta["WORLDCONSUMPTION"]
exog = sm.add_constant(
dta[["COPPERPRICE", "INCOMEINDEX", "ALUMPRICE", "INVENTORYINDEX"]]
)
mod = sm.RecursiveLS(endog, exog, constraints="COPPERPRICE = ALUMPRICE")
res = mod.fit()
print(res.summary())
# -
# ## Formula
#
# One could fit the same model using the class method `from_formula`.
mod = sm.RecursiveLS.from_formula(
"WORLDCONSUMPTION ~ COPPERPRICE + INCOMEINDEX + ALUMPRICE + INVENTORYINDEX",
dta,
constraints="COPPERPRICE = ALUMPRICE",
)
res = mod.fit()
print(res.summary())
| examples/notebooks/recursive_ls.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
import seaborn as sns
import glob
import os
# +
# construct big classification
big_class = {}
big_class['multiple_object'] = ['EllipVar', 'Symbiotic*','SB*','DQHer',
'Nova-like','EB*betLyr','AMHer','Nova','EB*Algol',
'EB*WUMa','CataclyV*',
'DwarfNova','EB*']
big_class['star'] = ['brownD*','SG*','RCrB_Candidate', 'HV*', 'WR*', 'YellowSG*','gammaDor','RotV*alf2CVn',
'Erupt*RCrB','BlueStraggler','Eruptive*','V*?', 'Pulsar',
'PulsV*bCep','low-mass*','post-AGB*', 'Pec*','pMS*','HotSubdwarf',
'PM*','*inNeb','pulsV*SX','RGB*','HB*','BYDra',
'PulsV*RVTau', 'BlueSG*','Irregular_V*','WD*','Ae*','RedSG*',
'AGB*','OH/IR','Be*','Cepheid','PulsV*delSct','RotV*','PulsV*',
'PulsV*WVir','S*','RSCVn','deltaCep','TTau*','Em*','Orion_V*',
'YSO','V*','C*','Mira','LPV*','Star','RRLyr']
big_class['AGN-like'] = ['LINER','Blazar','AGN','BLLac','QSO','Galaxy']
big_class['other_SN'] = ['SNIb-pec', 'SNIb/c','SNII-pec','SN', 'SNIbn','SNIc-BL','SNI',
'SNIb','SNIIb','SLSN-II','SNIIP','SLSN-I','SNIc','SNIIn','SNII',
'SN Ibn','SN Ic-BL','SN I', 'SN Ib','SN IIb','SN IIP', 'SN Ic','SN IIn',
'SN II', 'SN Icn', 'SNIcn', 'SN Ib/c','SN Ib-pec','SN IIL', 'SN II-pec']
big_class['other_TNS'] = ['Mdwarf', 'LBV','TDE','Other','CV','Varstar', 'M dwarf','LRN',
'FRB']
big_class['SNIa'] = ['Ia', 'SN Ia', 'SN Ia-91T-like', 'SN Ia-91bg-like', 'SN Ia-CSM',
'SN Ia-pec', 'SN Iax[02cx-like]']
# reverse it
class_dict = {}
for key in big_class.keys():
for item in big_class[key]:
class_dict[item] = key
# -
# # Full initial data set (all available labels from SIMBAD and TNS)
# this folder is available through Zenodo:
# https://zenodo.org/record/5645609#.Yc5SpXXMJNg
dirname_input = '../../../../data/AL_data/'
flist = os.listdir(dirname_input)
# +
simbad_alerts = []
tns_alerts = []
tns_classes = []
simbad_objects = []
tns_objects = []
simbad_classes = []
# read every TNS and SIMBAD file in the input directory
for name in flist:
if 'simbad' in name:
d1 = pd.read_parquet(dirname_input + name)
simbad_alerts.append(d1.shape[0])
nobjs = np.unique(d1['objectId'].values).shape[0]
simbad_objects.append(nobjs)
d2 = d1.drop_duplicates(subset=['objectId'], keep='first')
simbad_classes = simbad_classes + list(d2['cdsxmatch'].values)
elif 'tns' in name:
d1 = pd.read_parquet(dirname_input + name)
tns_alerts.append(d1.shape[0])
nobjs = np.unique(d1['objectId'].values).shape[0]
tns_objects.append(nobjs)
d2 = d1.drop_duplicates(subset=['objectId'], keep='first')
tns_classes = tns_classes + list(d2['TNS'].values)
# -
# number of alerts with SIMBAD classification
sum(simbad_alerts)
# number of objects with SIMBAD classification
len(simbad_classes)
# +
# check classes of all SIMBAD objects
simbad_orig_classes, simbad_orig_numbers = \
np.unique(simbad_classes, return_counts=True)
simbad_orig_classes_perc = 100*np.round(simbad_orig_numbers/len(simbad_classes), 4)
df_simbad_orig = pd.DataFrame(np.array([simbad_orig_classes,
simbad_orig_numbers,
simbad_orig_classes_perc]).transpose(),
columns=['class', 'number', 'perc'])
# -
df_simbad_orig.to_csv('../../../../referee/data/simbad_orig_classes.csv',
index=False)
# number of alerts with TNS classification
sum(tns_alerts)
# number of objects with TNS classification
sum(tns_objects)
# +
# check classes of all TNS objects
tns_orig_classes, tns_orig_numbers = \
np.unique(tns_classes, return_counts=True)
tns_orig_classes_perc = 100*np.round(tns_orig_numbers/len(tns_classes), 4)
df_tns_orig = pd.DataFrame(np.array([tns_orig_classes,
tns_orig_numbers,
tns_orig_classes_perc]).transpose(),
columns=['class', 'number', 'perc'])
# -
df_tns_orig.to_csv('../../../../referee/data/tns_orig_classes.csv',
index=False)
# # Raw vs feature extraction
fname = '../../../../referee/data/raw.csv.gz'
data = pd.read_csv(fname)
data.shape
data.shape[0] - sum(tns_alerts)
np.unique(data['objectId'].values).shape
np.unique(data['objectId'].values).shape[0] - sum(tns_objects)
# +
data_raw = []
galaxy = []
other_sn = []
mult = []
other_tns = []
snia = []
for i in range(data.shape[0]):
objtype = data.iloc[i]['TNS']
if objtype == '-99':
objtype = data.iloc[i]['cdsxmatch']
data_raw.append(class_dict[objtype])
big = class_dict[objtype]
if big == 'AGN-like':
galaxy.append(objtype)
if big == 'other_SN':
other_sn.append(objtype)
if big == 'multiple_object':
mult.append(objtype)
if big == 'other_TNS':
other_tns.append(objtype)
if big == 'SNIa':
snia.append([objtype, data.iloc[i]['objectId']])
data_raw = np.array(data_raw)
galaxy = np.array(galaxy)
other_tns = np.array(other_tns)
mult = np.array(mult)
snia = np.array(snia)
# -
np.unique(snia[:,1]).shape
# +
sntype, freq = np.unique(galaxy, return_counts=True)
print('Galaxy-sub-type --- number')
for i in range(len(sntype)):
print(sntype[i], ' -- ', freq[i])
# -
sntype, freq = np.unique(other_sn, return_counts=True)
print('SN-sub-type --- number')
for i in range(len(sntype)):
print(sntype[i], ' -- ', freq[i])
# +
sntype, freq = np.unique(other_tns, return_counts=True)
print('Other TNS-sub-type --- number')
for i in range(len(sntype)):
print(sntype[i], ' -- ', freq[i])
# +
sntype, freq = np.unique(mult, return_counts=True)
print('Multiple object-sub-type --- number')
for i in range(len(sntype)):
print(sntype[i], ' -- ', freq[i])
# +
features = pd.read_csv('../../../../referee/data/features.csv', index_col=False)
features_class = []
for i in range(features.shape[0]):
objtype = features.iloc[i]['type']
features_class.append(class_dict[objtype])
features_class = np.array(features_class)
# +
objId = []
for i in range(features.shape[0]):
candid = features.iloc[i]['id']
indx = list(data['candid'].values).index(candid)
objId.append([data.iloc[indx]['objectId'],data.iloc[indx]['TNS']])
# -
len(objId)
np.unique(np.array(objId)[:,0]).shape
Ia_flag = np.array([item in big_class['SNIa'] for item in np.array(objId)[:,1]])
Ia_id = np.array(objId)[Ia_flag]
np.unique(Ia_id[:,0]).shape
# +
types_raw, number_raw = np.unique(data_raw, return_counts=True)
types_features, number_features = np.unique(features_class, return_counts=True)
raw_pop = pd.DataFrame()
raw_pop['type'] = types_raw
raw_pop['sample fraction'] = number_raw.astype(float)/len(data_raw)
raw_pop['number'] = number_raw
raw_pop['sample'] = 'raw'
c1 = pd.DataFrame()
c1['type'] = np.unique(features_class, return_counts=True)[0]
c1['sample fraction'] = np.unique(features_class, return_counts=True)[1]/len(features_class)
c1['sample'] = 'after feature extraction'
c1['number'] = np.unique(features_class, return_counts=True)[1]
pop = pd.concat([raw_pop,c1], ignore_index=True)
# -
pop
sum(pop['number'][pop['sample'] == 'after feature extraction'])
sum(pop['number'][pop['sample'] == 'raw'])
# +
c = ['#F5622E', '#15284F']
f, ax = plt.subplots(figsize=(8, 5))
sns.set_palette('Spectral')
sns.barplot(x="sample fraction", y="type", data=pop,
hue='sample', ci=None, palette=c)
ax.set(xlim=(0, 0.9), ylabel="")
ax.set_xlabel(xlabel="fraction of full sample", fontsize=14)
ax.set_yticklabels(types_raw, fontsize=14)
sns.despine(left=True, bottom=True)
plt.tight_layout()
#plt.show()
plt.savefig('../../../../referee/plots/perc_raw_features.pdf')
# -
pop
# # Queried sample
# +
res_queried = {}
for strategy in ['RandomSampling', 'UncSampling']:
flist = glob.glob('../../../../referee/' + strategy + '/queries/queried_' + strategy + '_v*.dat')
res_queried[strategy] = {}
for name in big_class.keys():
res_queried[strategy][name] = []
for j in range(len(flist)):
data = pd.read_csv(flist[j], delim_whitespace=True, index_col=False)
data_class = np.array([class_dict[item] for item in data['type'].values])
sntype, freq = np.unique(data_class, return_counts=True)
for i in range(len(freq)):
res_queried[strategy][sntype[i]].append(freq[i]/data.shape[0])
# -
for strategy in ['RandomSampling', 'UncSampling']:
print('**** ' + strategy + ' ****')
for key in res_queried[strategy].keys():
print(key, ' -- ', np.round(100* np.mean(res_queried[strategy][key]), 2),
' -- ', np.round(100*np.std(res_queried[strategy][key]),2))
print('\n')
# +
df1 = pd.DataFrame()
df1['type'] = res_queried['RandomSampling'].keys()
df1['sample fraction'] = [np.mean(res_queried['RandomSampling'][key])
for key in res_queried['RandomSampling'].keys()]
df1['strategy'] = 'RandomSampling'
df2 = pd.DataFrame()
df2['type'] = res_queried['UncSampling'].keys()
df2['sample fraction'] = [np.mean(res_queried['UncSampling'][key])
for key in res_queried['UncSampling'].keys()]
df2['strategy'] = 'UncSampling'
df = pd.concat([df2, df1], ignore_index=True)
# +
c = ['#F5622E', '#15284F']
types = ['multiple_objects', 'star', 'AGN-like', 'other_SN', 'other_TNS', 'SNIa']
f, ax = plt.subplots(figsize=(8, 5))
sns.set_palette('Spectral')
sns.barplot(x="sample fraction", y="type", data=df,
hue='strategy', ci=None, palette=c)
ax.set(xlim=(0, 0.9), ylabel="")
ax.set_xlabel(xlabel="fraction of full sample", fontsize=14)
ax.set_yticklabels(types, fontsize=14)
sns.despine(left=True, bottom=True)
plt.tight_layout()
#plt.show()
plt.savefig('../../../../referee/plots/queried_classes.pdf')
# -
# # Photometrically classified Ia sample
# +
res_photIa = {}
for strategy in ['RandomSampling', 'UncSampling']:
res_photIa[strategy] = {}
for name in big_class.keys():
res_photIa[strategy][name] = []
res_photIa[strategy]['tot'] = []
flist = glob.glob('../../../../referee/' + strategy + '/class_prob/v*/class_prob_' + \
strategy + '_loop_299.csv')
for name in flist:
data = pd.read_csv(name)
phot_Ia = data[data['prob_Ia']> 0.5]
data_class = np.array([class_dict[item] for item in phot_Ia['type'].values])
sntype, freq = np.unique(data_class, return_counts=True)
for i in range(len(freq)):
res_photIa[strategy][sntype[i]].append(freq[i]/data_class.shape[0])
res_photIa[strategy]['tot'].append(phot_Ia.shape[0])
# -
np.mean(res_photIa['RandomSampling']['other_SN'])
for strategy in ['RandomSampling', 'UncSampling']:
print('**** ' + strategy + ' ****')
for key in res_photIa[strategy].keys():
if key != 'tot':
print(key, ' -- ', np.round(100* np.mean(res_photIa[strategy][key]), 2),
' -- ', np.round(100*np.std(res_photIa[strategy][key]),2))
print('\n')
for strategy in ['RandomSampling', 'UncSampling']:
print(strategy,' ', np.mean(res_photIa[strategy]['tot']), ' +/- ',
np.std(res_photIa[strategy]['tot']))
print('\n')
# +
res = []
for strategy in ['RandomSampling', 'UncSampling']:
for key in big_class.keys():
mean = np.mean(res_photIa[strategy][key])
std = np.std(res_photIa[strategy][key])
line = [key, mean, std, strategy]
res.append(line)
res2 = pd.DataFrame(data=res, columns=['type', 'perc', 'std', 'strategy'])
# +
c = ['#F5622E', '#15284F']
types = ['multiple_objects', 'star', 'AGN-like', 'other_SN', 'other_TNS', 'SNIa']
f, ax = plt.subplots(figsize=(8, 5))
sns.set_palette('Spectral')
sns.barplot(x="perc", y="type", data=res2,
hue='strategy', ci=None, palette=c)
ax.set(xlim=(0, 0.9), ylabel="")
ax.set_xlabel(xlabel="fraction of full sample", fontsize=14)
ax.set_yticklabels(types, fontsize=14)
sns.despine(left=True, bottom=True)
plt.tight_layout()
#plt.show()
plt.savefig('../../../../referee/plots/photom_classified.pdf')
# -
res2
# # number of Ia in test sample for best model
fname = '../../../../referee/UncSampling/queries/queried_UncSampling_v68.dat'
data = pd.read_csv(fname, index_col=False, delim_whitespace=True)
sum(data['type'].values == 'Ia')
1600-132-5
| code/plots/01_06_populations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import findspark
findspark.init()
from pyspark.sql.session import SparkSession
spark = SparkSession.builder.appName('demo').master("local[*]").getOrCreate()
df1 = spark.read.option("header",True).csv("C:/Users/Sudip/Desktop/pyspark-project/2010-summary.csv")
df1.printSchema()
df1.show(10)
df1.take(10)
# We can apply sort as a transformation to the previous dataframe,
# then inspect the physical plan with explain()
df1.sort("count").explain()
# register any dataframe as a temporary view so it can be queried with SQL
df1.createOrReplaceTempView("flightData")
sql1 = spark.sql("select dest_country_name, count, origin_country_name from flightData")
sql1.show(5)
# Now we can use SQL to express transformations according to our business logic
spark.sql("select dest_country_name from flightData where dest_country_name='Egypt'").show()
spark.sql("select count(dest_country_name) from flightData").show()
spark.sql("select dest_country_name, count(1) from flightData Group By dest_country_name").show(5)
spark.sql("select * from flightData").show(5)
spark.sql("select dest_country_name, rank() over(partition by dest_country_name order by count) rank \
from flightData").show(5)
spark.sql("select *, \
row_number() over(order by count) Row_number \
from flightData").show(5)
spark.sql("select *, \
dense_rank() over(partition by origin_country_name order by count) Dense_Rank \
from flightData").show(5)
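# The same window-function logic can be sketched outside Spark with Python's stdlib sqlite3 (window functions require SQLite >= 3.25; the table and rows below are made up, not the flight data used above):

```python
import sqlite3

# Tiny in-memory table mimicking the flightData schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE flightData (dest TEXT, origin TEXT, count INT)")
con.executemany(
    "INSERT INTO flightData VALUES (?, ?, ?)",
    [("Egypt", "US", 15), ("Egypt", "UK", 12), ("India", "US", 20)],
)

# dense_rank over a partition, exactly as in the Spark SQL query above.
rows = con.execute("""
    SELECT dest, count,
           dense_rank() OVER (PARTITION BY dest ORDER BY count) AS dr
    FROM flightData
    ORDER BY dest, count
""").fetchall()
print(rows)  # [('Egypt', 12, 1), ('Egypt', 15, 2), ('India', 20, 1)]
```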
# in Python, to use aggregate functions we need to import them first
from pyspark.sql.functions import max
spark.sql("select max(count) from flightData group by dest_country_name").show(5)
from pyspark.sql.functions import max, min, count, avg
spark.sql("select min(count) from flightData").show()
spark.sql("select max(count) from flightData").show()
spark.sql("select avg(count) from flightData").show()
spark.sql("select round(avg(count)) from flightData").show()
spark.sql("""
SELECT DEST_COUNTRY_NAME, sum(count) as dest_total
FROM flightData
GROUP BY DEST_COUNTRY_NAME
ORDER BY sum(count) DESC
LIMIT 5
""").show()
# ### Working with pyspark
# +
#select only one column in pyspark
df1.select(df1["dest_country_name"]).limit(5).show(5)
# -
df = spark.read.format("csv").option("header", True) \
.option("inferSchema", True) \
.load("C:/Users/Sudip/Desktop/pyspark-project/*.csv")
| PySpark-SQL.ipynb |