Define new financial instruments

What we have now: prices of financial instruments:
- bonds (assume: fixed price)
- stocks
- exchange rates
- oil
- ...

$\Longrightarrow$ Tradeables with variable prices

We can form a portfolio by
- holding some cash (possibly less than 0; that is called debt) ...
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# aapl: DataFrame of Apple price data, loaded earlier in the notebook
plt.figure()
plt.plot(np.log(aapl['Adj Close']))
plt.ylabel('logarithmic price')
plt.xlabel('year')
plt.title('Logarithmic price history of Apple stock')
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
Now the roughness of the chart looks more even $\Rightarrow$ we should model increments proportional to the stock price! This leads us to some assumptions for the stock price process:
- the distribution of relative changes is constant over time
- small changes appear often, large changes rarely: changes are normally di...
S0 = 1
sigma = 0.2/np.sqrt(252)
mu = 0.08/252
%matplotlib inline
for i in range(0, 5):
    r = np.random.randn(1000)
    plt.plot(S0 * np.cumprod(np.exp(sigma * r + mu)))

S0 = 1.5  # start price
K = 1.0   # strike price
mu = 0    # average growth
sigma = 0.2/np.sq...
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
Option prices:
S0 = np.linspace(0.0, 2.0, 21)
C = []
for k in range(21):
    C.append(MC_call_price(k*2/20, K, mu, sigma, N, M))
plt.plot(S0, C)
plt.ylabel('Call price')
plt.xlabel('Start price')
plt.title('Call price')
plt.show()
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
This curve can also be calculated theoretically: using stochastic calculus, one can derive the famous Black-Scholes equation, which yields exactly this curve. We will not go into detail ...
from IPython.display import Image
Image("Picture_Then_Miracle_Occurs.PNG")
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
... but will just state the final result! Black Scholes formula: $${\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q+{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$ $${\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T-t}}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{...
d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T - t))
d_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T - t))
call = lambda σ, T, t, S, K: S * sp.stats.norm.cdf(d_1(σ, T, t, S, K)) - K * sp.stats.norm.cdf(d_2(σ, T, t, S, K))
Delta = la...
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
For small prices we do not need to own shares to hedge the option. For high prices we need exactly one share. The interesting area is around the strike price. Simulate a portfolio consisting of 1 call option and $-\Delta$ shares: $$P = C - \Delta S$$ To a first approximation, the portfolio value should be constant!
N = 10  # runs
def Simulate_Price_Series(S0, sigma, N, M):
    P = []
    for n in range(N):
        r = np.random.randn(M)
        S = S0 * np.cumprod(np.exp(sigma * r))
        for m in range(M):
            P.append(Delta(sigma, M, m, S, K) * ...
    return S

plt.plot(1 + np.cumsum(np.diff(S) * Delta(sigma, 4, 0, S, ...
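Since the simulation cell above is incomplete, here is a minimal, self-contained sketch of the same idea under the stated assumptions ($r = q = 0$, periodic rebalancing; all parameter values are illustrative): simulate geometric Brownian motion paths, hold 1 call and short $\Delta$ shares, and compare the spread of the final P&L with and without the hedge.

```python
import numpy as np
from scipy.stats import norm

# Black-Scholes call price and delta with r = q = 0; tau is time to maturity.
def bs_call(sigma, tau, S, K):
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * norm.cdf(d2)

def bs_delta(sigma, tau, S, K):
    d1 = (np.log(S / K) + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

rng = np.random.default_rng(0)
sigma, T, K, S0 = 0.2, 1.0, 1.0, 1.0   # illustrative parameters
n_steps, n_paths = 100, 100
dt = T / n_steps

hedged, unhedged = [], []
for _ in range(n_paths):
    z = rng.standard_normal(n_steps)
    S = S0 * np.cumprod(np.r_[1.0, np.exp(sigma * np.sqrt(dt) * z - 0.5 * sigma**2 * dt)])
    pnl = 0.0
    for i in range(n_steps):
        tau = T - i * dt
        c0 = bs_call(sigma, tau, S[i], K)
        # at expiry the option is worth its payoff
        c1 = max(S[i + 1] - K, 0.0) if i == n_steps - 1 else bs_call(sigma, tau - dt, S[i + 1], K)
        # change in option value minus the gain on the Delta shares we shorted
        pnl += (c1 - c0) - bs_delta(sigma, tau, S[i], K) * (S[i + 1] - S[i])
    hedged.append(pnl)
    unhedged.append(max(S[-1] - K, 0.0) - bs_call(sigma, T, S0, K))

print(np.std(hedged), np.std(unhedged))  # the hedged P&L fluctuates far less
```

With continuous rebalancing the hedged P&L would be exactly zero; the residual spread here is the discretization error of daily-style rebalancing.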
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
Challenges
1) the price depends on the calibration of $\sigma$! Parameters may not be constant over time!
2) the price depends on the validity of the model

The main problem is the second one:
A) $\sigma$ and $\mu$ may change over time. Hence changes of volatility should be adapted in the price $\Longrightarrow$ new more...
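To see why a constant $\sigma$ is a questionable assumption, one can estimate volatility over rolling windows. A minimal sketch on simulated returns with a deliberate regime change halfway through (all names and parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated daily log-returns: volatility jumps from 1% to 3% at the midpoint
r = np.r_[0.01 * rng.standard_normal(500), 0.03 * rng.standard_normal(500)]

window = 60  # estimate sigma from the trailing 60 days
rolling_sigma = np.array([r[i - window:i].std() for i in range(window, len(r))])
rolling_sigma_ann = rolling_sigma * np.sqrt(252)  # annualized, 252 trading days

print(rolling_sigma_ann[0], rolling_sigma_ann[-1])  # the estimate drifts with the regime
```

Any price quoted with the early calibration would be badly wrong in the later regime, which is exactly challenge 1).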
np.histogram(np.diff(aapl['Adj Close']))
plt.hist(np.diff(aapl['Adj Close']), bins='auto')  # plt.hist passes its arguments to np.histogram
plt.title("Histogram of daily returns for Apple")
plt.show()
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
This is not a normal distribution! 2) Normally distributed increments are not realistic. Real return distributions show:
- heavy tails
- gain/loss asymmetry
- aggregational Gaussianity
- intermittency (parameters change over time)
- volatility clustering
- leverage effect
- volume/volatility correlation
- slow decay of autoc...
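The "heavy tails" point can be made concrete by comparing the excess kurtosis of normal increments with a fat-tailed alternative; a Student-t is used here purely as an illustration, not as a claim about the true return distribution:

```python
import numpy as np

def excess_kurtosis(x):
    # sample excess kurtosis: 4th standardized moment minus 3 (0 for a normal)
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

rng = np.random.default_rng(1)
normal_r = rng.standard_normal(100_000)
heavy_r = rng.standard_t(df=5, size=100_000)  # df=5: finite but large kurtosis

print(excess_kurtosis(normal_r))  # close to 0
print(excess_kurtosis(heavy_r))   # clearly positive: large moves are far more common
```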
def MC_call_price_Loc_Vol(S0, K, mu, sigma, N, M):
    CSum = 0
    SSum = 0
    for n in range(N):
        r = np.random.randn(M)
        r2 = np.random.randn(M)
        vol = vol0 * np.cumprod(np.exp(sigma * r2))  # note: closing paren was missing; vol0 defined elsewhere
        S = S0 * np.cumprod(np.exp(vol * r))
        SSum += S
        CSum += call_price(S[M-1], K) ...
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
Proposed solution Find a way to price an option without the assumption of a market model, without the need to calibrate and recalibrate the model.
def iterate_series(n=1000, S0=1):
    while True:
        r = np.random.randn(n)
        S = np.cumsum(r) + S0
        yield S, r

for (s, r) in iterate_series():
    t, t_0 = 0, 0
    for t in np.linspace(0, len(s)-1, 100):
        r = s[int(t)] / s[int(t_0)]
        t_0 = t
    break

state = (stock_val, besitz)  # besitz: German for "holding"
notebooks/TD Learning Black Scholes1.ipynb
FinTechies/HedgingRL
mit
all(iterable) takes an iterable and returns True if every element is truthy. It simply joins all elements of the iterable with the and operator. Remember that zero and the empty string are treated as False in Python 3.
# def all(iterable):
#     for element in iterable:
#         if not element:
#             return False
#     return True

x = [1, 2, 0]
print(all(x))
y = [1, 2, 3, 4]
print(all(y))
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
any(iterable) is similar to all(iterable) but uses the or operator: it returns True if at least one element is truthy.
# def any(iterable):
#     for element in iterable:
#         if element:
#             return True
#     return False

print(any([0]))
print(any([1, 2, 3]))
print(any(['']))
print(any([' ']))  # Remember: a space is not equivalent to the empty string
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
bin(x) returns the binary representation of the integer x as a string (the integer literal you pass in may be written in any radix).
print(bin(7))
type(bin(7))
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
int(x, base) converts the string x, interpreted in the given base, to a base-10 integer. Called on a float, int(x) truncates towards zero.
int(3.4)
int('3')
int("10", 10)
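A few more int() conversions, showing explicit bases and the round trip with bin():

```python
print(int("10", 2))    # binary "10" -> 2
print(int("ff", 16))   # hex "ff" -> 255
print(int(bin(7), 2))  # the "0b" prefix is accepted when the base matches -> 7
print(int("0x1A", 0))  # base 0 infers the base from the prefix -> 26
```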
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
ord(c) returns the integer Unicode code point of the given character c.
ord('W')
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
pow(x, y[, z]) returns x to the power y; if z is given, it returns x to the power y, mod z.
pow(3, 4)
pow(3, 4, 5)  # 81 mod 5 = 1; note pow() with no arguments is a TypeError
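The three-argument form is the interesting one: pow(x, y, z) computes the modular power without ever building the huge intermediate value x**y.

```python
print(pow(3, 4))         # 81
print(pow(2, 10, 1000))  # 1024 mod 1000 = 24
# same answer as the naive computation, but far cheaper for big exponents:
print(pow(7, 123456, 13) == (7 ** 123456) % 13)  # True
```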
python3/built-ins.ipynb
fmalazemi/CheatSheets
gpl-3.0
We've now dropped the last of the discrete, numerical, inexplicable data, and removed children from the mix. Extracting the samples we are interested in:
# Let's extract ADHD and Bipolar patients (mutually exclusive)
ADHD = X.loc[X['ADHD'] == 1]
ADHD = ADHD.loc[ADHD['Bipolar'] == 0]
BP = X.loc[X['Bipolar'] == 1]
BP = BP.loc[BP['ADHD'] == 0]
print ADHD.shape
print BP.shape
# Keeping a backup of the data frame object because numpy arrays don't play well with certain s...
Code/Assignment-9/Independent Analysis.ipynb
Upward-Spiral-Science/spect-team
apache-2.0
We see here that there are 1383 people who have ADHD but are not bipolar, and 440 people who are bipolar but do not have ADHD. Dimensionality reduction PCA
combined = pd.concat([ADHD, BP])
combined_backup = pd.concat([ADHD, BP])

pca = PCA(n_components=24, whiten=True).fit(combined)  # whiten takes a bool, not the string "True"
combined = pca.transform(combined)
print sum(pca.explained_variance_ratio_)

combined = pd.DataFrame(combined)
ADHD_reduced_df = combined[:1383]
BP_reduced_df = combined[1383:]
ADHD_r...
We see here that most of the variance is preserved with just 24 features. Manifold Techniques ISOMAP
combined = manifold.Isomap(20, 20).fit_transform(combined_backup)  # n_neighbors=20, n_components=20
ADHD_iso = combined[:1383]
BP_iso = combined[1383:]
print pd.DataFrame(ADHD_iso).head()
Multi-dimensional scaling
mds = manifold.MDS(20).fit_transform(combined_backup)
ADHD_mds = mds[:1383]  # bug fix: slice the MDS output, not the earlier Isomap `combined` array
BP_mds = mds[1383:]
print pd.DataFrame(ADHD_mds).head()
As is evident above, the 2 manifold techniques don't offer very different dimensionality reductions, so we are just going to roll with multi-dimensional scaling. Clustering and other grouping experiments Mean-Shift - mds
ADHD_clust = pd.DataFrame(ADHD_mds)
BP_clust = pd.DataFrame(BP_mds)

# This is a consequence of how we dropped columns, I apologize for the hacky code
data = pd.concat([ADHD_clust, BP_clust])

# Let's see what happens with Mean Shift clustering
bandwidth = estimate_bandwidth(data.get_values(), quantile=0.2, n_samples=...
Though I'm not sure how to tweak the hyper-parameters of the bandwidth estimation function, there doesn't seem to be much difference. Minute variations to the bandwidth result in large cluster differences. Perhaps the data isn't very suitable for a contrived clustering technique like Mean-Shift. Therefore let us attemp...
kmeans = KMeans(n_clusters=2)
kmeans.fit(data.get_values())
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
print('Estimated number of clusters: %d' % len(centroids))
print data.shape
for label in [0, 1]:
    ds = data.get_values()[np.where(labels == label)]
    plt.plot(ds[:,0], ds[:,1], '.')
    ...
As is evident from the above 2 experiments, no clean clustering is apparent: there is significant overlap, though 2 rough groups are discernible. Classification Experiments Let's experiment with a bunch of classifiers
ADHD_mds = pd.DataFrame(ADHD_mds)
BP_mds = pd.DataFrame(BP_mds)
BP_mds['ADHD-Bipolar'] = 0
ADHD_mds['ADHD-Bipolar'] = 1
data = pd.concat([ADHD_mds, BP_mds])
class_labels = data['ADHD-Bipolar']
data = data.drop(['ADHD-Bipolar'], axis=1, inplace=False)
print data.shape
data = data.get_values()
# Leave one Out cros...
6.4 Residuals and Image Quality<a id='deconv:sec:iqa'></a> Using CLEAN or another deconvolution method produces 'nicer' images than the dirty image (except when deconvolution gets out of control). What it means for an image to be 'nicer' is not a well-defined metric; in fact it is almost completely undefined. When we ...
def generalGauss2d(x0, y0, sigmax, sigmay, amp=1., theta=0.):
    """Return a normalized general 2-D Gaussian function
    x0,y0: centre position
    sigmax, sigmay: standard deviation
    amp: amplitude
    theta: rotation angle (deg)"""
    #norm = amp * (1./(2.*np.pi*(sigmax*sigmay))) #normalization factor
    norm ...
6_Deconvolution/6_4_residuals_and_iqa.ipynb
KshitijT/fundamentals_of_interferometry
gpl-2.0
Left: dirty image from a 6 hour KAT-7 observation at a declination of $-30^{\circ}$. Right: deconvolved image. The deconvolved image does not have the same noisy PSF structures around the sources that the dirty image does. We could say that these imaging artefacts are localized and related to the PSF response to bright...
# load deconvolved image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits')
deconvImg = fh[0].data
# load residual image
fh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')
residImg = fh[0].data
peakI = np.max(deconvImg)
print 'Pea...
Method 1 will always result in a lower dynamic range than Method 2, as the deconvolved image includes the sources whereas Method 2 only uses the residuals. Method 3 will result in a dynamic range which varies depending on the number of pixels sampled and which pixels are sampled. One could imagine an unlucky sampling where...
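A common working definition of dynamic range is peak flux over the RMS of some reference region. A minimal sketch of Methods 1 and 2 on synthetic arrays (deconvImg and residImg here are random stand-ins for the FITS data loaded above, with one artificial "source" pixel):

```python
import numpy as np

rng = np.random.default_rng(0)
deconvImg = rng.normal(0.0, 0.01, (256, 256))
deconvImg[128, 128] = 1.0                    # a single bright "source"
residImg = rng.normal(0.0, 0.01, (256, 256))  # noise-like residuals

peakI = deconvImg.max()

# Method 1: RMS over the deconvolved image itself (the source inflates the RMS)
dr_method1 = peakI / np.sqrt(np.mean(deconvImg**2))
# Method 2: RMS over the residual image only
dr_method2 = peakI / np.sqrt(np.mean(residImg**2))

print(dr_method1, dr_method2)  # Method 1 comes out lower, as the text argues
```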
fig = plt.figure(figsize=(8, 7))
gc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', figure=fig)
gc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')
gc1.hide_axis_labels()
gc1.hide_tick_labels()
plt.title('Residual Image')
gc1.add_color...
Take a look at the ImageNet classes
from keras.applications import imagenet_utils
imagenet_utils.CLASS_INDEX_PATH

from urllib.request import urlopen
import json
with urlopen(imagenet_utils.CLASS_INDEX_PATH) as jsonf:
    data = jsonf.read()
class_dict = json.loads(data.decode())
[class_dict[str(i)][1] for i in range(1000)]
Week07/01-Keras-pretrained.ipynb
tjwei/HackNTU_Data_2017
mit
ImageNet 2012 page: http://image-net.org/challenges/LSVRC/2012/signup
Data download: http://academictorrents.com/browse.php?search=imagenet
One thousand sample images: https://www.dropbox.com/s/vippynksgd8c6qt/ILSVRC2012_val_1000.tar?dl=0
# Download the images
import os
import urllib
from urllib.request import urlretrieve

dataset = 'ILSVRC2012_val_1000.tar'

def reporthook(a, b, c):
    print("\rdownloading: %5.1f%%" % (a*b*100.0/c), end="")

if not os.path.isfile(dataset):
    origin = "https://www.dropbox.com/s/vippynksgd8c6qt/ILSVRC2012_val_1000.tar?dl=1"
    ...
Task: Check the Database Take a look at your data folder. You should find two subfolders, each of which will contain a single data.mdb file (or possibly also a lock file): 1. imagenet_cars_boats_train (train for training, not locomotives!) 2. imagenet_cars_boats_val (val for validation or testing) Part 2: Configure the...
# Configure how you want to train the model and with how many GPUs
# This is set to use two GPUs in a single machine, but if you have more GPUs, extend the array [0, 1, 2, n]
gpus = [0]
# Batch size of 32 sums up to roughly 5GB of memory per device
batch_per_device = 32
total_batch_size = batch_per_device * len(gpus)
...
caffe2/python/tutorials/Multi-GPU_Training.ipynb
sf-wind/caffe2
apache-2.0
Part 3: Using Caffe2 Operators to Create a CNN Caffe2 comes with ModelHelper which will do a lot of the heavy lifting for you when setting up a model. Throughout the docs and tutorial this may also be called a model helper object. The only required parameter is name. It is an arbitrary name for referencing the network ...
# LAB WORK AREA FOR PART 3

# Clear workspace to free allocated memory, in case you are running this for a second time.
workspace.ResetWorkspace()

# 1. Create your model helper object for the training model with ModelHelper
# 2. Create your database reader with CreateDB
Part 4: Image Transformations (requires Caffe2 to be compiled with opencv) Now that we have a reader we should take a look at how we're going to process the images. Since images that are found in the wild can be wildly different sizes, aspect ratios, and orientations we can and should train on as much variety as we can...
# LAB WORK AREA FOR PART 4
def add_image_input_ops(model):
    raise NotImplementedError  # Remove this from the function stub
Part 5: Creating a Residual Network Now you get the opportunity to use Caffe2's Resnet-50 creation function! During our Setup we ran from caffe2.python.models import resnet. We can use that for our create_resnet50_model_ops function that we still need to create, and the main part of that will be the resnet.create_resnet50()...
# LAB WORK AREA FOR PART 5
def create_resnet50_model_ops(model, loss_scale):
    raise NotImplementedError  # remove this from the function stub
Part 6: Make the Network Learn The Caffe2 model helper object has several built-in functions that support learning via backpropagation, adjusting weights as it runs through iterations: AddWeightDecay, Iter, net.LearningRate. Below is a reference implementation: ```python def add_parameter_up...
# LAB WORK AREA FOR PART 6
def add_parameter_update_ops(model):
    raise NotImplementedError  # remove this from the function stub
Part 7: Gradient Optimization If you run the network as is you may have issues with memory. Without memory optimization we could reduce the batch size, but we shouldn't have to do that. Caffe2 has a memonger function for this purpose which will find ways to reuse gradients that we created. Below is a reference implemen...
# LAB WORK AREA FOR PART 7
def optimize_gradient_memory(model, loss):
    raise NotImplementedError  # Remove this from the function stub
Part 8: Training the Network with One GPU Now that you've established the basic components to run ResNet-50, you can try it out on one GPU. Now, this could be a lot easier just going straight into the data_parallel_model and all of its optimizations, but to help explain the components needed and to build the helper func...
# LAB WORK AREA FOR PART 8
device_opt = core.DeviceOption(caffe2_pb2.CUDA, gpus[0])
with core.NameScope("imonaboat"):
    with core.DeviceScope(device_opt):
        add_image_input_ops(train_model)
        losses = create_resnet50_model_ops(train_model)
        blobs_to_gradients = train_model.AddGradientOperators(los...
Part 8 ... part ~~2~~ Deux: Train! Here's the fun part where you can tinker with the number of epochs to run and mess with the display. We'll leave this for you to play with as a fait accompli since you worked so hard to get this far!
num_epochs = 1
for epoch in range(num_epochs):
    # Split up the images evenly: total images / batch size
    num_iters = int(train_data_count / total_batch_size)
    for iter in range(num_iters):
        # Stopwatch start!
        t1 = time.time()
        # Run this iteration!
        workspace.RunNet(train_model.net...
Part 9: Getting Parallelized You get bonus points if you can say "getting parallelized" three times fast without messing up. You just saw some interesting numbers in the last step. Take note of those and see how things scale up when we use more GPUs. We're going to use Caffe2's data_parallel_model and its function cal...
# LAB WORK AREA for Part 9

# Reinitializing our configuration variables to accommodate 2 (or more, if you have them) GPUs.
gpus = [0, 1]
# Batch size of 32 sums up to roughly 5GB of memory per device
batch_per_device = 32
total_batch_size = batch_per_device * len(gpus)
# This model discriminates between two labels: c...
Part 10: Create a Test Model After every epoch of training, we like to run some validation data through our model to see how it performs. Like training, this is another net, with its own data reader. Unlike training, this net does not perform backpropagation. It only does a forward pass and compares the output of the n...
# LAB WORK AREA for Part 10

# Create your test model with ModelHelper
# Create your reader with CreateDB
# Use multi-GPU with Parallelize_GPU, but don't utilize backpropagation

# Use workspace.RunNetOnce and workspace.CreateNet to fire up the test network
workspace.RunNetOnce(test_model.param_init_net)
workspace...
Get Ready to Display the Results At the end of every epoch we will take a look at how the network performs visually. We will also report on the accuracy of the training model and the test model. Let's not force you to write your own reporting and display code, so just run the code block below to get those features read...
%matplotlib inline
from caffe2.python import visualize
from matplotlib import pyplot as plt

def display_images_and_confidence():
    images = []
    confidences = []
    n = 16
    data = workspace.FetchBlob("gpu_0/data")
    label = workspace.FetchBlob("gpu_0/label")
    softmax = workspace.FetchBlob("gpu_0/softmax")...
Part 11: Run Multi-GPU Training and Get Test Results You've come a long way. Now is the time to see it all pay off. Since you already ran ResNet once, you can glance at the code below and run it. The big difference this time is your model is parallelized! The additional components at the end deal with accuracy so you ...
# Start looping through epochs where we run the batches of images to cover the entire dataset
# Usually you would want to run a lot more epochs to increase your model's accuracy
num_epochs = 2
for epoch in range(num_epochs):
    # Split up the images evenly: total images / batch size
    num_iters = int(train_data_coun...
If you enjoyed this tutorial and would like to see it in action in a different way, check Caffe2's Python examples to try a script version of this multi-GPU trainer. We also have some more info below in the Appendix and a Solutions section that you can use to run the expected output of this tutorial. Appendix Here are ...
print(str(train_model.param_init_net.Proto())[:1000] + '\n...')
Solutions This section below contains working examples for your reference. You should be able to execute these cells in order and see the expected output. Note: this assumes you have at least 2 GPUs
# SOLUTION for Part 1
from caffe2.python import core, workspace, model_helper, net_drawer, memonger, brew
from caffe2.python import data_parallel_model as dpm
from caffe2.python.models import resnet
from caffe2.proto import caffe2_pb2
import numpy as np
import time
import os
from IPython import display

workspace...
Visualize channel over epochs as an image This will produce what is sometimes called an event related potential / field (ERP/ERF) image. Two images are produced, one with a good channel and one with a channel that does not show any evoked field. It is also demonstrated how to reorder the epochs using a 1D spectral embe...
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)

import numpy as np
import matplotlib.pyplot as plt

import mne
from mne import io
from mne.datasets import sample

print(__doc__)

data_path = sample.data_path()
0.23/_downloads/775a4c9edcb81275d5a07fdad54343dc/channel_epochs_image.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show event-related fields images
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.manifold import spectral_embedding  # noqa
from sklearn.metrics.pairwise import rbf_kernel  # noqa

def order_func(times, data):
    this_data = data[:, (times > 0.0) & (times < 0.350)]
    this_data /=...
This is called an "if / else" statement. It basically allows you to create a "fork" in the flow of your program based on a condition that you define. If the condition is True, the "if"-block of code is executed. If the condition is False, the else-block is executed. Here, our condition is simply the value of the varia...
x = 5
if (x > 0):
    print "x is positive"
else:
    print "x is negative"
24_Prelab_Python-II/Lesson2.ipynb
greenelab/GCB535
bsd-3-clause
So what types of conditionals are we allowed to use in an if / else statement? Anything that can be evaluated as True or False! For example, in natural language we might ask the following true/false questions: is a True? is a less than b? is a equal to b? is a equal to "ATGCTG"? is (a greater than b) and (b greater th...
a = True
if a:
    print "Hooray, a was true!"

a = True
if a:
    print "Hooray, a was true!"
print "Goodbye now!"

a = False
if a:
    print "Hooray, a was true!"
print "Goodbye now!"
Since the line print "Goodbye now!" is not indented, it is NOT considered part of the if-statement. Therefore, it is always printed regardless of whether the if-statement was True or False.
a = True
b = False
if a and b:
    print "Apple"
else:
    print "Banana"
Since a and b are not both True, the conditional statement "a and b" as a whole is False. Therefore, we execute the else-block.
a = True
b = False
if a and not b:
    print "Apple"
else:
    print "Banana"
By using "not" before b, we negate its value: since b is False, the expression not b evaluates to True. Thus the entire conditional as a whole becomes True, and we execute the if-block.
a = True
b = False
if not a and b:
    print "Apple"
else:
    print "Banana"
"not" only applies to the variable directly in front of it (in this case, a). So here, a becomes False, so the conditional as a whole becomes False.
a = True
b = False
if not (a and b):
    print "Apple"
else:
    print "Banana"
When we use parentheses in a conditional, whatever is within the parentheses is evaluated first. So here, the evaluation proceeds like this: First Python decides how to evaluate (a and b). As we saw above, this must be False because a and b are not both True. Then Python applies the "not", which flips that False into...
a = True
b = False
if a or b:
    print "Apple"
else:
    print "Banana"
As you would probably expect, when we use "or", we only need a or b to be True in order for the whole conditional to be True.
cat = "Mittens"
if cat == "Mittens":
    print "Awwww"
else:
    print "Get lost, cat"

a = 5
b = 10
if (a == 5) and (b > 0):
    print "Apple"
else:
    print "Banana"

a = 5
b = 10
if ((a == 1) and (b > 0)) or (b == (2 * a)):
    print "Apple"
else:
    print "Banana"
Ok, this one is a little bit much! Try to avoid complex conditionals like this if possible, since it can be difficult to tell if they're actually testing what you think they're testing. If you do need to use a complex conditional, use parentheses to make it more obvious which terms will be evaluated first! Note on ind...
x = 6 * -5 - 4 * 2 + -7 * -8 + 3
# ******add your code here!*********
2. Built-in functions Python provides some useful built-in functions that perform specific tasks. What makes them "built-in"? Simply that you don't have to "import" anything in order to use them -- they're always available. This is in contrast to the non-built-in functions, which are packaged into modules of similar ...
name = raw_input("Your name: ")
print "Hi there", name, "!"
age = int(raw_input("Your age: "))  # convert input to an int
print "Wow, I can't believe you're only", age
[ Definition ] len() Description: Returns the length of a string (also works on certain data structures). Doesn’t work on numerical types. Syntax: len(string) Examples:
print len("cat")
print len("hi there")
seqLength = len("ATGGTCGCAT")
print seqLength
[ Definition ] abs() Description: Returns the absolute value of a numerical value. Doesn't accept strings. Syntax: abs(number) Examples:
print abs(-10)
print abs(int("-10"))
positiveNum = abs(-23423)
print positiveNum
[ Definition ] round() Description: Rounds a float to the indicated number of decimal places. If no number of decimal places is indicated, rounds to zero decimal places. Syntax: round(someNumber, numDecimalPlaces) Examples:
print round(10.12345)
print round(10.12345, 2)
print round(10.9999, 2)
If you want to learn more built in functions, go here: https://docs.python.org/2/library/functions.html 3. Modules Modules are groups of additional functions that come with Python, but unlike the built-in functions we just saw, these functions aren't accessible until you import them. Why aren’t all functions just bui...
import math
print math.sqrt(4)
print math.log10(1000)
print math.sin(1)
print math.cos(0)
[ Definition ] The random module Description: contains functions for generating random numbers. See full list of functions here: https://docs.python.org/2/library/random.html Examples:
import random
print random.random()        # Return a random floating point number in the range [0.0, 1.0)
print random.randint(0, 10)  # Return a random integer between the specified range (inclusive)
print random.gauss(5, 2)     # Draw from the normal distribution given a mean and standard deviation
# this code will o...
4. Test your understanding: practice set 2 For the following blocks of code, first try to guess what the output will be, and then run the code yourself. These examples may introduce some ideas and common pitfalls that were not explicitly covered in the text above, so be sure to complete this section. The first block b...
# RUN THIS BLOCK FIRST TO SET UP VARIABLES!
a = True
b = False
x = 2
y = -2
cat = "Mittens"

print a
print (not a)
print (a == b)
print (a != b)
print (x == y)
print (x > y)
print (x == 2)  # note: a single = here would be a syntax error
print (a and b)
print (a and not b)
print (a or b)
print (not b or a)
print not (b or a)
print (not b) or a
print (not ...
### Task 1: Select what features you'll use.
### features_list is a list of strings, each of which is a feature name.
### The first feature must be "poi".
names = np.array(my_dataset.keys())
print names.shape, names[:5], "\n"
features_list = my_dataset.itervalues().next().keys()
features_list.sort()
features_list.remo...
python/py/nanodegree/intro_ml/EnronPOI-Copy2.ipynb
austinjalexander/sandbox
mit
### Task 3: Create new feature(s)

# scale
scaler = MinMaxScaler()
scaler = scaler.fit(X_train)
X_train = scaler.transform(X_train)
print X_train.shape
X_test = scaler.transform(X_test)
print X_test.shape
X_train
### Task 4: Try a variety of classifiers
### Please name your classifier clf for easy export below.
### Note that if you want to do PCA or other multi-stage operations,
### you'll need to use Pipelines. For more info:
### http://scikit-learn.org/stable/modules/pipeline.html

classifiers = dict()

def grid_searcher(clf):...
Initial Results

GaussianNB()
Accuracy: 0.25560   Precision: 0.18481   Recall: 0.79800   F1: 0.30011   F2: 0.47968
Total predictions: 10000 ...
grid_searcher(GaussianNB())
grid_searcher(DecisionTreeClassifier())
grid_searcher(SVC())
grid_searcher(KMeans())
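As a sanity check on the tester's numbers above, the F-scores follow directly from precision and recall via $F_\beta = (1+\beta^2)PR/(\beta^2 P + R)$. Plugging in the reported GaussianNB precision and recall (the tiny discrepancy in the last digit is just rounding in the reported values):

```python
def f_beta(precision, recall, beta):
    # general F-beta score from precision and recall
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

p, r = 0.18481, 0.79800  # GaussianNB precision/recall reported above
print(round(f_beta(p, r, 1), 5))  # close to the reported F1 of 0.30011
print(round(f_beta(p, r, 2), 5))  # close to the reported F2 of 0.47968
```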
Although the original images consisted of 92 x 112 pixel images, the version available through scikit-learn contains images downscaled to 64 x 64 pixels. To get a sense of the dataset, we can plot some example images. Let's pick eight indices from the dataset in a random order:
import numpy as np
np.random.seed(21)
idx_rand = np.random.randint(len(X), size=8)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
We can plot these example images using Matplotlib, but we need to make sure we reshape the column vectors to 64 x 64 pixel images before plotting:
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
    plt.subplot(2, 4, p + 1)
    plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
    plt.axis('off')
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
You can see how all the faces are taken against a dark background and are upright. The facial expression varies drastically from image to image, making this an interesting classification problem. Try not to laugh at some of them!

Preprocessing the dataset

Before we can pass the dataset to the classifier, we need to pre...
n_samples, n_features = X.shape
X -= X.mean(axis=0)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
We repeat this procedure for every image to make sure the feature values of every data point (that is, a row in X) are centered around zero:
X -= X.mean(axis=1).reshape(n_samples, -1)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
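The two centering steps above can be checked on a tiny array. A quick sketch in plain NumPy, independent of the face data:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Column-wise centering: each feature's mean becomes zero
A_feat = A - A.mean(axis=0)

# Row-wise centering: each sample's mean becomes zero
A_row = A - A.mean(axis=1).reshape(A.shape[0], -1)

print(A_feat.mean(axis=0))  # all (numerically) zero
print(A_row.mean(axis=1))   # all (numerically) zero
```

The `reshape` turns the vector of row means into a column so that broadcasting subtracts each row's own mean.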
The preprocessed data can be visualized by repeating the earlier plotting code:
plt.figure(figsize=(14, 8))
for p, i in enumerate(idx_rand):
    plt.subplot(2, 4, p + 1)
    plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')
    plt.axis('off')
plt.savefig('olivetti-pre.png')
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Training and testing the random forest

We continue to follow our best practice to split the data into training and test sets:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=21
)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Then we are ready to apply a random forest to the data:
import cv2

rtree = cv2.ml.RTrees_create()
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Here we want to create an ensemble with 50 decision trees:
num_trees = 50
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
            num_trees, eps)
rtree.setTermCriteria(criteria)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Because we have a large number of categories (that is, 40), we want to make sure the random forest is set up to handle them accordingly:
rtree.setMaxCategories(len(np.unique(y)))
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
We can play with other optional arguments, such as the number of data points required in a node before it can be split:
rtree.setMinSampleCount(2)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
However, we might not want to limit the depth of each tree. This is, again, a parameter we will have to experiment with in the end. But for now, let's set it to a large integer value, making the depth effectively unconstrained:
rtree.setMaxDepth(1000)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Then we can fit the classifier to the training data:
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
We can check the resulting depth of the tree using the following function:
rtree.getMaxDepth()
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
This means that although we allowed the tree to go up to depth 1000, in the end only 25 layers were needed. The evaluation of the classifier is done once again by predicting the labels first (y_hat) and then passing them to the accuracy_score function:
_, y_hat = rtree.predict(X_test)

from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
We find 87% accuracy, which turns out to be much better than with a single decision tree:
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(random_state=21, max_depth=25)
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
Not bad! We can play with the optional parameters to see if we can do better. The most important one seems to be the number of trees in the forest. We can repeat the experiment with a forest of 100 trees:
num_trees = 100
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
            num_trees, eps)
rtree.setTermCriteria(criteria)
rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);

_, y_hat = rtree.predict(X_test)
accuracy_score(y_test, y_hat)
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
mbeyeler/opencv-machine-learning
mit
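The same "vary the number of trees" experiment can also be sketched with scikit-learn's random forest, where the tree count is the `n_estimators` parameter. This is a hedged sketch on synthetic data, not the Olivetti faces and not the OpenCV model used above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic classification data
X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=10, random_state=21)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=21)

# Vary the number of trees, as with num_trees in the OpenCV version
for n in (50, 100):
    forest = RandomForestClassifier(n_estimators=n, random_state=21)
    forest.fit(X_train, y_train)
    print(n, forest.score(X_test, y_test))
```

More trees usually reduce the variance of the ensemble at the cost of training time; the accuracy gain typically flattens out after a few hundred trees.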
Imports, logging, and data

On top of doing the things we already know, we now also need to import the Baseline algorithm, which is conveniently accessible through the bestPy.algorithms subpackage.
from bestPy import write_log_to
from bestPy.datastructures import Transactions
from bestPy.algorithms import Baseline  # Additionally import the baseline algorithm

logfile = 'logfile.txt'
write_log_to(logfile, 20)

file = 'examples_data.csv'
data = Transactions.from_csv(file)
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
Creating a new Baseline object This is really easy. All you need to do is:
algorithm = Baseline()
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
Inspecting the new recommendation object with Tab completion reveals binarize as a first attribute.
algorithm.binarize
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
What its default value of True means is that, instead of judging an article's popularity by how many times it was bought, we are going to count each unique customer only once. How often a given customer bought a given article no longer matters. It's 0 or 1. Hence the attribute's name. You can set it to False if yo...
algorithm.binarize = False
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
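The effect of binarize can be illustrated with plain Python on some made-up transactions (the data here is invented purely for illustration and has nothing to do with examples_data.csv):

```python
from collections import Counter

# (customer, article) pairs; customer 'a' buys article 'x' twice
transactions = [('a', 'x'), ('a', 'x'), ('b', 'x'), ('a', 'y')]

# binarize=False: every purchase counts
raw_counts = Counter(article for _, article in transactions)

# binarize=True: each unique customer counts only once per article
unique_pairs = {(c, a) for c, a in transactions}
unique_counts = Counter(a for _, a in unique_pairs)

print(raw_counts['x'])     # 3 purchases of 'x'
print(unique_counts['x'])  # 2 distinct buyers of 'x'
```

Deduplicating the (customer, article) pairs before counting is exactly the "0 or 1" view described above.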
Up to you to test and act accordingly. And that's it for setting up the configurable parameters of the Baseline algorithm. Without data, there is nothing else we can do for now, other than convincing ourselves that there is indeed no data associated with the algorithm yet.
algorithm.has_data
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
Attaching data to the Baseline algorithm To let the algorithm act on our data, we call its operating_on() method, which takes a data object of type Transactions as argument. Inspecting the has_data attribute again tells us whether we were successful or not.
recommendation = algorithm.operating_on(data)
recommendation.has_data
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
Note: Of course, you can also directly instantiate the algorithm with data attached

```python
recommendation = Baseline().operating_on(data)
```

and configure its parameters (the binarize attribute) later.

Making a baseline recommendation

Now that we have data attached to our algorithm, Tab completion shows us that an additio...
recommendation.for_one()
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
As discussed above, these numbers correspond to either the count of unique buyers or the count of buys, depending on whether the attribute binarize is set to True or False, respectively.
recommendation.binarize = True
recommendation.for_one()
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
And that's all for the baseline algorithm

Remark on the side

What actually happens when you try to set the attribute binarize to something other than the boolean values True or False? Let's try!
recommendation.binarize = 'foo'
examples/04.1_AlgorithmsBaseline.ipynb
yedivanseven/bestPy
gpl-3.0
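bestPy presumably guards this attribute with a validating setter. As a generic sketch of how such a check might look (this is not bestPy's actual code; the class name and error message are made up):

```python
class Guarded:
    """Toy object with a type-checked boolean attribute."""

    def __init__(self):
        self._binarize = True

    @property
    def binarize(self):
        return self._binarize

    @binarize.setter
    def binarize(self, value):
        # Reject anything that is not a true boolean
        if not isinstance(value, bool):
            raise TypeError('Attribute "binarize" must be boolean!')
        self._binarize = value

g = Guarded()
g.binarize = False  # fine
try:
    g.binarize = 'foo'
except TypeError as err:
    print(err)
```

Failing fast on an invalid type keeps the error close to its cause, instead of surfacing much later as a confusing recommendation result.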
Andrews Curves

D. F. Andrews introduced 'Andrews Curves' in his 1972 paper for plotting high-dimensional data in two dimensions. The underlying principle is simple: embed the high-dimensional data in a space of functions and then visualize these functions. Consider a $d$ dimensional data point...
def andrews_curves(data, granularity=1000):
    """
    Parameters
    ----------
    data : array-like
        ith row is the ith observation
        jth column is the jth feature
        Size (m, n) => m replicates with n features
    granularity : int
        linspace granularity for theta

    Ret...
python/AndrewsCurves.ipynb
saketkc/notebooks
bsd-2-clause
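A minimal self-contained version of the Andrews transform can be sketched as follows (this is a sketch under the standard definition $f_x(t) = x_1/\sqrt{2} + x_2\sin t + x_3\cos t + x_4\sin 2t + \dots$, not the truncated function above; the function name is made up):

```python
import numpy as np

def andrews_function(x, t):
    """Evaluate the Andrews curve of a single point x at angles t."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(t, dtype=float)
    result = np.full_like(t, x[0] / np.sqrt(2.0))
    k = 1  # frequency multiplier for the sin/cos pairs
    for i in range(1, len(x)):
        if i % 2 == 1:   # x2, x4, ... pair with sin(k*t)
            result += x[i] * np.sin(k * t)
        else:            # x3, x5, ... pair with cos(k*t), then bump k
            result += x[i] * np.cos(k * t)
            k += 1
    return result

t = np.linspace(-np.pi, np.pi, 5)
print(andrews_function([1.0, 2.0, 3.0, 4.0], t))
```

At $t = 0$ all sine terms vanish, so the curve's value reduces to $x_1/\sqrt{2}$ plus the cosine coefficients, which gives a quick sanity check.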
Andrews Curves for iris dataset
df = pd.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv')
df_grouped = df.groupby('Name')
df_setosa = df.query("Name=='Iris-setosa'")

fig, ax = plt.subplots(figsize=(8,8))
index = 0
patches = []
for key, group in df_grouped:
    group = group.drop('Name', axis=1)
    ...
python/AndrewsCurves.ipynb
saketkc/notebooks
bsd-2-clause
PCA
X = df[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]
y = df['Name'].astype('category').cat.codes
target_names = df['Name'].astype('category').unique()

pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)

fig, ax = plt.subplots(figsize=(8,8))
colors = CB_color_cycle[:3]
lw = 2
for color, i, target_n...
python/AndrewsCurves.ipynb
saketkc/notebooks
bsd-2-clause
We import the raw data as soon as possible into a Pandas DataFrame to avoid custom Python glue code and to be able to use Pandas' standardized data-wrangling methods.
import pandas as pd

# set width of column for nicer output
pd.set_option('max_colwidth', 130)

raw_logs = pd.DataFrame(log_file_paths, columns=['path'])
raw_logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
We clean up these ugly, differing, OS-specific file separators by using a common one.

Note: We could also have used os.sep, which gives us the OS-specific separator. In Windows, this would be \. But if you plan to extract data later, e.g. by regular expressions, this gets really unreadable, because \ is also the c...
raw_logs['path'] = raw_logs['path'].str.replace("\\", "/")
raw_logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
There is a lot of information in the file path alone:

- The last directory in the path contains the name of the build job
- The first part of the file name is the build number
- The second part of the file name is the build id

Let's say we need that information later on, so we extract it with a nice regular expression with name...
# TODO: still uses regex; too slow? Consider "split"?
logs = raw_logs.join(raw_logs['path'].str.extract(
    r"^.*" +
    r"/(?P<jobname>.*)/" +
    r"(?P<build_number>.*?)_" +
    r"(?P<build_id>.*?)_.*\.log$",
    expand=True))
logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
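The named-group extraction can be tried on a made-up path. This sketch assumes a file name of the form `<jobname>/<build_number>_<build_id>_<attempt>.log`, following the description above; the demo path itself is invented:

```python
import pandas as pd

# One hypothetical log path, matching the assumed naming scheme
demo = pd.DataFrame({'path': ['logs/my-job/42_1001_1.log']})

# Named groups become column names in the extracted DataFrame
extracted = demo['path'].str.extract(
    r"^.*/(?P<jobname>.*)/(?P<build_number>.*?)_(?P<build_id>.*?)_.*\.log$",
    expand=True)
print(extracted)
```

The lazy quantifiers (`.*?`) stop at the first underscore, so build number and build id split cleanly even though both are just "anything up to the separator".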
In the case of the Travis build log dumps, we get multiple files for each build run. We just need the first one; that's why we throw away all the other build logs with the same build number.
logs = logs.drop_duplicates(subset=['build_number'], keep='first')
logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
After dropping possible multiple build log files, we can use the build number as new index (aka key) for our DataFrame.
logs = logs.set_index(['build_number'], drop=True)
logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
So far, we've just extracted metadata from the file path of the build log files. Now we are getting to the interesting part: extracting information from the content of the build log files. For this, we need to load the content of the log files into our DataFrame. We do this with a little helper function that simply retu...
def load_file_content(file_path):
    with open(file_path, mode='r', encoding="utf-8") as f:
        return f.read()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
We use the function above in the apply call on the path Series. Note: For many big files, this could take some time to finish.
logs['content'] = logs['path'].apply(load_file_content)
logs.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
Because it could get a little bit confusing with so many columns in a single DataFrame, we delete the path column because it's no longer needed.
log_data = logs.copy()
del(log_data['path'])
log_data.head()
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
Let's have a look at some of the contents of a build log file. This is where the analysis gets very specific, depending on the continuous integration server, the build system, the programming language, and so on. But the main idea is the same: extract some interesting features that show what's going on in your build! Let'...
# TODO put every line in a new row
notebooks/Travis CI Build Breaker Analysis.ipynb
feststelltaste/software-analytics
gpl-3.0
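The TODO above asks to put every log line in a new row. A hedged sketch of how that could look with pandas, splitting each log's content and exploding it into one row per line (the column names and the two-log demo data are illustrative):

```python
import pandas as pd

# Two tiny fake "logs" instead of real Travis build logs
logs = pd.DataFrame({
    'build': ['1', '2'],
    'content': ["line a\nline b", "only line"]
})

# One row per log line, keeping the build key alongside each line
lines = (logs
         .assign(line=logs['content'].str.split('\n'))
         .explode('line')
         .drop(columns='content'))
print(lines)
```

After the explode, per-line feature extraction (e.g. matching error markers with `str.contains`) becomes a plain column operation again.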