Write a function count_words that takes a list of words and returns a dictionary whose keys are the unique words in the list and whose values are the corresponding word counts.
def count_words(data):
    """Return a word count dictionary from the list of words in data."""
    dictionary = {}
    for n in data:
        dictionary[n] = data.count(n)
    return dictionary

assert count_words(tokenize('this and the this from and a a a')) == \
    {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}
assignments/assignment07/AlgorithmsEx01.ipynb
edwardd1/phys202-2015-work
mit
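The `data.count(n)` call inside the loop makes the solution above quadratic; a single-pass sketch using the standard library's `collections.Counter` gives the same dictionary. The notebook's `tokenize` helper is defined elsewhere, so a plain list stands in here:

```python
from collections import Counter

def count_words_fast(data):
    """Return a word count dictionary from the list of words in data (one pass)."""
    return dict(Counter(data))

# Same result as the quadratic version above:
words = ['this', 'and', 'the', 'this', 'from', 'and', 'a', 'a', 'a']
print(count_words_fast(words))
# → {'this': 2, 'and': 2, 'the': 1, 'from': 1, 'a': 3}
```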
Write a function sort_word_counts that returns a list of sorted word counts: Each element of the list should be a (word, count) tuple. The list should be sorted by the word counts, with the highest counts coming first. To perform this sort, look at using the sorted function with a custom key and reverse argument.
def sort_word_counts(wc):
    """Return a list of 2-tuples of (word, count), sorted by count descending."""
    l = [(i, wc[i]) for i in wc]
    return sorted(l, key=lambda x: x[1], reverse=True)

print(sort_word_counts(count_words(tokenize('this and a the this this and a a a'))))

assert sort_word_counts(count_words...
assignments/assignment07/AlgorithmsEx01.ipynb
edwardd1/phys202-2015-work
mit
Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt: Read the file into a string. Tokenize with stop words of 'the of and a to in is it that as'. Perform a word count, then sort and save the result in a variable named swc.
txt = open('mobydick_chapter1.txt', 'r')
x = txt.read()
swc = sort_word_counts(count_words(tokenize(s=x, stop_words=['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a'])))

string = ''
x = tokenize(s=x, stop_words=['the', 'of', 'and', 'to', 'in', 'is', 'it', 'that', 'as', 'a'])
for things in x:
    ...
assignments/assignment07/AlgorithmsEx01.ipynb
edwardd1/phys202-2015-work
mit
Create a "Cleveland Style" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...
x = np.array(swc)
plt.plot(x[0:50, 1], range(50), 'o')

assert True  # use this for grading the dotplot
assignments/assignment07/AlgorithmsEx01.ipynb
edwardd1/phys202-2015-work
mit
Init
import os
import numpy as np
import dill
import pandas as pd
%load_ext rpy2.ipython

%%R
library(ggplot2)
library(plyr)
library(dplyr)
library(tidyr)
library(gridExtra)

if not os.path.isdir(workDir):
    os.makedirs(workDir)
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Determining the probability of detecting the taxa across the entire gradient
# max 13C shift
max_13C_shift_in_BD = 0.036

# min BD (that we care about)
min_GC = 13.5
min_BD = min_GC / 100.0 * 0.098 + 1.66

# max BD (that we care about)
max_GC = 80
max_BD = max_GC / 100.0 * 0.098 + 1.66  # 80.0% G+C
max_BD = max_BD + max_13C_shift_in_BD

## BD range of values
BD_vals = np.arange(min_BD, max_BD, 0.0...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
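The GC-to-buoyant-density conversion used in the cell above (BD = GC/100 × 0.098 + 1.66) can be wrapped in a small helper; a minimal sketch with the same constants, where the helper name is an illustration rather than part of the notebook:

```python
def gc_to_buoyant_density(gc_percent):
    """Convert % G+C to CsCl buoyant density (g/ml) via BD = GC/100 * 0.098 + 1.66."""
    return gc_percent / 100.0 * 0.098 + 1.66

min_BD = gc_to_buoyant_density(13.5)        # lightest BD of interest
max_BD = gc_to_buoyant_density(80) + 0.036  # heaviest, plus the max 13C shift
print(round(min_BD, 4), round(max_BD, 4))   # → 1.6732 1.7744
```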
skewed normal distribution
F = os.path.join(workDir, 'ampFrags_real_kde_dif.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for k, v in kde.items():
    pdf[k] = v.evaluate(BD_vals)
pdf.keys()

df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)

%%R -i df -w 800 -h 350
df.g...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
small uniform distribution
F = os.path.join(workDir, 'ampFrags_sm_kde_dif.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for k, v in kde.items():
    pdf[k] = v.evaluate(BD_vals)
pdf.keys()

df = pd.DataFrame(pdf)
df['BD'] = BD_vals
df.head(n=3)

%%R -i df -w 800 -h 350
df.g =...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Notes
Even with fragment sizes of 1-2 kb, the taxa would likely not be detected even if the gradient contained 1e9 16S copies of the taxon. Does this make sense given the theory of diffusion used?
with DBL 'smearing'
Determining the probability of detecting in all fragments
skewed normal distribution
BD_vals = np.arange(min_BD, max_BD, 0.001)

F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for k, v in kde.items():
    for kk, vv in v.items():
        pdf[kk] = vv.evaluate(BD_vals)
pdf.keys()

df ...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Notes
Even if 1% of the DNA is in the DBL (and then diffuses back into the gradient), the probability of detecting a taxon in all the gradient positions is >= 1e-7. This is feasible for matching the empirical data!
Combined plot
%%R -w 800 -h 300
# plot formatting
title.size = 16
p2.skn.f = p2.skn + ggtitle('Gaussian BD') +
    theme(plot.title = element_text(size=title.size))
p2.skn.dbl.f = p2.skn.dbl + ggtitle('Gaussian BD + DBL') +
    theme(plot.title = element_text(size=title.size))

# combined plot
#p.c...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Combined plot (v2)
%%R -w 800 -h 300
p1.skn.e = p1.skn + scale_x_continuous(limits=c(1.675, 1.775))
p2.skn.e = p2.skn + scale_x_continuous(limits=c(1.675, 1.775)) +
    scale_y_log10(limits=c(1e-12, 150))
p1.skn.dbl.e = p1.skn.dbl + scale_x_continuous(limits=c(1.675, 1.775))
p2.skn.dbl.e = p2.skn.dbl + scale_x_continuous...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
small fragment size distribution
BD_vals = np.arange(min_BD, max_BD, 0.001)

F = os.path.join(workDir, 'ampFrags_sm_kde_dif_DBL.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for k, v in kde.items():
    pdf[k] = v.evaluate(BD_vals)
pdf.keys()

df = pd.DataFrame(pdf)
df['BD'] = BD_va...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
with DBL 'smearing' (smaller DBL)
Determining the probability of detecting in all fragments
skewed normal distribution
BD_vals = np.arange(min_BD, max_BD, 0.001)

F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL_fa1e-4.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for k, v in kde.items():
    pdf[k] = v.evaluate(BD_vals)
pdf.keys()

df = pd.DataFrame(pdf)
df['BD'...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
DBL with abundance-weighted smearing
BD_vals = np.arange(min_BD, max_BD, 0.001)

F = os.path.join(workDir, 'ampFrags_real_kde_dif_DBL-comm.pkl')
with open(F, 'rb') as inFH:
    kde = dill.load(inFH)
kde

# probability at each location in gradient
pdf = {}
for libID, v in kde.items():
    for taxon, k in v.items():
        pdf[taxon] = k.evaluate(BD_vals)
pd...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Plotting pre-frac abundance vs heavy fraction P
%%R -i workDir
F = file.path(workDir, 'comm.txt')
df.comm = read.delim(F, sep='\t') %>%
    mutate(rel_abund = rel_abund_perc / 100)
df.comm %>% print

df.g.s = df.g %>%
    filter(BD > 1.75) %>%
    group_by(BD) %>%
    mutate(P_rel_abund = P / sum(P)) %>%
    group_by(taxon_name) %>%
    summarize(mean_P = mean(P)) ...
ipynb/bac_genome/n3/probability_of_frag_detect.ipynb
nick-youngblut/SIPSim
mit
Load the PDF in PDFPlumber:
pdf = pdfplumber.open("2014-bulletin-first-10-pages.pdf")
print(len(pdf.pages))
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
Let's look at the first 15 characters on the first page of the PDF:
first_page = pdf.pages[0]
chars = pd.DataFrame(first_page.chars)
chars.head(15)
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
Extract the precinct ID. The corresponding characters are about 37–44 pixels from the top, and on the left half of the page.
pd.DataFrame(first_page.crop((0, 37, first_page.width / 2, 44)).chars)

def get_precinct_id(page):
    cropped = page.crop((0, 37, page.width / 2, 44))
    text = "".join((c["text"] for c in cropped.chars))
    trimmed = re.sub(r" +", "|", text)
    return trimmed

for page in pdf.pages:
    print(get_precinct_id(pag...
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
We can do the same for the number of ballots cast
def get_ballots_cast(page):
    cropped = page.crop((0, 48, page.width / 3, 60))
    text = "".join((c["text"] for c in cropped.chars))
    count = int(text.split(" ")[0])
    return count

for page in pdf.pages:
    print(get_ballots_cast(page))
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
... and for the number of registered voters in each precinct
def get_registered_voters(page):
    cropped = page.crop((0, 62, page.width / 3, 74))
    text = "".join((c["text"] for c in cropped.chars))
    count = int(text.split(" ")[0])
    return count

for page in pdf.pages:
    print(get_registered_voters(page))
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
Getting the results for each race is a bit trickier The data representation isn't truly tabular, but it's structured enough to allow us to create tabular data from it. This function divides the first column of the result-listings into columns (explicitly defined, in pixels) and rows (separated by gutters of whitespace)...
def get_results_rows(page):
    first_col = page.crop((0, 77, 212, page.height))
    table = first_col.extract_table(
        v=(0, 158, 180, 212),
        h="gutters",
        x_tolerance=1)
    return table

get_results_rows(first_page)
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
Let's restructure that slightly, so that each row contains information about the relevant race:
def get_results_table(page):
    rows = get_results_rows(page)
    results = []
    race = None
    for row in rows:
        name, affil, votes = row
        if name == "VOTER NOMINATED":
            continue
        if votes is None:
            race = name
        else:
            results.append((race, name, affil, int(votes)))...
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
From there, we can start to do some calculations:
def get_jerry_brown_pct(page):
    table = get_results_table(page)
    brown_votes = table[table["name"] == "EDMUND G BROWN"]["votes"].iloc[0]
    kashkari_votes = table[table["name"] == "NEEL KASHKARI"]["votes"].iloc[0]
    brown_prop = float(brown_votes) / (kashkari_votes + brown_votes)
    return (100 * brown_prop)....
examples/la-precinct-bulletin/la-precinct-bulletin.ipynb
jsfenfen/parsing-prickly-pdfs
bsd-2-clause
Computation
This notebook reads data from the file:
data_file = 'results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv'
data = pd.read_csv(data_file).set_index('sample')
data
data[['E_gauss_w', 'E_kde_w', 'S_gauss']]

E_ref, S_ref = data.E_gauss_w, data.S_gauss
res = linregress(E_ref, 1/S_ref)
slope, intercept, r_val, p_val, stderr = res
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
For more info see scipy.stats.linregress.
Sigma = slope
Sigma
Omega = intercept
Omega
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
Pearson correlation coefficient:
r_val
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
Coefficient of determination $R^2$:
r_val**2
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
P-value (to test the null hypothesis that the slope is zero):
p_val
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
Gamma computed from the previous fitted values:
gamma = (Omega - 1) / (Omega + Sigma - 1)
'%.6f' % gamma

with open('results/usALEX - gamma factor - all-ph.csv', 'w') as f:
    f.write('%.6f' % gamma)

beta = Omega + Sigma - 1
'%.6f' % beta

with open('results/usALEX - beta factor - all-ph.csv', 'w') as f:
    f.write('%.6f' % beta)
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
Fit plot
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
%config InlineBackend.figure_format='retina'  # for hi-dpi displays
sns.set_style('whitegrid')

x = np.arange(0, 1, 0.01)
plt.plot(E_ref, 1/S_ref, 's', label='dsDNA samples')
plt.plot(x, intercept + slope*x, 'k', label='fit (slope = %.2f)' % slop...
usALEX - Corrections - Gamma factor fit.ipynb
tritemio/multispot_paper
mit
Intro
This notebook is a utility for text analysis. The goal is to get some insight into the structure, content and quality of a text. Examples: analysis of CVs, personal articles, job ads.
Load Text
Load the text you want to analyse
def load_text(filepaths):
    """
    Load text you want to analyse.

    :param filepaths: list of paths to text files to load
    :return: single string representing all retrieved text
    """
    text = ""
    for path in filepaths:
        with open(path, 'r', encoding='UTF-8') as f:
            text += "\n" + f.read(...
src/test/textStatsTool.ipynb
5agado/conversation-analyzer
apache-2.0
Basic Stats
Length, token counts, lexical richness, n-gram distribution and most relevant features.
words = statsUtil.getWords(text)
types = set(words)
print("Total length: {:.0f}".format(len(text)))
print("Tokens count: {:.0f}".format(len(words)))
print("Distinct tokens count: {:.0f}".format(len(set(words))))
print("Lexical richness: {0:.5f}".format(len(types)/len(words)))

def plot_most_common(most_common_ngrams, ...
src/test/textStatsTool.ipynb
5agado/conversation-analyzer
apache-2.0
Prose Stats “Over the whole document, make the average sentence length 15-20 words, 25-33 syllables and 75-100 characters.”
# prose stats
sentences = list(filter(lambda x: len(x) > 0, map(str.strip, re.split(r'[\.\?!\n]', text))))
sen_len = [len(sent) for sent in sentences]
print("Average sentence len {}. Max {}, min {}".format(np.mean(sen_len), max(sen_len), min(sen_len)))
for sent in sentences:
    if len(sent) > 300:
        print("* " + s...
src/test/textStatsTool.ipynb
5agado/conversation-analyzer
apache-2.0
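The cell above measures sentence length in characters, while the quoted guideline is phrased in words as well as characters. A hedged sketch of a word-based variant, using the same regex sentence split as the notebook (the helper name and demo string are illustrative only):

```python
import re

def sentence_word_stats(text):
    """Return (mean words per sentence, mean chars per sentence),
    splitting sentences on . ? ! and newlines as in the cell above."""
    sentences = [s for s in map(str.strip, re.split(r'[.?!\n]', text)) if s]
    word_counts = [len(s.split()) for s in sentences]
    char_counts = [len(s) for s in sentences]
    return sum(word_counts) / len(sentences), sum(char_counts) / len(sentences)

print(sentence_word_stats("One two three. Four five? Six seven eight nine!"))
# → (3.0, 14.0)
```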
VGG
(val_classes, train_classes, val_labels, train_labels,
 val_filenames, train_filenames, test_filenames) = get_classes('data/fish/')
print(val_classes)
print(train_classes)
print(val_labels)
print(train_labels)
print(val_filenames)
print(train_filenames)
print(test_filenames)

# removing path
remove_path = lambda y...
deeplearning1/nbs-custom-mine/lesson7_02_practice.ipynb
sainathadapa/fastai-courses
apache-2.0
Precompute convolutional output
model.compile(optimizer=Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
conv_train_features = model.predict(train_data)
conv_test_features = model.predict(test_data)
conv_val_features = model.predict(val_data)
deeplearning1/nbs-custom-mine/lesson7_02_practice.ipynb
sainathadapa/fastai-courses
apache-2.0
Fully convolutional net (FCN) Since we're using a larger input, the output of the final convolutional layer is also larger. So, we probably don't want to put a dense layer in there - that would be a lot of parameters! Instead, let's use a fully convolutional net (FCN); this also has the benefit that they tend to genera...
lrg_model = Sequential([
    BatchNormalization(axis=1, input_shape=model.output_shape[1:]),
    Convolution2D(128, 3, 3, activation='relu', border_mode='same'),
    BatchNormalization(axis=1),
    MaxPooling2D(),
    Convolution2D(128, 3, 3, activation='relu', border_mode='same'),
    BatchNormalization(axis=1),
    M...
deeplearning1/nbs-custom-mine/lesson7_02_practice.ipynb
sainathadapa/fastai-courses
apache-2.0
Bounding boxes and Multi-Output
import ujson as json

anno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']
bb_json = {}
for c in anno_classes:
    j = json.load(open('{}annos/{}_labels.json'.format('data/fish/', c), 'r'))
    for l in j:
        if 'annotations' in l.keys() and len(l['annotations']) > 0:
            bb_json[l['filename']...
deeplearning1/nbs-custom-mine/lesson7_02_practice.ipynb
sainathadapa/fastai-courses
apache-2.0
Note how the elements of sin(y) are in increasing order. Standard import statement
from openanalysis.sorting import SortingAlgorithm, SortAnalyzer
import numpy as np  # for doing vstack()
doc/OpenAnalysis/02 - Sorting.ipynb
OpenWeavers/openanalysis
gpl-3.0
SortingAlgorithm is the base class providing the standard interface for implementing sorting algorithms; SortAnalyzer visualizes and analyses an algorithm.
SortingAlgorithm class
Any sorting algorithm to be implemented has to be derived from this class. Now we shall see the data members and member functions of this class. D...
class BubbleSort(SortingAlgorithm):  # Derived from SortingAlgorithm
    def __init__(self):
        SortingAlgorithm.__init__(self, "Bubble Sort")  # Initializing with name

    def sort(self, array, visualization=False):  # MUST have this signature
        SortingAlgorithm.sort(self...
doc/OpenAnalysis/02 - Sorting.ipynb
OpenWeavers/openanalysis
gpl-3.0
SortAnalyzer class
This class provides the visualization and analysis methods. Let's see its methods in detail:
__init__(self, sorter): Initializes visualizer with a Sorting Algorithm. sorter is a class, which is derived from SortingAlgorithm
visualize(self, num=100, save=False): Visualizes the given algorithm wi...
bubble_visualizer = SortVisualizer(BubbleSort)
bubble_visualizer.efficiency()
doc/OpenAnalysis/02 - Sorting.ipynb
OpenWeavers/openanalysis
gpl-3.0
Q: Write a Python script to add a key to a dictionary. e.g. Sample Dictionary: {0: 10, 1: 20} Expected Result: {0: 10, 1: 20, 2: 30}
a = {0: 10, 1: 20}
a[2] = 30
print(a)
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Q: Write a Python script to concatenate the following dictionaries to create a new one. e.g: Sample Dictionaries: dic1={1:10, 2:20} dic2={3:30, 4:40} dic3={5:50, 6:60} Expected Result: {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}
names1 = {1: 10, 2: 20}
names2 = {3: 30, 4: 40}
names3 = {5: 50, 6: 60}
# names1.update(names2)

new_dict = {}
for ls in (names1, names2, names3):
    new_dict.update(ls)
print(new_dict)

d1 = {1: 2, 3: 4}
d2 = {5: 6, 7: 9}
d3 = {10: 8, 13: 22}
d4 = dict(d1)
d4.update(d2)
d4.update(d3)
print(d4)
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Q: Write a Python script to check if a given key already exists in a dictionary.
d = {1: 2, 3: 4, 5: 6, 7: 9, 10: 8, 13: 22}  # avoid shadowing the built-in dict
key = 11
if key in d:
    print("key found")
else:
    print("key not found")
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Q: Write a Python program to iterate over dictionaries using for loops. Ans: see the examples above. Q: Write a Python script to generate and print a dictionary that contains the numbers between 1 and n in the form (x, x*x). Sample Dictionary (n = 5): Expected Output: {1: 1, 2: 4, 3: 9, 4: 16, 5: 25} and Write a Py...
n = 5
a = {b: b*b for b in range(1, n + 1)}
print(a)
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Q: Write a Python script to sort (ascending and descending) a dictionary by value.
# Ans (ascending)
e = {1: 39, 8: 110, 4: 34, 3: 87, 7: 110, 2: 87}
sortE = sorted(e.items(), key=lambda value: value[1])
print(sortE)

# Ans (descending)
e = {1: 39, 8: 110, 4: 34, 3: 87, 7: 110, 2: 87}
sortE = sorted(e.items(), key=lambda value: value[1], reverse=True)
print(sortE)
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
Q: Write a Python script to merge two Python dictionaries. TIP: use update Q: Write a Python program to sum all the items in a dictionary.
e = {1: 39, 8: 110, 4: 34, 3: 87, 7: 110, 2: 87}
print(sum(e.values()))
Section 1 - Core Python/Chapter 05 - Data Types/5.0.1 Answers - Data_Type.ipynb
mayank-johri/LearnSeleniumUsingPython
gpl-3.0
This assignment In this assignment we will study a fundamental problem in crawling the web: deciding when to re-index changing pages. Search engines like those deployed at Google and Microsoft periodically visit, or crawl, web pages to check their current status and record any changes to the page. Some pages, such a...
# Use this cell to examine the dataset, if you like.
!head -n 25 crawl.json

# This cell loads the JSON data into Python.
import json
with open("crawl.json", "r") as f:
    crawl_json = json.load(f)
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 1 Fill in the function json_description to determine: the number of records, and the set of possible top level keys (field names) in each record.
def json_description(crawl_json_records):
    """Produces information about a JSON object containing crawl data.

    Args:
        crawl_json_records (list): A list of JSON objects such as the
            crawl_json variable above.

    Returns:
        2-tuple: An (int, set) pair. The intege...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
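One plausible reading of the spec above, taking "the set of possible top level keys in each record" as the union of keys across records; this is an unofficial sketch, not the assignment's reference solution:

```python
def json_description_sketch(records):
    """Return (number of records, set of all top-level keys seen across records)."""
    keys = set()
    for record in records:
        keys.update(record.keys())
    return len(records), keys

# Tiny demo with hypothetical records:
demo = [{"url": "a", "hour": 1}, {"url": "b", "updated": True}]
count, keys = json_description_sketch(demo)
print(count, sorted(keys))  # → 2 ['hour', 'updated', 'url']
```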
Question 2 What is the granularity of the dataset as represented in crawl.json? Write a one to two sentence description in the cell below:
# Use this cell for your explorations.

q2_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q2_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 3 It will be more convenient to work with the data in Pandas DataFrame with a rectangular format. Fill in the function make_crawls_dataframe in the cell below. Then run the cell below that to create the table crawls.
def make_crawls_dataframe(crawl_json_records):
    """Creates a Pandas DataFrame from the given list of JSON records.

    The DataFrame corresponds to the following relation:

        crawls(primary key (url, hour), updated)

    Each hour in which a crawl happened for a page (regardless of whether it found a ...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 4 There are other reasonable ways to represent the data in a relational database (or in DataFrames). One alternate schema uses 2 relations to represent the data. The first relation is: crawls_that_found_changes(primary key (url, hour)) The primary key for the second relation is url. Define a schema for tha...
# Use this cell for your explorations. Write the schema
# in text in the string below.

q4_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q4_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 5 In the following we will construct visualizations to understand several key quantities of interest. The following cell constructs some key summary statistics that we will be using in the next few questions.
crawl_stats = (
    crawls['updated']
    .groupby(crawls.index.get_level_values('url'))
    .agg({
        'number of crawls': 'count',
        'proportion of updates': 'mean',
        'number of updates': 'sum'
    })
)
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 1: What was the distribution of the number of crawls for each page? Did most get crawled all 719 times? (For this and the other parts of this question, create a visualization to answer the question.)
...

# Leave this for grading purposes
q5_p1_plot = plt.gcf()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 2 What was the distribution of the number of positive checks for each page?
...

# Leave this for grading purposes
q5_p2_plot = plt.gcf()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 3 What is the relationship between the number of crawls for each page and the number of positive checks? Construct a scatter plot relating these two quantities.
...

# Leave this for grading purposes
q5_p3_plot = plt.gcf()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 4 In 2 or 3 sentences, describe what you discovered in your initial explorations.
q5_p4_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q5_p4_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Making a timeline from one site It will be useful to be able to look at timelines of positive checks or changes for sites. The function display_points, defined below, will help.
def display_points(points, xlim, title):
    """Displays a timeline with points on it.

    Args:
        points (ndarray): A list of floats in the range [xlim[0], xlim[1]],
            each one a point to display in the timeline.
        xlim (list-like): A list/tuple/array with 2 elements giving the ...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Example Usage
display_points([1,4,30,50], [0, 75], "Example")
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 6 We want to understand the behavior of page changes in order to determine how often a page should be visited. To do this, we need more than summary statistics of the 1000 sites. We also need to examine the patterns of positive checks for sites. Let's examine when the positive checks occurred for a handful of...
# This cell identifies a few categories of pages and
# associates a label in the 'Desc' column of crawl_stats
crawl_stats['Desc'] = ""  # default blank description

# Normal pages have a moderate number of updates and
# were successfully crawled most times.
crawl_stats.loc[
    (crawl_stats['proportion of updates...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Examining the Times of Positive Checks Consider what you learned about the positive checks in the previous question. Do these occur at regular intervals? Or do they appear to occur more randomly in time? Write a 2 to 3 sentence explanation of what you see:
# Use this cell for your explorations.

q6_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q6_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Understanding this distribution of positive checks will help us determine the distribution of changes. However, determining this distribution is difficult to do if we do not crawl the page sufficiently often. (We will soon see that it is also hard to do if the site changed too many times.) For these reasons, we will fo...
def compute_q_q_pairs(url):
    """Computes lists of uniform-distribution-quantiles and data-quantiles for a page.

    Args:
        url (str): The URL of a page.

    Returns:
        2-tuple: A pair (AKA a 2-tuple). Both components are lists or arrays
            of length n, where n is the number of positiv...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
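A hedged sketch of the Q-Q pairing itself, independent of the crawl data: sort the observed event times, rescale them to [0, 1], and pair them with evenly spaced uniform quantiles. The (i + 0.5)/n plotting positions are one common convention; the assignment may specify another, and the helper name is illustrative:

```python
import numpy as np

def uniform_qq_pairs(times, t_max):
    """Return (uniform quantiles, observed quantiles) for event times on [0, t_max]."""
    obs = np.sort(np.asarray(times, dtype=float)) / t_max  # rescale to [0, 1]
    n = len(obs)
    uniform = (np.arange(n) + 0.5) / n                     # plotting positions
    return uniform, obs

u, o = uniform_qq_pairs([10, 40, 20, 70], 100)
print(u)  # [0.125 0.375 0.625 0.875]
print(o)  # [0.1 0.2 0.4 0.7]
```

If the two lists track each other closely, the event times are consistent with a uniform distribution, which is what the regplot in the next cell checks visually.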
The following code constructs the final Q-Q plot for the "Normal" sites defined above
for url in pages_to_display[pages_to_display['Desc'] == "Normal"].index.values:
    print(url)
    (pdf_q, obs_q) = compute_q_q_pairs(url)
    sns.regplot(pdf_q, np.array(obs_q))
    plt.xlabel("True uniform quantile")
    plt.ylabel("Actual quantile")
    plt.show()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 2 Describe your findings.
# Use this cell for your explorations.

q7_p2_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q7_p2_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 8 Even if the updates were distributed uniformly, the Q-Q plot will not appear as exactly a straight line. To get a sense of what a uniform-quantile plot might look like if the data were truly distributed according to the uniform distribution, simulate $n$ observations from the uniform distribution on the int...
url = 'http://a.hatena.ne.jp/Syako/simple'
n = np.count_nonzero(crawls.loc[url]['updated'])
...

# Leave this for grading purposes
q8_plot = plt.gcf()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
How do the empirical quantile plots from the previous question compare to your simulated quantile plot? Optionally, we suggest looking at a few other sites and a few other simulated sets of data to see how well they match.
# Use this cell for your explorations.

q8_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q8_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Estimating the Update Rate: A Simple Approach How would you estimate the change rate for a page? For example, imagine that in 720 hourly visits to a page, we observed a change in the page on 36 visits. One estimate for the rate of changes is: $$\frac {36\text{ changes}}{720\text{ hours}} = \frac{1}{20} \text{ changes ...
q9_answer = r"""
Put your answer here and delete these two sentences. Some steps are already
filled in to get you started; the '...' indicate where you need to fill in
one or more lines.

**Step 1.** The probability of the data given $\lambda$ is:

$$L(\lambda) = e^{-(\lambda N)} \frac{(\lambda N)^{n}}{n!}$$

...

Th...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
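That n/N is the maximizer can be checked numerically: using the likelihood $L(\lambda) = e^{-\lambda N} (\lambda N)^n / n!$ from the template above, the log-likelihood at λ = n/N should beat nearby values. A minimal sketch with the n = 36, N = 720 example from the text:

```python
import math

def poisson_log_likelihood(lam, n, N):
    """log L(lambda) = -lambda*N + n*log(lambda*N) - log(n!)."""
    return -lam * N + n * math.log(lam * N) - math.lgamma(n + 1)

n, N = 36, 720
mle = n / N  # = 1/20 changes per hour
for lam in (0.9 * mle, mle, 1.1 * mle):
    print(lam, poisson_log_likelihood(lam, n, N))
```

The middle value (the MLE) should print the largest log-likelihood of the three.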
Question 10 Add a 'simple mle' column to the crawl_stats table containing the 'simple mle' estimator we derived earlier for each website. Then, make a plot that displays the distribution of these MLEs for the sites with at least 700 crawls.
...

# Leave this at the end so we can grab the plot for grading
q10_plot = plt.gcf()

_ = ok.grade('q10')
_ = ok.backup()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
The Impact of Hourly Observations The histogram that you made for the previous problem has a small mode at 1. Why is this? It is not possible for our estimate of the rate of changes to be greater than once an hour because we only observe the page once an hour. So if the rate of changes is large, we are likely to underestim...
def sample_poisson_process(rate, length):
    """Generates n points from Poisson(rate*length) and locates them
    uniformly at random on [0, length].

    Args:
        rate (float): The average number of points per unit length.
        length (float): The length of the line segment.

    Returns:
        ndarray: A...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
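The docstring suggests the following shape; this is an unofficial sketch (the assignment's reference implementation may differ, e.g. in whether the points come back sorted):

```python
import numpy as np

def sample_poisson_process_sketch(rate, length, rng=None):
    """Draw n ~ Poisson(rate * length) event times, placed uniformly at
    random on [0, length], returned sorted."""
    if rng is None:
        rng = np.random.default_rng()
    n = rng.poisson(rate * length)
    return np.sort(rng.uniform(0, length, size=n))

times = sample_poisson_process_sketch(rate=0.5, length=24, rng=np.random.default_rng(0))
print(times.size, bool(np.all((times >= 0) & (times <= 24))))
```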
The snap_times function in the next cell will help you simulate the hourly observations from the crawler.
def snap_times(update_times, window_length, process_length):
    """Given a list of change times, produces a list of the windows that
    detected a change (that is, where at least one change happened inside
    the window). This has the effect of 'snapping' each change to the next
    hour (or whatever the window...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
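A hedged sketch of the 'snapping' described above: each change time maps to the end of its window, and duplicate windows collapse, so the result is the set of windows in which at least one change was seen. Clipping changes past process_length is an assumption here, and the helper name is illustrative:

```python
import numpy as np

def snap_times_sketch(update_times, window_length, process_length):
    """Return the sorted, unique window-end times containing >= 1 change."""
    t = np.asarray(update_times, dtype=float)
    t = t[t <= process_length]                       # ignore changes past the end
    windows = np.ceil(t / window_length) * window_length
    return np.unique(windows)

print(snap_times_sketch([0.2, 0.7, 1.3, 2.9], window_length=1, process_length=3))
# → [1. 2. 3.]
```

Note how 0.2 and 0.7 fall in the same window: two changes within an hour are observed as a single positive check, which is exactly the censoring the text discusses.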
Question 12 Use the functions sample_poisson_process and snap_times to examine what happens when we visit hourly. Look at examples where * the length of time is 24 hours, * the rate is 1/8, 1/4, 1/2, 1, and 2 changes per hour, and * the window_length is 1. For each value of rate, simulate one set of change on the inte...
# Use this cell for your explorations, then write your conclusions
# in the string below.

q12_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q12_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Simulation study of the rate estimate Now let us extend our Monte Carlo study to compute the MLE from the observed positive checks. Since we know the true change rate in the simulations, we can compare the MLE to the true change rate and see how accurate it is as an estimator. (We couldn't have done that with our rea...
def simulate_censored_rate_estimate(true_rate, num_visits):
    """Simulates updates to a page and visits by a web crawler, and
    returns the proportion of visits in which an update was observed.

    Args:
        true_rate (float): The average number of updates per unit
            length of time. (...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
The following helper function can be used to simulate many rate estimates using the function you just completed.
def simulate_many_rate_estimates(true_rate, num_visits):
    return np.mean([simulate_censored_rate_estimate(true_rate, num_visits)
                    for _ in range(1000)])
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
The following code will simulate rate estimates for various values of $\lambda$ (this cell may take up to 1 minute to run):
num_visits = 719
rates = list(np.arange(0, 4, .2))
estimates = [simulate_many_rate_estimates(r, num_visits) for r in rates]
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Finally, the following code will plot the estimated values for $\lambda$ against their true values. You will need this plot to answer the next question.
plt.plot(rates, estimates, label='average estimate');
plt.plot([rates[0], rates[-1]], [rates[0], rates[-1]],
         linestyle='dashed', color='g', label='true rate')
plt.xlabel("True rate of updates per hour")
plt.ylabel("Average estimate of the rate of updates per hour")
plt.legend();

_ = ok.grade('q13')
_ = ok....
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 14 Part 1 Explain why the estimated rates seem to level off at 1.
q14_p1_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q14_p1_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Part 2 How far off is the estimate from the truth for $\lambda$ less than 0.25?
q14_p2_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q14_p2_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Estimating the update rate: An improved approach Our simulation study has shown us that our MLE estimate for the rate is biased. It systematically underestimates the quantity we're trying to estimate, the rate. This bias is small for small $\lambda$, but can we eliminate it? We can recast the problem slightly by recon...
q15_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q15_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
To check your answers, fill in the numerical values of each probability, for $\lambda=0.5$
# The probability that a Poisson(0.5) random variable is equal to 0
Prob_pois_half_equals_0 = ...

# The probability that a Poisson(0.5) random variable is greater than or equal to 1
Prob_pois_half_gte_1 = ...

_ = ok.grade('q15')
_ = ok.backup()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
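For reference, both probabilities follow directly from the Poisson pmf: $P(X = 0) = e^{-\lambda}$, and $P(X \geq 1)$ is its complement. A quick sanity check with $\lambda = 0.5$:

```python
import math

lam = 0.5
p_zero = math.exp(-lam)   # P(Poisson(0.5) = 0) = e^{-0.5}
p_gte_one = 1 - p_zero    # P(Poisson(0.5) >= 1), the complement
print(round(p_zero, 4), round(p_gte_one, 4))  # 0.6065 0.3935
```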
Question 16 With this modified model, the MLE of $\lambda$, given $n$ updates in $N$ visits for a site, is: $$\lambda^* = \log \left(\frac{n}{N - n} + 1 \right)$$ Show that that's true. Hint: If you find an expression like $\frac{e^{-\lambda}}{1 - e^{-\lambda}}$, it often helps to convert such an expression to $\frac{1...
q16_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q16_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
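One way to see why this closed form is plausible: under the modified model each visit is a Bernoulli trial with success probability $p = 1 - e^{-\lambda}$, the binomial MLE of $p$ is $\hat{p} = n/N$, and inverting gives $\lambda^* = -\log(1 - n/N) = \log\!\left(\frac{n}{N-n} + 1\right)$. The equivalence can be checked numerically (the counts below are hypothetical, chosen only for illustration):

```python
import math

def modified_mle(n, N):
    """lambda* = log(n / (N - n) + 1) for n positive checks in N visits."""
    return math.log(n / (N - n) + 1)

# Hypothetical counts for illustration: 300 positive checks in 719 visits.
n, N = 300, 719
lam_star = modified_mle(n, N)

# Equivalent form: invert p = 1 - exp(-lambda) at the binomial MLE p-hat = n/N.
assert abs(lam_star - (-math.log(1 - n / N))) < 1e-12
print(round(lam_star, 3))  # ≈ 0.54
```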
What happens when we observe a change every hour? Then, our MLE $\lambda^*$ takes the value $\infty$. Why is that true? Intuitively, if there is a change on each visit, we can always increase the likelihood of our data (which is the likelihood that every hour saw at least one change) by increasing $\lambda$. An MLE ...
...

_ = ok.grade('q17')
_ = ok.backup()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Notice that this distribution is quite different from the earlier one we found in question 10. Now there is no pile-up of estimates at 1. How accurate are our estimates? We don't know the true update rates of the pages, so we can't know exactly how accurate our estimates are. But this is often the case in data sci...
def simulate_change_rate_estimate(change_rate, num_observations, estimator):
    """Simulates hourly change observations for a website and produces
    an estimate of the change rate.

    Args:
        change_rate (float): The hourly change rate of the website to be
            used in the simulation....
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Question 19 Complete the definition of plot_bootstrap_rate_estimates in the cell below. Read the docstring carefully so you know what to do. Then run the provided cell below that to plot the bootstrap rate estimate distributions for a few pages.
def plot_bootstrap_rate_estimates(page_url, num_simulations):
    """Simulates positive check observations for a website many times.
    For each simulation, the page's change rate is estimated based on the
    simulated positive checks. The estimation method is the modified_mle
    function implemented above. Then ...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
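The parametric bootstrap described above can be sketched without any real page data: fit $\hat\lambda$ with the modified MLE, resimulate the number of positive checks at that rate, and re-estimate for each simulated dataset. The counts below are hypothetical, and this sketch works with raw counts rather than a `page_url`:

```python
import numpy as np

def bootstrap_rate_estimates(positive_checks, num_visits, num_simulations, seed=0):
    """Parametric bootstrap sketch: fit lambda-hat with the modified MLE,
    resimulate the number of positive checks at that rate, and re-estimate
    lambda on each simulated dataset."""
    rng = np.random.RandomState(seed)
    lam_hat = np.log(positive_checks / (num_visits - positive_checks) + 1)
    p_hat = 1 - np.exp(-lam_hat)  # chance any single hourly check is positive
    resampled = rng.binomial(num_visits, p_hat, size=num_simulations)
    return np.log(resampled / (num_visits - resampled) + 1)

# Hypothetical page: 300 positive checks in 719 hourly visits.
estimates = bootstrap_rate_estimates(300, 719, num_simulations=1000)
print(round(estimates.mean(), 2), round(estimates.std(), 2))
```

The spread of these bootstrap estimates around $\hat\lambda$ is what the histogram in this question visualizes.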
Looking at the error distribution is good, but it's also useful to characterize error with a single number. (This facilitates comparisons among different estimators, for example.) A common way to summarize error in estimation is the root mean squared error, or RMSE. The RMSE is the square root of the MSE, or mean squ...
def estimate_rmse(page_url, num_simulations):
    """Simulates update observations for a website many times. For each
    simulation, the page's change rate is estimated based on the simulated
    observations. (The estimation method is the modified MLE.) Then this
    function produces an estimate of the RMSE of es...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
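The RMSE definition in the text translates directly to code. A minimal sketch with toy numbers (illustrative only, not from the homework data):

```python
import numpy as np

def rmse(estimates, true_value):
    """Root mean squared error of a set of estimates against the
    (known, simulated) true value."""
    estimates = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((estimates - true_value) ** 2))

# Toy example: four estimates of a true rate of 1.0.
print(round(rmse([0.9, 1.1, 1.0, 1.2], 1.0), 4))  # 0.1225
```

Because of the square root, RMSE is on the same scale as the quantity estimated (updates per hour here), which makes it easier to interpret than MSE.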
Question 21 Create a visualization to display the RMSEs you computed. Then, create another visualization to see the relationship (across the 1000 pages) between RMSE and the modified MLE.
...
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Is the model reasonable? All the foregoing work has assumed a particular probability model for website updates: the number of changes follows a Poisson($\lambda$) distribution and the locations of the changes, given the number of changes, follows a uniform distribution. Like most models, this is certainly imperfect. ...
# Feel free to use this cell to experiment. Then write your answer
# in the string below.
q22_answer = r"""
Put your answer here, replacing this text.
"""
display(Markdown(q22_answer))
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
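One informal check of the Poisson assumption, sketched here on synthetic data (an assumption, since the question leaves the experimentation open): under the model, gaps between successive changes are Exponential($\lambda$), so with real update timestamps, a mean gap far from $1/\lambda$ or strong autocorrelation among gaps would cast doubt on the model.

```python
import numpy as np

# Under the Poisson model, gaps between successive changes should look
# Exponential(lambda); with synthetic gaps we can confirm the mean is
# close to 1/lambda. The same comparison applied to real timestamps is
# one way to probe whether the model fits.
rng = np.random.RandomState(0)
lam = 0.5
gaps = rng.exponential(1 / lam, size=1000)
print(round(gaps.mean(), 1))  # should be close to 1/lambda = 2 hours
```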
Submitting your assignment Congratulations, you're done with this homework! Run the next cell to run all the tests at once.
_ = ok.grade_all()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
Finally, run the next cell to submit the assignment to OkPy so that the staff will know to grade it. You can submit as many times as you want, and you can choose which submission you want us to grade by going to https://okpy.org/cal/data100/sp17/. After you've done that, make sure you've pushed your changes to Github ...
_ = ok.submit()
sp17/hw/hw5/hw5.ipynb
DS-100/sp17-materials
gpl-3.0
I now instantiate an instance of this class with a correctly set street attribute in the address dict. Then everything works well when we want to query the street address from this company:
cp1 = Company(name="Meier GmbH", address={"street": "Herforstweg 4"})
cp1.get_name()
cp1.get_address()
cp1.get_address().get("street")
fp_lesson_3_monads.ipynb
dennisproppe/fp_python
apache-2.0
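The `Company` class itself is defined earlier in the notebook, outside this excerpt. A hypothetical reconstruction consistent with how it is used here (the real definition may differ):

```python
# Hypothetical reconstruction of the Company class used in this lesson
# (the actual definition appears earlier in the notebook).
class Company:
    def __init__(self, name, address=None):
        self.name = name
        self.address = address

    def get_name(self):
        return self.name

    def get_address(self):
        return self.address

cp1 = Company(name="Meier GmbH", address={"street": "Herforstweg 4"})
print(cp1.get_address().get("street"))  # Herforstweg 4
```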
However, when we want to get the street name when the company doesn't have a street attribute, this lookup will fail and throw an error:
cp2 = Company("Schultze AG")
cp2.get_name()
cp2.get_address().get("street")
fp_lesson_3_monads.ipynb
dennisproppe/fp_python
apache-2.0
What we would normally do to alleviate this issue is to write a function that deals with null values:
def get_street(company):
    address = company.get_address()
    # "street" in address works in both Python 2 and 3;
    # dict.has_key was removed in Python 3.
    if address and "street" in address:
        return address.get("street")
    return None

get_street(cp2)

cp3 = Company(name="Wifi GbR", address={"zipcode": 11476})
get_street(cp3)
fp_lesson_3_monads.ipynb
dennisproppe/fp_python
apache-2.0
We now see that we are able to complete the request without an error, returning None, if there is no address given or if there is no dict entry for "street" in the address. But wouldn't it be nice to have this handled once and for all? Enter the "Maybe" monad!
class Maybe():
    def __init__(self, value):
        self.value = value

    def bind(self, fn):
        if self.value is None:
            return self
        return fn(self.value)

    def get_value(self):
        return self.value
fp_lesson_3_monads.ipynb
dennisproppe/fp_python
apache-2.0
Now we can rewrite get_street as get_street_from_company, using two helper functions:
def get_address(company):
    return Maybe(company.get_address())

def get_street(address):
    return Maybe(address.get('street'))

def get_street_from_company(company):
    return (Maybe(company)
            .bind(get_address)
            .bind(get_street)
            .get_value())

get_street_from_company(cp1)
g...
fp_lesson_3_monads.ipynb
dennisproppe/fp_python
apache-2.0
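The same chain also works with inline lambdas, which shows the short-circuiting without any named helpers. For self-containment this sketch redefines the `Maybe` class from above and uses a plain dict in place of a `Company` instance:

```python
class Maybe:
    def __init__(self, value):
        self.value = value

    def bind(self, fn):
        if self.value is None:
            return self
        return fn(self.value)

    def get_value(self):
        return self.value

# A company dict without an "address" key: the chain short-circuits at
# the first None instead of raising an AttributeError.
company = {"name": "Schultze AG"}
street = (Maybe(company)
          .bind(lambda c: Maybe(c.get("address")))
          .bind(lambda a: Maybe(a.get("street")))
          .get_value())
print(street)  # None
```

The null check lives in `bind` once, so every lookup chain built on `Maybe` gets it for free.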
Data Prep Load in cleaned experiment data, generated from this notebook. Filter to clients that loaded the discopane (this is for the control group, since we have already cross-referenced the TAAR logs; clients who did not load the discopane never saw the control, serving as noise in the experiment).
S3_PATH = "s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/cleaned_data/"

clean_data = sqlContext.read.parquet(S3_PATH).filter("discopane_loaded = true")
clean_data.groupby("branch").agg(F.countDistinct("client_id")).show()
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
Grab the min and max submission dates for filtering main_summary.
min_date = clean_data.select(F.min('submission_date_s3').alias('min_d')).collect()[0].min_d
max_date = clean_data.select(F.max('submission_date_s3').alias('max_d')).collect()[0].max_d
print min_date, max_date
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
Load in main_summary, filtered to the min date of the experiment and 42 days beyond its completion to allow for 6-week retention analysis. We then join main_summary with the experiment data.
ms = (
    sqlContext.read.option("mergeSchema", True)
    .parquet("s3://telemetry-parquet/main_summary/v4")
    .filter("submission_date_s3 >= '{}'".format(min_date))
    .filter("submission_date_s3 <= '{}'".format(date_plus_x_days(max_date, 7*N_WEEKS)))
    .filter("normalized_channel = 'release'")
    .filter(...
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
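The `date_plus_x_days` helper used in the filter above is defined elsewhere in the notebook. A hypothetical pure-Python equivalent, assuming `yyyymmdd` date strings as used throughout this analysis:

```python
from datetime import datetime, timedelta

def date_plus_x_days(date_str, days, fmt="%Y%m%d"):
    """Hypothetical stand-in for the date_plus_x_days helper used above:
    shift a yyyymmdd submission-date string forward by `days` days."""
    d = datetime.strptime(date_str, fmt) + timedelta(days=days)
    return d.strftime(fmt)

print(date_plus_x_days("20180312", 42))  # 20180423
```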
Calculate Retention Data Perform 12-week retention analysis based on this example. Retention for each of weeks 1 through 12 is additionally included, since we can get those values at low cost. We expand out to 12-week retention to better validate the data, since the TAAR branches exhibit suspiciously similar retention values.
joined = (
    joined.withColumn("period", get_period("enrollment_date", "submission_date_s3"))
    .filter("enrollment_date <= '{}'".format(max_date))
).distinct().cache()

joined.count()

ret_df = get_retention(joined)
ret_df.to_csv("taar_v2_retention.csv", index=False)
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
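Like `date_plus_x_days`, the `get_period` UDF is defined elsewhere in the notebook. A hypothetical pure-Python mirror of what it computes — the number of whole weeks between a client's enrollment date and a later activity ping, again assuming `yyyymmdd` strings:

```python
from datetime import datetime

def get_period(enrollment_date, submission_date, fmt="%Y%m%d"):
    """Hypothetical pure-Python mirror of the get_period UDF above:
    the number of whole weeks between enrollment and a later ping."""
    delta = (datetime.strptime(submission_date, fmt)
             - datetime.strptime(enrollment_date, fmt))
    return delta.days // 7

print(get_period("20180312", "20180326"))  # 2
```

A client then counts as retained in week k if any of their pings falls in period k, which is what `get_retention` aggregates over.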
Write to s3 since this job is quite expensive and should only be run once.
%%bash
aws s3 cp taar_v2_retention.csv s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
Load processed Retention Data This section loads the data generated above without having to re-run the entire notebook.
%%bash
aws s3 cp s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/taar_v2_retention.csv .

ret_df = pd.read_csv("taar_v2_retention.csv")
ret_df.fillna(0, inplace=True)
plt.rcParams['figure.figsize'] = (12, 6)
fig, ax = plt.subplots()
for group, data in ret_df.groupby("branch"):
    (data.sort_values("period")
     ...
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
Investigate nearly identical retention lines for TAAR Branches Let's look at 6-week retention over time by each enrollment date
day_over_day_retention = []
for i in range(40):
    d = date_plus_x_days("20180312", i)
    joinedx = joined.filter("enrollment_date = '{}'".format(d))
    ret_dfx = get_retention(joinedx)
    week6 = ret_dfx[ret_dfx.period == 6.0]
    for b, data in week6.groupby("branch"):
        x = {
            'branch': b,
            ...
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0
We see increased variability with time, which is most certainly due to the study being front-loaded with participants. Looking at enrollment:
(joined.groupby("enrollment_date")
 .agg(F.countDistinct("client_id").alias("number of participants"))
 .sort("enrollment_date")
 .toPandas()
 .plot(x='enrollment_date'))
analysis/TAARExperimentV2Retention.ipynb
maurodoglio/taar
mpl-2.0