Determine the number of aneurysms for which a variable is lower for CTA than for 3DRA.
numberofcases = np.empty(len(df_input.columns))
for i, variable in enumerate(df_input.columns):
    numberofcases[i] = sum(df_3dra.loc[j, variable] > df_cta.loc[j, variable]
                           for j in df_input.index.levels[0])
s_numberofcases = pd.Series(numberofcases, index=df_input.columns)
data_analysis.ipynb
ajgeers/3dracta
bsd-2-clause
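The loop above can also be expressed as a vectorized comparison. A minimal sketch with made-up toy data, assuming the two frames share a plain index (the column names WSS and OSI are hypothetical stand-ins for the paper's variables):

```python
import pandas as pd

# Hypothetical stand-ins for the 3DRA and CTA measurements:
# rows are aneurysms, columns are variables.
df_3dra = pd.DataFrame({'WSS': [1.0, 2.0, 3.0], 'OSI': [0.5, 0.1, 0.9]})
df_cta = pd.DataFrame({'WSS': [0.5, 2.5, 2.0], 'OSI': [0.6, 0.0, 0.8]})

# Counting per variable how often CTA is lower than 3DRA reduces to an
# elementwise comparison followed by a column-wise sum.
s_numberofcases = (df_3dra > df_cta).sum()
print(s_numberofcases['WSS'])  # -> 2 (rows 0 and 2)
```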
Compose a dataframe with the obtained statistical results, corresponding to the 'online table' of the journal paper.
d = {'M': s_numberofcases, 'P': s_pvalue, 'Mean (%)': s_mean, 'SE (%)': s_standarderror}
df_output = pd.DataFrame(d, columns=['M', 'P', 'Mean (%)', 'SE (%)'])
df_output
Boxplot

Make boxplots showing the distributions of the relative differences over all aneurysms.
# extract arrays to plot from dataframe
array_yticklabels = ['$\mathregular{' + variable.replace('%', '\%') + '}$'
                     for variable in df_reldiff.columns]
array_reldiff = df_reldiff.values

# create plot
fig, ax = plt.subplots()
bp = ax.boxplot(array_reldiff, sym='+', vert=0, patch_artist=True)

# set labels
ax.set_xlabel('Relative difference (%)', fontsize=18)
ax.set_xlim(0, 130)
ax.set_yticklabels(array_yticklabels, fontsize=12)

# format box, whiskers, etc.
plt.setp(ax.get_xticklabels(), fontsize=12)
plt.setp(bp['boxes'], color='black')
plt.setp(bp['medians'], color='white')
plt.setp(bp['whiskers'], color='black', linestyle='-')
plt.setp(bp['fliers'], color='black', markersize=5)
plt.tight_layout()
So basically here we are just flattening the image dimensions
# normalize
x_train /= 255
x_test /= 255

# flatten images
input_dim = 784
epochs = 20

# create training+test positive and negative pairs
digit_indices = [np.where(y_train == i)[0] for i in range(10)]
tr_pairs, tr_y = create_pairs(x_train, digit_indices)
digit_indices = [np.where(y_test == i)[0] for i in range(10)]
te_pairs, te_y = create_pairs(x_test, digit_indices)
tr_pairs.shape
Siamese Network - MNSIT - Breakdown.ipynb
gth158a/learning
apache-2.0
so these dimensions are basically:
+ 108400 is the number of items in the list
+ 2: each entry in the list has 2 parts (which are the pairs)
+ 784 is the dimension of the flattened image

Scratch that: there is no list anymore, since the function calls np.array() before returning
tr_pairs[:,0].shape
tr_pairs[:,1].shape
tr_y.shape
len(digit_indices)
np.where(y_test == 2)
np.where returns a tuple of arrays, so you need the [0] to get just the array
np.where(y_test == 2)[0]
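A quick way to see the tuple-vs-array distinction, on a tiny made-up label array:

```python
import numpy as np

y = np.array([0, 2, 1, 2, 2])
result = np.where(y == 2)      # a tuple with one index array per dimension
indices = np.where(y == 2)[0]  # the index array itself

print(type(result))   # <class 'tuple'>
print(indices)        # [1 3 4]
```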
so the code above creates an array for each of the numbers; each array has the indices for each of the numbers

create_pairs function

```python
def create_pairs(x, digit_indices):
    '''Positive and negative pair creation.
    Alternates between positive and negative pairs.
    '''
    pairs = []
    labels = []
    n = min([len(digit_indices[d]) for d in range(10)]) - 1
    for d in range(10):
        for i in range(n):
            z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
            pairs += [[x[z1], x[z2]]]
            inc = random.randrange(1, 10)
            dn = (d + inc) % 10
            z1, z2 = digit_indices[d][i], digit_indices[dn][i]
            pairs += [[x[z1], x[z2]]]
            labels += [1, 0]
    return np.array(pairs), np.array(labels)
```
digit_indices[0][2], digit_indices[0][2+1]
z1, z2 = digit_indices[0][2], digit_indices[0][2+1]
so we are getting the indices of two instances of the same number
x_test[z1][:10]
x_test[z2][:10]
then we are putting them together as a pair
pairs += [[x_test[z1], x_test[z2]]]
len(pairs)
so we keep building the pairs
inc = random.randrange(1, 10)
inc
d = 4
dn = (d + inc) % 10
dn
here we are building a pair with a true and a false instance; that's why the inc on dn
# this adds the negative pair
# z1, z2 = digit_indices[d][i], digit_indices[dn][i]
# pairs += [[x[z1], x[z2]]]
the label is 1 for the genuine pair and 0 for the fake one that we just made
# labels += [1, 0]
pairs = []
labels = []
# how many indices are there per number: get the min over all numbers,
# then subtract 1 - I assume it's because the range is non-inclusive
n = min([len(digit_indices[d]) for d in range(10)]) - 1
n
for d in range(10):
    print(len(digit_indices[d]))
n = 10
ran = 5
l = []
l += [1, 0]
l
for d in range(ran):
    for i in range(n):
        z1, z2 = digit_indices[d][i], digit_indices[d][i + 1]
        print(z1.shape, z2.shape)
        pairs += [[x_train[z1], x_train[z2]]]
        print(pairs)
        inc = random.randrange(1, 10)
        dn = (d + inc) % 10
        z1, z2 = digit_indices[d][i], digit_indices[dn][i]
        pairs += [[x_train[z1], x_train[z2]]]
        labels += [1, 0]
np.array(pairs), np.array(labels)

# network definition
base_network = create_base_network(input_dim)
input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))
base_network.summary()

# because we re-use the same instance `base_network`,
# the weights of the network will be shared across the two branches
processed_a = base_network(input_a)
processed_b = base_network(input_b)
distance = Lambda(euclidean_distance,
                  output_shape=eucl_dist_output_shape)([processed_a, processed_b])
model = Model([input_a, input_b], distance)
model.summary()

# loss is specified manually; optimizer is RMSprop
tset = [tr_pairs[:, 0], tr_pairs[:, 1]]
len(tset)
len(tr_y)

# train
rms = RMSprop()
model.compile(loss=contrastive_loss, optimizer=rms)
model.fit([tr_pairs[:, 0], tr_pairs[:, 1]], tr_y,
          batch_size=128, epochs=epochs,
          validation_data=([te_pairs[:, 0], te_pairs[:, 1]], te_y))

# compute final accuracy on training and test sets
pred = model.predict([tr_pairs[:, 0], tr_pairs[:, 1]])
tr_acc = compute_accuracy(pred, tr_y)
pred = model.predict([te_pairs[:, 0], te_pairs[:, 1]])
te_acc = compute_accuracy(pred, te_y)
print('* Accuracy on training set: %0.2f%%' % (100 * tr_acc))
print('* Accuracy on test set: %0.2f%%' % (100 * te_acc))
Variance

Once two statisticians, of height 4 feet and 5 feet, had to cross a river of AVERAGE depth 3 feet. Meanwhile, a third person came along and said, "What are you waiting for? You can easily cross the river."

Variance is the average squared distance of the data values from the mean. <img style="float: left;" src="img/variance.png" height="320" width="320"> <br> <br>

Standard Deviation

It is the square root of the variance. It has the same units as the data and the mean.
shoes_before.head()

import matplotlib.pyplot as plt
%matplotlib inline

# Find the standard deviation of the sales
print("Before Campaign:")
print(shoes_before.std())
print()
print("During Campaign:")
print(shoes_during.std())
print()
print("After Campaign:")
print(shoes_after.std())
Module_2f_ABTesting.ipynb
amitkaps/hackermath
mit
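The "average squared distance from the mean" definition can be checked directly against NumPy, on a small made-up sample (note that pandas' `.std()` above uses the sample formula with `ddof=1`, while this is the population formula):

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

mean = data.mean()
variance = ((data - mean) ** 2).mean()  # average squared distance from the mean
std = variance ** 0.5                   # standard deviation: square root of variance

print(mean, variance, std)  # 5.0 4.0 2.0
assert np.isclose(variance, data.var())  # matches NumPy's population variance
```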
Co-variance

Covariance is a measure of the (average) co-variation between two variables, say x and y. It describes both how far the variables are spread out and the nature of their relationship: covariance is a measure of how much two variables change together. Compare this to variance, which only describes how a single variable varies on its own. <img style="float: left;" src="img/covariance.png" height="270" width="270"> <br> <br> <br> <br>
# Find the covariance of the sales
# Use the cov function
print("Before Campaign:")
print(shoes_before.cov())
print()
print("During Campaign:")
print(shoes_during.cov())
print()
print("After Campaign:")
print(shoes_after.cov())
Correlation

The extent to which two or more variables fluctuate together. A positive correlation indicates the extent to which those variables increase or decrease in parallel; a negative correlation indicates the extent to which one variable increases as the other decreases. <img style="float: left;" src="img/correlation.gif" height="270" width="270"> <br> <br> <br>
import seaborn as sns
sns.jointplot(x="UA101", y="UA102", data=shoes_before)
sns.jointplot(x="UA101", y="UA102", data=shoes_during)

# Find correlation between sales
print("Before Campaign:")
print(shoes_before.corr())
print()
print("During Campaign:")
print(shoes_during.corr())
print()
print("After Campaign:")
print(shoes_after.corr())
Correlation != Causation

Correlation between two variables does not necessarily imply that one causes the other. <img style="float: left;" src="img/correlation_not_causation.gif" height="570" width="570">
# Let's do some analysis on UA101 now
# Find difference between mean sales before and after the campaign
np.mean(shoes_after.UA101) - np.mean(shoes_before.UA101)
On average, the sales after the campaign are higher than the sales before the campaign. But is the difference legit? Could it be due to chance?

Classical method: t-test
Hacker's method: provided in notebook 2c

Effect Size

Because you can't argue with all the fools in the world. It's easier to let them have their way, then trick them when they're not paying attention - Christopher Paolini
# Find % increase in mean sales
(np.mean(shoes_after.UA101) - np.mean(shoes_before.UA101)) / np.mean(shoes_after.UA101) * 100
Would the business feel comfortable spending millions of dollars if the increase is going to be 25%? Does it work for the company? Maybe yes, if margins are good and this increase is considered good. But if the returns from the campaign do not let the company break even, it makes no sense to take that path.

Someone tells you the result is statistically significant. The first question you should ask: how large is the effect? To answer that question, we will use the concept of a confidence interval.

In plain English, a confidence interval is the range of values the measurement metric is likely to take. An example: 90% of the time, the increase in average sales (before vs. after campaign) would fall between 3.4 and 6.7. (These numbers are illustrative; we will derive the real ones below.)

Hacker's way to do this: bootstrapping. (We will use the library here, though.)

Where do we go from here? There are two points to be made. Why do we need significance testing if confidence intervals can provide more information? And how does this relate to the traditional statistical procedure of finding confidence intervals?

For the first: what if sales in the first month after the campaign were 80 and in the month before were 40? The difference is 40, and the confidence interval, as explained above, using replacement, would always generate 40. But if we do significance testing as detailed above, where the labels are shuffled, the sales are equally likely to occur in both groups, so significance testing would conclude there was no difference. But don't we all know that the data is too small to make meaningful inferences?

For the second: the traditional statistical derivation assumes a normal distribution. But what if the underlying distribution isn't normal? Also, people relate to resampling much better :-)

Standard Error

It is a measure of how far the estimate is likely to be off, on average. More technically, it is the standard deviation of the sampling distribution of a statistic (usually the mean). Please do not confuse it with standard deviation: standard deviation measures the variability of the observed quantity, while standard error describes the variability of the estimate.
from scipy import stats

# Standard error for mean sales before campaign
stats.sem(shoes_before.UA101)
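The bootstrapping approach mentioned above can be sketched directly with NumPy. The sales figures below are made up stand-ins for the UA101 data, and the 90% interval is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly sales (stand-in for the real UA101 numbers)
sales = np.array([42.0, 51.0, 38.0, 47.0, 55.0, 44.0, 49.0, 40.0])

# Bootstrap: resample with replacement, recompute the mean many times,
# and take percentiles of the resampled means as the confidence interval.
boot_means = np.array([rng.choice(sales, size=len(sales), replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [5, 95])  # 90% confidence interval
print(lo, hi)
```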
Hypothesis Testing

(We are covering what is referred to as the frequentist method of hypothesis testing.)

We would like to know if the effects we see in the sample (observed data) are likely to occur in the population. Classical hypothesis testing works by conducting a statistical test to answer the following question: given the sample and an effect, what is the probability of seeing that effect just by chance?

Here are the steps on how we would do this:

1. Define the null hypothesis
2. Compute the test statistic
3. Compute the p-value
4. Interpret the result

If the p-value is very low (more often than not, below 0.05), the effect is considered statistically significant. That means the effect is unlikely to have occurred by chance. The inference? The effect is likely to be seen in the population too.

This process is very similar to the proof-by-contradiction paradigm. We first assume that the effect is false; that's the null hypothesis. The next step is to compute the probability of obtaining that effect under the null hypothesis (the p-value). If the p-value is very low (<0.05 as a rule of thumb), we reject the null hypothesis.
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib as mpl
%matplotlib inline
import seaborn as sns
sns.set(color_codes=True)

# Mean of sales before campaign
shoes_before.UA101.mean()

# Confidence interval on the mean of sales before campaign
stats.norm.interval(0.95, loc=shoes_before.UA101.mean(),
                    scale=shoes_before.UA101.std()/np.sqrt(len(shoes_before)))

# Find 80% confidence interval

# Find mean and 95% CI on mean of sales after campaign
print(shoes_after.UA101.mean())
stats.norm.interval(0.95, loc=shoes_after.UA101.mean(),
                    scale=shoes_after.UA101.std()/np.sqrt(len(shoes_after)))

# What does confidence interval mean?

# Effect size
print("Effect size:", shoes_after.UA101.mean() - shoes_before.UA101.mean())
Null Hypothesis: mean sales aren't significantly different.

Perform a t-test and determine the p-value.
stats.ttest_ind(shoes_before.UA101, shoes_after.UA101, equal_var=True)
The p-value is the probability that the effect size arose by chance. And here, the p-value is almost 0. Conclusion: the sales difference is significant.

Assumption of the t-test

One assumption is that the data came from a normal distribution. <br> There's the Shapiro-Wilk test for normality: if its p-value is less than 0.05, there's a low chance that the distribution is normal.
stats.shapiro(shoes_before.UA101)
?stats.shapiro
1.1 b. Find the average of a list of numbers using a for loop
runningTotal = 0
listOfNumbers = [4, 7, 9, 1, 8, 6]
for each in listOfNumbers:
    # each time round the loop add the next item to the running total
    runningTotal = runningTotal + each
# the average is the runningTotal at the end / how many numbers
average = runningTotal / len(listOfNumbers)
print(listOfNumbers)
print("The average of these numbers is {0:.2f}".format(average))
bicf_nanocourses/courses/python_1/lectures/day_2/Day2-Lesson-3-Exercise1_key.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
1.1 c. Write a program that prints a string in reverse, character by character.

Input: Python
Expected Output:
n
o
h
t
y
P

Hint: you can use the range and len functions.<br>

 P  y  t  h  o  n
 0  1  2  3  4  5
-6 -5 -4 -3 -2 -1
word = "Python"
# print(len(word))
for char in range(len(word) - 1, -1, -1):  # range(start=len(word)-1, stop=-1, step=-1)
    print(word[char])
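An alternative to counting indices backwards: Python's negative-step slicing reverses a string in one expression. A small sketch:

```python
word = "Python"

# Slicing with step -1 walks the string from the end to the start
for char in word[::-1]:
    print(char)

reversed_word = word[::-1]
print(reversed_word)  # nohtyP
```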
1.2 Write a Python program to count the number of even and odd numbers from a series of numbers. <br>Hint: review of for loop, if statements
numbers = (1, 2, 3, 4, 5, 6, 7, 8, 9)  # Declaring the tuple
count_odd = 0
count_even = 0
# type your code here
for x in numbers:
    if not x % 2:
        count_even += 1
    else:
        count_odd += 1
print("Number of even numbers :", count_even)
print("Number of odd numbers :", count_odd)
1.3 Check if the given list of strings contains the EcoRI site motif, and print each string that doesn't contain the motif until two strings with the motif are found.

motif = "GAATTC" (5' EcoRI restriction site)

Output: <br>
AGTGAACCGTCAGATCCGCTAGCGCGAATTC doesn't contain the motif
GGAGACCGACACCCTCCTGCTATGGGTGCTGCTGCTC doesn't contain the motif
TGGGTGCCCGGCAGCACCGGCGACGCACCGGTCGC doesn't contain the motif
CACCATGGTGAGCAAGGGCGAGGAGAATAACATGGCC doesn't contain the motif
Two strings in given list contain the motif
motif = "GAATTC"
count = 0
dna_strings = ['AGTGAACCGTCAGATCCGCTAGCGCGAATTC',
               'GGAGACCGACACCCTCCTGCTATGGGTGCTGCTGCTC',
               'TGGGTGCCCGGCAGCACCGGCGACGCACCGGTCGC',
               'CACCATGGTGAGCAAGGGCGAGGAGAATAACATGGCC',
               'ATCATCAAGGAGTTCATGCGCTTCAAGAATTC',
               'CATGGAGGGCTCCGTGAACGGCCACGAGTTCGAGA',
               'TCGAGGGCGAGGGCGAGGGCCGCCCCTACGAGGCCTT']
# type your code
for item in dna_strings:
    if item.find(motif) != -1:  # find returns -1 when the motif is absent
        count += 1
        if count == 2:
            print("Two strings in given list contain the motif")
            break
    else:
        print(item, ": doesn't contain the motif")
1.4 Write a Python program that prints all the numbers in range from 0 to 10 except 5 and 10. <br> Hint: use continue
# type your code here
for value in range(10):
    if value == 5 or value == 10:
        continue
    print(value, end=' ')
print("\n")
1.5 (Multi-Part) Next series of tasks about lists and list manipulations: <br>a. Create a list of 5 of your favorite things.
my_favorites=['Music', 'Movies', 'Coding', 'Biology', 'Python']
b. Use the print() function to print your list.
print(my_favorites)
c. Use the print() function to print out the middle element.
print(my_favorites[2])
d. Now replace the middle element with a different item, your favorite song, or song bird.
my_favorites[2]='European robin'
e. Use the same print statement from b. to print your new list. Check out the differences.
print(my_favorites)
f. Add a new element to the end. Read about append().
my_favorites.append('Monkeys')
g. Add a new element to the beginning. Read about insert().
my_favorites.insert(0, 'Evolution')
h. Add a new element somewhere other than the beginning or the end.
my_favorites.insert(3, 'Coffee')
1.6 Write a script that splits a string into a list:
- Save the string sapiens, erectus, neanderthalensis as a variable.
- Print the string.
- Split the string into individual words and print the result of the split. (Think about the ', '.)
- Store the resulting list in a new variable.
- Print the list.
- Sort the list alphabetically and print (hint: look up the function sorted()).
- Sort the list by the length of each string and print. (The shortest string should be first.) Check out the documentation of the key argument.
# type your code
hominins = 'sapiens, erectus, neanderthalensis'
print(hominins)
hominin_individuals = hominins.split(', ')
print(hominin_individuals)
hominin_individuals = sorted(hominin_individuals)
print("List: ", hominin_individuals)
hominin_individuals = sorted(hominin_individuals, key=len)
print(hominin_individuals)
Extra Practice: 1.7 Use a list comprehension to generate a list of tuples. The tuples should contain the sequences and their lengths from the previous problem. Print out the length and the sequence (i.e., "4\tATGC\n").
sequences = ['ATGCCCGGCCCGGC', 'GCGTGCTAGCAATACGATAAACCGG',
             'ATATATATCGAT', 'ATGGGCCC']
seq_lengths = [(seq, len(seq)) for seq in sequences]
print(seq_lengths)
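To get the tab-separated "4\tATGC"-style output the exercise asks for, the tuples can be printed one per line:

```python
sequences = ['ATGCCCGGCCCGGC', 'GCGTGCTAGCAATACGATAAACCGG',
             'ATATATATCGAT', 'ATGGGCCC']
seq_lengths = [(seq, len(seq)) for seq in sequences]

# Print length and sequence separated by a tab, one pair per line
for seq, length in seq_lengths:
    print("{}\t{}".format(length, seq))
```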
Question 2: Dictionaries and Sets

2.1 Create a dictionary to store DNA restriction enzyme names and their motifs from: <br>https://www.neb.com/tools-and-resources/selection-charts/alphabetized-list-of-recognition-specificities <br>e.g.: EcoRI = GAATTC AvaII = GGACC BisI = GGACC
enzymes = {'EcoRI': 'GAATTC', 'AvaII': 'GGACC', 'BisI': 'GCATGCGC',
           'SacII': 'CCGCGG', 'BamHI': 'GGATCC'}
print(enzymes)
a. Look up the motif for the SacII enzyme
print(enzymes['SacII'])
b. Add the enzymes below and their motifs to the dictionary: <br>
KasI: GGCGCC
AscI: GGCGCGCC
EciI: GGCGGA
enzymes['KasI'] = 'GGCGCC'
enzymes['AscI'] = 'GGCGCGCC'
enzymes['EciI'] = 'GGCGGA'
print(enzymes)
2.2 Suppose dna is a string variable that contains only 'A', 'C', 'G' or 'T' characters. Write code to find and print the character and frequency (max_freq) of the most frequent character in the string dna. <br>
dna = 'AAATTCGTGACTGTAA'
dna_counts = {'T': dna.count('T'), 'C': dna.count('C'),
              'G': dna.count('G'), 'A': dna.count('A')}
print(dna_counts)
max_char = max(dna_counts, key=dna_counts.get)
max_freq = dna_counts[max_char]
print(max_char, max_freq)
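collections.Counter from the standard library gives the same answer in one step; a small sketch with the same sequence:

```python
from collections import Counter

dna = 'AAATTCGTGACTGTAA'
counts = Counter(dna)                    # counts every character at once
char, freq = counts.most_common(1)[0]    # (character, count) of the most frequent
print(char, freq)  # A 6
```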
2.3 If you create a set using a DNA sequence, what will you get back? Try it with this sequence: GATGGGATTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGACAGAAACACTTTTCGTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGAC
DNA='GATGGGATTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGACAGAAACACTTTTCGTGGGGTTTTCCCCTCCCATGTGCTCAAGACTGGCGCTAAAAGTTTTGAGCTTCTCAAAAGTCTAGAGCCACCGTCCAGGGAGCAGGTAGCTGCTGGGCTCCGGGGACACTTTGCGTTCGGGCTGGGAGCGTGCTTTCCACGACGGTGACACGCTTCCCTGGATTGGCAGCCAGACTGCCTTCCGGGTCACTGCCATGGAGGAGCCGCAGTCAGATCCTAGCGTCGAGCCCCCTCTGAGTCAGGAAACATTTTCAGACCTATGGAAACTACTTCCTGAAAACAACGTTCTGTCCCCCTTGCCGTCCCAAGCAATGGATGATTTGATGCTGTCCCCGGACGATATTGAACAATGGTTCACTGAAGACCCAGGTCCAGATGAAGCTCCCAGAATTCGCCAGAGGCTGCTCCCCCCGTGGCCCCTGCACCAGCAGCTCCTACACCGGCGGCCCCTGCACCAGCCCCCTCCTGGCCCCTGTCATCTTCTGTCCCTTCCCAGAAAACCTACCAGGGCAGCTACGGTTTCCGTCTGGGCTTCTTGCATTCTGGGACAGCCAAGTCTGTGACTTGCACGTACTCCCCTGCCCTCAACAAGATGTTTTGCCAACTGGCCAAGACCTGCCCTGTGCAGCTGTGGGTTGATTCCACACCCCCGCCCGGCACCCGCGTCCGCGCCATGGCCATCTACAAGCAGTCACAGCACATGACGGAGGTTGTGAGGCGCTGCCCCCACCATGAGCGCTGCTCAGATAGCGATGGTCTGGCCCCTCCTCAGCATCTTATCCGAGTGGAAGGAAATTTGCGTGTGGAGTATTTGGATGAC' DNA_set = set(DNA) print('DNA_set contains {}'.format(DNA_set))
2. Scraping

The amount of insight you can gain by scraping free information from the net is astonishing, especially given how easy it is to scrape. Whatever you want to learn about, there is free information out there (and probably a Python package to help). We will work on the simplest scenario: pulling off files that do not need authentication or setting cookies. We will ignore robot rules and pretend that we are a Firefox browser. For instance, this will get you BIST's main site:
from urllib.request import Request, urlopen

url = "http://bist.eu/"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
content = urlopen(req).read()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
If we inspect what it is, it looks like HTML:
content[:50]
The way we read it, the content is a byte array:
type(content)
Convert it to UTF-8 to see what it contains in a nicer way:
print(content.decode("utf-8")[:135])
You can save it to a file so that you do not have to scrape it again. This is important when you scrape millions of files, and the scraping has to be restarted. Have mercy on the webserver, or you risk getting blacklisted.
# the with-statement closes the file automatically
with open("bist.html", 'wb') as file:
    file.write(content)
You can technically open this in a browser, but it is only the HTML part of the site: the style files, images, and a whole lot of other things that make a page work are missing. Lazy people can launch a browser from Python to see what the page looks like:
from subprocess import run
run(["firefox", "-new-tab", "bist.html"])
Exercise 1. Copy an image location from the BIST website. Scrape it and display it. For the display, you can import Image from IPython.display and then use Image(data=content). Technically you could use imshow from Matplotlib, but then you would have to decompress the image first.

We put everything together in a function that only attempts to download a file if it is not present locally. If you do not specify a filename, it tries to extract one from the URL by taking everything after the last "/" character as the filename.
import os
from urllib.request import Request, urlopen

def get_file(url, filename=None):
    if filename is None:
        slash = url.rindex("/")
        filename = url[slash+1:]
    if os.path.isfile(filename):
        with open(filename, 'rb') as file:
            content = file.read()
    else:
        req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        content = urlopen(req).read()
        with open(filename, 'wb') as file:
            file.write(content)
    return content
3. Cleaning structured data

We will integrate several data sources, as this is the most common pattern you encounter in day-to-day data science. We will have two structured sources of data, and one semi-structured:

- A table of journal titles and matching abbreviations. This will help us standardize journal titles in the data, as some will be given as full titles, others as abbreviations.
- A table of journals and matching Impact Factors.
- Search results in HTML format that contain the metadata of the papers we want to study.

We start with the easier part, obtaining and cleaning the structured data. Let us get the journal abbreviations:
abbreviations = get_file("https://github.com/JabRef/abbrv.jabref.org/raw/master/journals/journal_abbreviations_webofscience.txt")
Let's see what it contains:
print(abbreviations.decode("utf-8")[:1000])
This is fairly straightforward: the file starts with a bunch of comments, then two columns containing the full title and the abbreviation, separated by the "=" character. We can load it with Pandas without thinking much:
abbs = pd.read_csv("journal_abbreviations_webofscience.txt", sep='=',
                   comment='#', names=["Full Journal Title", "Abbreviation"])
In principle, it looks okay:
abbs.head()
If we take a closer look at a particular element, we will notice that something is off:
abbs.iloc[0, 1]
The string starts with an unnecessary white space. We also do not want to miss any possible match because of differences in capitalization. So we strip leading and trailing white space and convert every entry to upper case:
abbs = abbs.applymap(lambda x: x.upper().strip())
This is done: it was an easy and clean data set. Let us go for the next one. The Journal Citation Reports reduced scientific contributions to a single number, allowing incompetent people to make unfounded judgements on the quality of your research output. So when you encounter a question like "I want to know which JCR journal is paid and easy to get publication", you do not even blink an eye. Fortunately, the same thread has a download link for the 2015 report in an XLS file. We go for it:
get_file("https://www.researchgate.net/file.PostFileLoader.html?id=558730995e9d9735688b4631&assetKey=AS%3A273803718922244%401442291301717", "2014_SCI_IF.xlsx");
We can open it in LibreOffice or OpenOffice, using the lazy way again:
run(["soffice", "2014_SCI_IF.xlsx"])
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
The two first lines are junk, and there is an index column. Let us load it in Pandas:
ifs = pd.read_excel("2014_SCI_IF.xlsx", skiprows=2, index_col=0)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
If this fails, you do not have the package for reading Excel files. Install it with `import pip; pip.main(["install", "xlrd"])`. Now this table definitely looks fishy:
ifs.head()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
What are those empty columns? They do not seem to be in the file. The answer lies at the end of the table:
ifs.tail()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Some stupid multi-column copyright notice at the bottom distorts the entire collection. This is why spreadsheets are the most hated data format for structured data, as the uncontrolled blend of content and formatting lets ignorant people give you a headache. The journals are listed according to their ranking, in decreasing order. We know that we do not care about journals that do not have an Impact Factor. We take this as a cut-off for rows, and re-read only the rows and columns we are interested in:
skip = len(ifs) - ifs["Journal Impact Factor"].last_valid_index() + 1
ifs = pd.read_excel("2014_SCI_IF.xlsx", skiprows=2, index_col=0, skip_footer=skip, parse_cols="A,B,E")
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
This looks much better:
ifs.head()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
But not perfect:
ifs.sample(n=10)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Now you see why we converted everything to upper case before.

Exercise 2. Convert everything in the column "Full Journal Title" to upper case. You will encounter a charming inconsistency in Pandas.

4. Cleaning and filtering of semi-structured data

The horror starts when even more control is given to the humans who create the data. We move on to studying the search results for the authors Lewenstein and Acín on arXiv. The metadata is entered by humans, which makes it inconsistent and chaotic. The advanced search on arXiv is simple and handy, and the search URL has a well-defined structure that we can exploit. We create a wrapper function around our scraping engine, which gets the first 400 results for a requested author and returns a parsed HTML file. The library BeautifulSoup does the parsing: it understands the hierarchical structure of the file, and allows you to navigate the hierarchy with a handful of convenience functions.
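Before wiring it into the scraper, BeautifulSoup's navigation style can be sketched on a tiny inline snippet. The markup here only mimics the shape of an arXiv search result; it is purely illustrative:

```python
from bs4 import BeautifulSoup

# Toy HTML shaped roughly like one arXiv search result entry
html = "<dl><dt>1.</dt><dd><div class='list-title'>Title: Hello</div></dd></dl>"
soup = BeautifulSoup(html, "html.parser")
dd = soup.find("dd")                         # first <dd> element in the tree
title = dd.find("div", class_="list-title")  # descend to a <div> by CSS class
print(title.text)  # → Title: Hello
```

The `class_` keyword (with a trailing underscore, since `class` is a Python keyword) is the same trick we use throughout the rest of this section.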
def get_search_result(author):
    filename = author + ".html"
    url = "https://arxiv.org/find/all/1/au:+" + author + "/0/1/0/all/0/1?per_page=400"
    page = get_file(url, filename)
    return BeautifulSoup(page, "html.parser")
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
It is a common pattern to exploit search URLs to get what you want. Many don't let you pull off too many results in a single page, and you have to step through the "Next" links over and over again to get all the results you want. This is also not difficult, but it has the extra complication that you have to extract the "Next" link and call it recursively. Let us study the results:
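The recursive pattern can be sketched without any network code. Here `fetch` is a hypothetical stand-in for a function that downloads and parses one results page, returning that page's entries plus the URL behind the "Next" link (or None on the last page):

```python
def scrape_all(url, fetch, results=None):
    """Collect entries from a chain of paginated result pages."""
    if results is None:
        results = []
    entries, next_url = fetch(url)
    results.extend(entries)             # keep this page's entries
    if next_url is not None:            # follow the "Next" link recursively
        scrape_all(next_url, fetch, results)
    return results

# Demo with three fake pages standing in for paginated search results:
pages = {"p1": (["a", "b"], "p2"),
         "p2": (["c"], "p3"),
         "p3": (["d"], None)}
print(scrape_all("p1", pages.get))  # → ['a', 'b', 'c', 'd']
```

In a real scraper, `fetch` would wrap `get_file` and a BeautifulSoup lookup for the "Next" anchor; the recursion itself is the only new idea.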
lewenstein = get_search_result("Lewenstein_M")
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
We can view the source in a browser:
run(["firefox", "-new-tab", "view-source:file://" + os.getcwd() + "/Lewenstein_M.html"])
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Skipping the boring header part, the source code of the search result looks like this: ```html <h3>Showing results 1 through 389 (of 389 total) for <a href="/find/all/1/au:+Lewenstein_M/0/1/0/all/0/1?skip=0&amp;query_id=ff0631708b5d0dd5">au:Lewenstein_M</a></h3> <dl> <dt>1. <span class="list-identifier"><a href="/abs/1703.09814" title="Abstract">arXiv:1703.09814</a> [<a href="/pdf/1703.09814" title="Download PDF">pdf</a>, <a href="/ps/1703.09814" title="Download PostScript">ps</a>, <a href="/format/1703.09814" title="Other formats">other</a>]</span></dt> <dd> <div class="meta"> <div class="list-title mathjax"> <span class="descriptor">Title:</span> Efficient Determination of Ground States of Infinite Quantum Lattice Models in Three Dimensions </div> <div class="list-authors"> <span class="descriptor">Authors:</span> <a href="/find/cond-mat/1/au:+Ran_S/0/1/0/all/0/1">Shi-Ju Ran</a>, <a href="/find/cond-mat/1/au:+Piga_A/0/1/0/all/0/1">Angelo Piga</a>, <a href="/find/cond-mat/1/au:+Peng_C/0/1/0/all/0/1">Cheng Peng</a>, <a href="/find/cond-mat/1/au:+Su_G/0/1/0/all/0/1">Gang Su</a>, <a href="/find/cond-mat/1/au:+Lewenstein_M/0/1/0/all/0/1">Maciej Lewenstein</a> </div> <div class="list-comments"> <span class="descriptor">Comments:</span> 11 pages, 9 figures </div> <div class="list-subjects"> <span class="descriptor">Subjects:</span> <span class="primary-subject">Strongly Correlated Electrons (cond-mat.str-el)</span>; Computational Physics (physics.comp-ph) </div> </div> </dd> ``` This is the entire first result. It might look intimidating, but as long as you know that in HTML a mark-up starts with `<whatever>` and ends with `</whatever>`, you will find regular and hierarchical patterns. If you stare hard enough, you will see that the `<dd>` tag contains most of the information we want: it has the authors, the title, the journal reference if there is one (not in the example shown), and the primary subject. 
It does not actually matter what `<dd>` is: we are not writing a browser, we are scraping data. As a quick sanity check, we can easily extract the titles and verify that they match the number of search results. The `lewenstein` object is an instance of the BeautifulSoup class, which has methods to find all instances of a given mark-up. We use this to find the titles:
titles = []
for dd in lewenstein.find_all("dd"):
    titles.append(dd.find("div", class_="list-title mathjax"))
len(titles)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
So far so good, although the titles do not really look like we expect them:
titles[0]
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
We define a helper function to extract the title:
def extract_title(title):
    start = title.index(" ")
    return title[start+1:-1]

titles = []
for dd in lewenstein.find_all("dd"):
    titles.append(extract_title(dd.find("div", class_="list-title mathjax").text))
titles[0]
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
The next problem we face is that not all of these papers belong to Maciej Lewenstein: some impostors share the abbreviated name M. Lewenstein. They are easy to detect if they use the non-abbreviated name. Let us run through the page again, noting which subjects the impostors publish in. For this, we introduce another auxiliary function that extracts the short name of the subject. We also note the primary subject when the abbreviated form of the name appears. We use another function to drop "." and merge multiple white spaces into a single one. A simple regular expression finds the candidate Lewensteins.
def extract_subject(long_subject):
    start = long_subject.index("(")
    return long_subject[start+1:-1]

def drop_punctuation(string):
    result = string.replace(".", " ")
    return " ".join(result.split())

true_lewenstein = ["Maciej Lewenstein", "M Lewenstein"]
impostors = set()
primary_subjects = set()
for dd in lewenstein.find_all("dd"):
    div = dd.find("div", class_="list-authors")
    subject = extract_subject(dd.find("span", class_="primary-subject").text)
    names = [drop_punctuation(a.text) for a in div.find_all("a")]
    for name in names:
        if re.search("M.* Lewenstein", name):
            if name not in true_lewenstein:
                impostors.add(name + " " + subject)
            elif "Maciej" not in name:
                primary_subjects.add(subject)
print(impostors)
print(primary_subjects)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
So it is only one person, and we can be reasonably confident that Maciej Lewenstein is unlikely to publish in these subjects. The other good news is that all the short forms of the name belong to physics papers, and not computer science. Armed with this knowledge, we can filter out the correct manuscripts. We need to filter one more thing: we are only interested in papers for which a journal reference is given. Further digging in the HTML code lets us find the correct tag. While we are putting together the correct records, we also normalize his name. Annoyingly, the `<dd>` tag we have been focusing on does not contain the arXiv ID or the year. We zip the main loop's iterator with the `<dt>` tag, because only this one contains the arXiv ID, from which the year can be extracted. It always comes paired with the `<dd>` tag, completing the metadata of the manuscripts. We define yet another set of auxiliary functions to extract everything we need. The routine for extracting the journal title already performs stripping and converting to upper case. We assume that the name of the journal is the stuff on the matching line before the first digit (which is probably the volume or some other bibliographic information).
def extract_journal(journal):
    start = journal.index(" ")
    raw = journal[start+1:-1]
    m = re.search(r"\d", raw)
    return drop_punctuation(raw[:m.start()]).strip().upper()

def extract_title(title):
    start = title.index(" ")
    return title[start+1:-1]

def extract_id_and_year(arXiv):
    start = arXiv.index(":")
    if "/" in arXiv:
        year_index = arXiv.index("/")
    else:
        year_index = start
    year = arXiv[year_index+1:year_index+3]
    if year[0] == "9":
        year = int("19" + year)
    else:
        year = int("20" + year)
    return arXiv[start+1:], year

papers = []
for dd, dt in zip(lewenstein.find_all("dd"), lewenstein.find_all("dt")):
    id_, year = extract_id_and_year(dt.find("a", attrs={"title": "Abstract"}).text)
    div = dd.find("div", class_="list-authors")
    subject = extract_subject(dd.find("span", class_="primary-subject").text)
    journal = dd.find("div", class_="list-journal-ref")
    if journal:
        names = [drop_punctuation(a.text) for a in div.find_all("a")]
        for i, name in enumerate(names):
            if re.search("M.* Lewenstein", name):
                if name not in true_lewenstein:
                    break
                else:
                    names[i] = "Maciej Lewenstein"
        else:
            papers.append([id_,
                           extract_title(dd.find("div", class_="list-title mathjax").text),
                           names, subject, year,
                           extract_journal(journal.text)])
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
We would be almost done if journal names were all entered the same way. Of course they were not. Let us try to standardize them: this is where we first combine two sources of data. We use the journal abbreviations table to default to the long title of the journal in our paper collection.
for i, paper in enumerate(papers):
    journal = paper[-1]
    long_name = abbs[abbs["Abbreviation"] == journal]
    if len(long_name) > 0:
        papers[i][-1] = long_name["Full Journal Title"].values[0]
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
There will still be some rotten apples:
def find_rotten_apples(paper_list):
    rotten_apples = []
    for paper in paper_list:
        match = ifs[ifs["Full Journal Title"] == paper[-1]]
        if len(match) == 0:
            rotten_apples.append(paper[-1])
    return sorted(rotten_apples)

rotten_apples = find_rotten_apples(papers)
rotten_apples
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Now you start to feel the pain of being a data scientist. The sloppiness of manual data entry is unbounded. Your duty is to clean up this mess. A quick fix is to tinker with the drop_punctuation function to repair the mangled encoding used by JCR. Then go through the creation of the papers array and the standardization again.
def drop_punctuation(string):
    result = string.replace(".", " ")
    result = result.replace(",", " ")
    result = result.replace("(", " ")
    result = result.replace(": ", "-")
    return " ".join(result.split())
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Exercise 3. Cut the number of rotten apples in half by defining a replacement dictionary and doing another round of standardization.
len(rotten_apples)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
It is the same drill with our other contender, except that the short version of his name is uniquely his. On the other hand, that single accent in the surname introduces N+1 spelling variants, which we should standardize.
acin = get_search_result("Acin_A")
for dd, dt in zip(acin.find_all("dd"), acin.find_all("dt")):
    id_, year = extract_id_and_year(dt.find("a", attrs={"title": "Abstract"}).text)
    div = dd.find("div", class_="list-authors")
    subject = extract_subject(dd.find("span", class_="primary-subject").text)
    journal = dd.find("div", class_="list-journal-ref")
    if journal:
        names = [drop_punctuation(a.text) for a in div.find_all("a")]
        journal = extract_journal(journal.text)
        long_name = abbs[abbs["Abbreviation"] == journal]
        if len(long_name) > 0:
            journal = long_name["Full Journal Title"].values[0]
        papers.append([id_,
                       extract_title(dd.find("div", class_="list-title mathjax").text),
                       names, subject, year, journal])

for paper in papers:
    names = paper[2]
    for i, name in enumerate(names):
        if re.search("A.* Ac.n", name):
            names[i] = "Antonio Acín"

rotten_apples = find_rotten_apples(papers)
rotten_apples
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Finally, we do another combination of sources. We merge the data set with the table of Impact Factors. The merge will be done based on the full journal titles.
db = pd.merge(pd.DataFrame(papers, columns=["arXiv", "Title", "Authors", "Primary Subject",
                                            "Year", "Full Journal Title"]),
              ifs, how="inner", on=["Full Journal Title"])
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Since the two of them co-authored papers, there are duplicates. They are easy to drop, since they share the same arXiv ID:
db = db.drop_duplicates(subset="arXiv")
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Notice that we lost a few papers:
print(len(papers), len(db))
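One way to see which journal titles fell through would be to redo the merge as a left join with an indicator column and inspect the rows tagged "left_only". Here is a toy sketch with made-up frames and values; the real columns carry the same names:

```python
import pandas as pd

# Stand-ins for our paper list and the Impact Factor table
left = pd.DataFrame({"Full Journal Title": ["NATURE", "MYSTERY JOURNAL"]})
right = pd.DataFrame({"Full Journal Title": ["NATURE"],
                      "Journal Impact Factor": [41.456]})  # illustrative value

# indicator=True adds a "_merge" column telling where each row came from
merged = pd.merge(left, right, how="left", on="Full Journal Title", indicator=True)
unmatched = merged.loc[merged["_merge"] == "left_only", "Full Journal Title"]
print(list(unmatched))  # → ['MYSTERY JOURNAL']
```

Running the same pattern on our real frames would list exactly the journal names that still need standardizing.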
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
This can happen for one of two reasons: the papers were published in journals that are not in JCR (very unlikely), or we failed to match the full journal name (very likely). It is getting tedious, so we leave it here for now.

5. Visual analysis

Since we only focus on two authors, we can add an additional column to identify who is who. We also care about the co-authored papers.
def identify_key_authors(authors):
    if "Maciej Lewenstein" in authors and "Antonio Acín" in authors:
        return "AAML"
    elif "Maciej Lewenstein" in authors:
        return "ML"
    else:
        return "AA"

db["Group"] = db["Authors"].apply(identify_key_authors)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Let's start plotting distributions:
groups = ["AA", "ML", "AAML"]
fig, ax = plt.subplots(ncols=1)
for group in groups:
    data = db[db["Group"] == group]["Journal Impact Factor"]
    sns.distplot(data, kde=False, label=group)
ax.legend()
ax.set_yscale("log")
plt.show()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
The logarithmic scale makes the raw number of papers appear more balanced, which is fair given the difference in age between the two authors. A single Nature paper makes a great outlier:
db[db["Journal Impact Factor"] == db["Journal Impact Factor"].max()]
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Actually, Toni has another Nature paper, but that is not on arXiv yet. Not all of Maciej's papers are on arXiv either, especially not the old ones. We can do the same plots with subjects:
subjects = db["Primary Subject"].drop_duplicates()
fig, ax = plt.subplots(ncols=1)
for subject in subjects:
    data = db[db["Primary Subject"] == subject]["Journal Impact Factor"]
    sns.distplot(data, kde=False, label=subject)
ax.legend()
ax.set_yscale("log")
plt.show()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
You are safe with quant-ph and quantum gases, but stay clear of atom physics. It is amusing to restrict the histogram to Professor Acín's subset:
fig, ax = plt.subplots(ncols=1)
for subject in subjects:
    data = db[(db["Primary Subject"] == subject) & (db["Group"] != "ML")]
    if len(data) > 1:
        sns.distplot(data["Journal Impact Factor"], kde=False, label=subject)
ax.legend()
ax.set_yscale("log")
plt.show()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
His topics are somewhat predictable. Let's add one more column to indicate the number of authors:
db["#Authors"] = db["Authors"].apply(len)
sns.stripplot(x="#Authors", y="Journal Impact Factor", data=db)
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
How about the title? We can look at both its length in characters and its number of words:
db["Length of Title"] = db["Title"].apply(len)
db["Number of Words in Title"] = db["Title"].apply(lambda x: len(x.split()))
fig, axes = plt.subplots(ncols=2, figsize=(12, 5))
sns.stripplot(x="Length of Title", y="Journal Impact Factor", data=db, ax=axes[0])
sns.stripplot(x="Number of Words in Title", y="Journal Impact Factor", data=db, ax=axes[1])
plt.show()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
Do our authors maintain IF over time?
fig, axes = plt.subplots(nrows=2, figsize=(10, 5))
data = db[db["Group"] != "ML"]
sns.stripplot(x="Year", y="Journal Impact Factor", data=data, ax=axes[0])
axes[0].set_title("AA")
data = db[db["Group"] != "AA"].sort_values(by="Year")
sns.stripplot(x="Year", y="Journal Impact Factor", data=data, ax=axes[1])
axes[1].set_title("ML")
plt.tight_layout()
plt.show()
Archiv_Session_Spring_2017/Tutorials/Advanced_Data_Science.ipynb
peterwittek/qml-rg
gpl-3.0
We start with a simple hello world. This code plots the numbers $[0, 10)$ and adds the title "Hello zmpl". Notice that you don't need to call zplt.show to see the results.
zplt.clf()
zplt.plot(range(10))
zplt.title('Hello zmpl!')
examples/zmpl_demo.ipynb
willyd/zmpl
mit
Now we can achieve some interactivity very simply, without using the ion and pause functions. This code generates a random 100 by 100 pixel image on each iteration and displays it using imshow.
# disable auto draw
zplt.auto_draw(False)
for i in range(10):
    zplt.clf()
    data = np.random.random((100, 100, 3))
    zplt.hold(True)
    zplt.imshow(data)
    zplt.plot(range(25, 76), range(25, 76), 'k-', linewidth=2.0)
    zplt.axis('square')
    zplt.axis('tight')
    zplt.hold(False)
    zplt.draw()  # redraw explicitly, since auto draw is disabled
examples/zmpl_demo.ipynb
willyd/zmpl
mit
We can also use some module objects like figure and axes
# enable auto draw
zplt.auto_draw(True)
f1 = zplt.figure(1)
ax1 = f1.add_subplot(111)
ax1.plot(range(10), 'g-')
zplt.title('Figure 1')
f2 = zplt.figure(2)
ax2 = f2.add_subplot(111)
ax2.plot(range(20), 'r-')
zplt.title('Figure 2')
examples/zmpl_demo.ipynb
willyd/zmpl
mit
The objects are special proxy types that are created on the fly by our simple RPC system.
print(type(f1))
print(dir(f1)[:20])
print(type(ax1))
print(dir(ax1)[:20])
examples/zmpl_demo.ipynb
willyd/zmpl
mit
1.2. BigQuery Storage & Spark SQL - Python

Create Dataproc Cluster with Jupyter

This notebook is designed to be run on Google Cloud Dataproc. Follow the links below for instructions on how to create a Dataproc cluster with the Jupyter component installed.

Tutorial - Install and run a Jupyter notebook on a Dataproc cluster
Blog post - Apache Spark and Jupyter Notebooks made easy with Dataproc component gateway

Python 3 Kernel

Use a Python 3 kernel (not PySpark) to allow you to configure the SparkSession in the notebook and include the spark-bigquery-connector required to use the BigQuery Storage API.

Scala Version

Check which version of Scala you are running, so you can include the correct spark-bigquery-connector jar:
!scala -version
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Create Spark Session

Include the correct version of the spark-bigquery-connector jar. If you are using Scala 2.11, use 'gs://spark-lib/bigquery/spark-bigquery-latest.jar'; if you are using Scala 2.12, use 'gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar'.
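The version rule above can be encoded in a small helper. This is only a sketch around the two URLs quoted in the text, keyed on the output of scala -version:

```python
def connector_jar(scala_version):
    """Pick the spark-bigquery-connector jar matching a Scala version string."""
    if scala_version.startswith("2.12"):
        return "gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar"
    return "gs://spark-lib/bigquery/spark-bigquery-latest.jar"

print(connector_jar("2.12.10"))  # → gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar
```

The returned path would be passed to the 'spark.jars' config option when building the session below.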
from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName('1.2. BigQuery Storage & Spark SQL - Python') \
    .config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar') \
    .getOrCreate()
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Read BigQuery table into Spark Dataframe

Use filter() to query data from a partitioned table.
table = "bigquery-public-data.wikipedia.pageviews_2020"
df_wiki_pageviews = spark.read \
    .format("bigquery") \
    .option("table", table) \
    .option("filter", "datehour >= '2020-03-01' AND datehour < '2020-03-02'") \
    .load()
df_wiki_pageviews.printSchema()
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Create temp table

Create a temp table to be used in Spark SQL queries:
df_wiki_pageviews.createOrReplaceTempView("wiki_pageviews")
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Select required columns and apply a filter using WHERE
df_wiki_en = spark.sql("""
SELECT title, wiki, views
FROM wiki_pageviews
WHERE views > 1000 AND wiki IN ('en', 'en.m')
""").cache()
df_wiki_en
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Create a wiki en pageviews table
df_wiki_en.createOrReplaceTempView("wiki_en")
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
Group by title and find top pages by page views
df_wiki_en_totals = spark.sql("""
SELECT title, SUM(views) AS total_views
FROM wiki_en
GROUP BY title
ORDER BY total_views DESC
""")
df_wiki_en_totals
notebooks/python/1.2. BigQuery Storage & Spark SQL - Python.ipynb
GoogleCloudDataproc/cloud-dataproc
apache-2.0
By default, cross_val_score will use StratifiedKFold for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data points belonging to class 0, that would mean that in each fold, 90% of the data points would belong to class 0. If you just used KFold cross-validation, it is likely that you would generate a split that only contains class 0. It is generally a good idea to use StratifiedKFold whenever you do classification. StratifiedKFold would also remove our need to shuffle iris. Let's see what kinds of folds it generates on the unshuffled iris dataset. Each cross-validation class is a generator of sets of training and test indices:
cv = StratifiedKFold(n_splits=5)
for train, test in cv.split(X, y):
    print(test)
Day_1_Scientific_Python/scikit-learn/13-Cross-Validation.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds. This way, the class ratios are preserved. Let's visualize the split:
def plot_cv(cv, y):
    masks = []
    X = np.ones((len(y), 1))
    for train, test in cv.split(X, y):
        mask = np.zeros(len(y), dtype=bool)
        mask[test] = 1
        masks.append(mask)
    plt.matshow(masks)

plot_cv(StratifiedKFold(n_splits=5), iris.target)
Day_1_Scientific_Python/scikit-learn/13-Cross-Validation.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause
For comparison, again the standard KFold, that ignores the labels:
plot_cv(KFold(n_splits=5), iris.target)
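To make the danger concrete without any plotting, here is a pure-Python sketch of 3-fold splitting on labels sorted by class (50 samples per class, like the unshuffled iris target). Every test fold consists of a class that the corresponding training portion never saw:

```python
# Labels sorted by class, as in the unshuffled iris dataset
labels = [0] * 50 + [1] * 50 + [2] * 50

n_splits = 3
fold = len(labels) // n_splits
for k in range(n_splits):
    test = labels[k * fold:(k + 1) * fold]               # one contiguous chunk
    train = labels[:k * fold] + labels[(k + 1) * fold:]  # everything else
    print(k, set(test), set(train))  # the test class never appears in train
```

A classifier evaluated on such folds would score zero on every one of them, which is exactly why StratifiedKFold (or at least shuffling) matters here.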
Day_1_Scientific_Python/scikit-learn/13-Cross-Validation.ipynb
paris-saclay-cds/python-workshop
bsd-3-clause