Now let's use the above arrangement of axes as a template for the figurefirst:figure objects that we have drawn in an svg file. Below is a rendering of the svg file showing where we would like the templated sets of axes to go.
display(SVG('mpl_fig_for_templates.svg'))
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
FlyRanch/figurefirst
mit
Now we call add_mpl_fig_to_figurefirst_svg with the necessary arguments to add a layer to the above svg file. This new layer is similar to the layer created previously using mpl_fig_to_figurefirst_svg, except that instead of placing the layer into an empty svg, we are adding it to an existing svg. The one trick being used here is that the svg file mpl_fig_for_templates.svg has figurefirst:figure labels that point to a non-existent template. If you inspect the svg file in Inkscape's XML editor, you will find that each rectangle has a figurefirst:figure tag with a figurefirst:template attribute pointing to an mpl_template. By setting figurefirst_figure_name to mpl_template we insert the template that those figurefirst:figure tags are looking for. <em>You can ignore the warning that appears - it is printed because the first time the layout is loaded it can't find the templates.</em>
fifi_svg_filename = 'mpl_fig_for_templates.svg'
mpl_fig = fig
output_filename = 'mpl_fig_for_templates_output.svg'

layout = fifi.mpl_fig_to_figurefirst_svg.add_mpl_fig_to_figurefirst_svg(
    fifi_svg_filename, mpl_fig, output_filename,
    design_layer_name='mpl_design_layer',
    figurefirst_figure_name='mpl_template')

plt.close('all')
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
FlyRanch/figurefirst
mit
Now we can load the layout that has the added layer generated from the Matplotlib figure, and make a sample plot.
layout = fifi.svg_to_axes.FigureLayout('mpl_fig_for_templates_output.svg',
                                       make_mplfigures=True)

# collect the group and axis names
# Note: you could also do this by noting your figurefirst:figure names
# from the layout file
import numpy as np
groups_and_axes = np.unique(np.ravel(list(layout.axes.keys())))
groups = [g for g in groups_and_axes if 'group' in g]
axes = [g for g in groups_and_axes if 'ax' in g]

# plot some random data
for group in groups:
    for axis in axes:
        ax = layout.axes[(group, axis)]
        ax.plot(np.random.random(10), np.random.random(10))
        ax.set_xticklabels([])  # to make plot less ugly
        ax.set_yticklabels([])  # to make plot less ugly
    # add the figure (group) to the layout as a new layer
    layout.append_figure_to_layer(layout.figures[group], group,
                                  cleartarget=True, save_traceback=True)

## Hide the design layers and save the new svg file ##########################
layout.set_layer_visibility(inkscape_label='template_layout', vis=False)
layout.set_layer_visibility(inkscape_label='mpl_design_layer', vis=False)
layout.write_svg('mpl_fig_for_templates_output_with_plots.svg')
plt.close('all')
display(SVG('mpl_fig_for_templates_output_with_plots.svg'))
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
FlyRanch/figurefirst
mit
This code uses the keyword if to tell Python that we want to make a choice. If the test that follows the if statement is true, the body of the if (i.e., the lines indented underneath it) is executed. If the test is false, the body of the else is executed instead. Only one or the other is ever executed. Conditional statements don't have to include an else; if there isn't one, Python simply does nothing when the test is false:
num = 53
print('before conditional...')
if num > 100:
    print('53 is greater than 100')
print('...after conditional')
02-Python1/02-Python-1-Logic_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
We can also chain several tests together using elif, which is short for “else if”. The following Python code uses elif to print the sign of a number.
num = -3
if num > 0:
    print(num, "is positive")
elif num == 0:
    print(num, "is zero")
else:
    print(num, "is negative")
02-Python1/02-Python-1-Logic_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
One important thing to notice in the code above is that we use a double equals sign == to test for equality rather than a single equals sign because the latter is used to mean assignment. We can also combine tests using and and or. and is only true if both parts are true:
if (1 > 0) and (-1 > 0):
    print('both parts are true')
else:
    print('at least one part is false')
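For completeness (a small sketch not in the original lesson), or is true if at least one part is true:

```python
if (1 < 0) or (-1 < 0):
    print('at least one part is true')
else:
    print('both parts are false')
```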
02-Python1/02-Python-1-Logic_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
Checking out Data

Now that we've seen how conditionals work, we can use them to check for the suspicious features we saw in our inflammation data. In the first couple of plots, the maximum inflammation per day seemed to rise like a straight line, one unit per day. We can check for this inside the for loop we wrote with the following conditional:

```python
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
    print('Suspicious looking maxima!')
```

We also saw a different problem in the third dataset; the minima per day were all zero (looks like a healthy person snuck into our study). We can also check for this with an elif condition:

```python
elif data.min(axis=0).sum() == 0:
    print('Minima add up to zero!')
```

And if neither of these conditions is true, we can use else to give the all-clear:

```python
else:
    print('Seems OK!')
```

Let's test that out, then!
import numpy as np

data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
    print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')

data = np.loadtxt(fname='data/inflammation-03.csv', delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
    print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')
02-Python1/02-Python-1-Logic_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
<a id='strings'></a> Strings In Python, human language text gets represented as a string. These contain sequential sets of characters and they are offset by quotation marks, either double (") or single ('). We will explore different kinds of operations in Python that are specific to human language objects, but it is useful to start by trying to see them as the computer does, as numerical representations.
# The iconic string
print("Hello, World!")

# Assign these strings to variables
a = "Hello"
b = 'World'

# Try out arithmetic operations.
# When we add strings we call it 'concatenation'
print(a + " " + b)
print(a * 5)

# Unlike a number that consists of a single value, a string is an ordered
# sequence of characters. We can find out the length of that sequence.
len("Hello, World!")

## EX. How long is the string below?
this_string = "It was the best of times; it was the worst of times."
len(this_string)
01-IntroToPython/.ipynb_checkpoints/00-PythonBasics-checkpoint.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='lists'></a>
Lists

The numbers and strings we have just looked at are the two basic data types that we will focus our attention on in this workshop. (In a few days, we will look at a third data type, boolean, which consists of just True/False values.) When we are working with just a few numbers or strings, it is easy to keep track of them, but as we collect more we will want a system to organize them. One such organizational system is a list. A list contains values (regardless of type) in order, and we can perform operations on it very similarly to the way we did with numbers. Below we start with lists in which each element is a string.
# Let's assign a couple lists to variables
list1 = ['Call', 'me', 'Ishmael']
list2 = ['In', 'the', 'beginning']

## Q. Predict what will happen when we perform the following operations
print(list1 + list2)
print(list1 * 5)

# As with a string, we can find out the length of a list
len(list1)

# Sometimes we just want a single value from the list at a time
print(list1[0])
print(list1[1])
print(list1[2])

# Or maybe we want the first few
print(list1[0:2])
print(list1[:2])

# Of course, lists can contain numbers or even a mix of numbers and strings
list3 = [7, 8, 9]
list4 = [7, 'ate', 9]

# And python is smart with numbers, so we can add them easily!
sum(list3)

## EX. Concatenate 'list1' and 'list2' into a single list.
## Retrieve the third element from the combined list.
## Retrieve the fourth through sixth elements from the combined list.
new_list = list1 + list2
new_list[2]    # third element
new_list[3:6]  # fourth through sixth elements
01-IntroToPython/.ipynb_checkpoints/00-PythonBasics-checkpoint.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='tricks'></a> A couple of useful tricks <a id='string_methods'></a> String Methods The creators of Python recognize that human language has many important yet idiosyncratic features, so they have tried to make it easy for us to identify and manipulate them. For example, in the demonstration at the very beginning of the workshop, we referred to the idea of the suffix: the final letters of a word tell us something about its grammatical role and potentially the author's argument. We can analyze or manipulate certain features of a string using its <i>methods</i>. These are basically internal functions that every string automatically possesses. Note that even though the method may transform the string at hand, they don't change it permanently!
# Let's assign a variable to perform methods upon
greeting = "Hello, World!"

# We saw the 'endswith' method at the very beginning
# Note the type of output that gets printed
greeting.startswith('H'), greeting.endswith('d')

# We can check whether the string is a letter or a number
this_string = 'f'
this_string.isalpha()

# When there are multiple characters, it checks whether *all*
# of the characters belong to that category
greeting.isalpha(), greeting.isdigit()

# Similarly, we can check whether the string is lower or upper case
greeting.islower(), greeting.isupper(), greeting.istitle()

# Sometimes we want not just to check, but to change the string
greeting.lower(), greeting.upper()

# The case of the string hasn't changed!
greeting

# But if we want to permanently make it lower case we re-assign it
greeting = greeting.lower()
greeting

# Oh hey. And strings are kind of like lists, so we can slice them similarly
greeting[:3]

# Strings may be like lists of characters, but as humans we often treat them as
# lists of words. We can tell the computer to perform that conversion.
greeting.split()

## EX. Return the second through eighth characters in 'greeting'
## EX. Split the string below into a list of words and assign this to a new variable
## Note: A slash at the end of a line allows a string to continue unbroken onto the next
new_string = "It seems very strange that one must turn back, \
and be transported to the very beginnings of history, \
in order to arrive at an understanding of humanity as it is at present."

print(greeting[1:8])

new_string_list = new_string.split()
new_string_list
01-IntroToPython/.ipynb_checkpoints/00-PythonBasics-checkpoint.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='list_methods'></a> List Comprehension List comprehensions are a fairly advanced programming technique that we will spend more time talking about tomorrow. For now, you can think of them as list filters. Often, we don't need every value in a list, just a few that fulfill certain criteria.
# 'list1' had contained three words, two of which were in title case.
# We can automatically return those words using a list comprehension
[word for word in list1 if word.istitle()]

# Or we can include all the words in the list but just take their first letters
[word[0] for word in list1]

for word in list1:
    print(word[0])

## EX. Using the list of words you produced by splitting 'new_string', create
## a new list that contains only the words whose last letter is "y"
y_list = [word for word in new_string_list if word.endswith('y')]
print(y_list)

## EX. Create a new list that contains the first letter of each word.
first_letter = [word[0] for word in new_string_list]
print(first_letter)

## EX. Create a new list that contains only words longer than two letters.
long_words = [word for word in new_string_list if len(word) > 2]
print(long_words)
01-IntroToPython/.ipynb_checkpoints/00-PythonBasics-checkpoint.ipynb
lknelson/text-analysis-2017
bsd-3-clause
which uses pickle internally, optionally mmap'ing the model's internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing.

In addition, you can load models created by the original C tool, both using its text and binary formats:

```python
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.txt', binary=False)
# using gzipped/bz2 input works too, no need to unzip:
model = gensim.models.Word2Vec.load_word2vec_format('/tmp/vectors.bin.gz', binary=True)
```

Online training / Resuming training

Advanced users can load a model and continue training it with more sentences and new vocabulary words:
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [['Advanced', 'users', 'can', 'load', 'a', 'model',
                   'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences)

# cleaning up temp
os.close(fs)
os.remove(temp_path)
docs/notebooks/word2vec.ipynb
isohyt/gensim
lgpl-2.1
Your task is to provide a drop-down so the user of the program can select one of the 5 mailboxes. Upon running the interaction the program will:

- read the selected mailbox file a line at a time
- find any lines beginning with From:
- extract the email address from the From: line
- use the isEmail() function (provided below) to ensure it's a valid email address
- print the email address
- write the email to the emails file (for example enron-allen-inbox.txt would write the emails to enron-allen-emails.txt)

NOTE: any emails from the enron.com domain are internal and should be omitted from the list. We don't need to "mass market" to ourselves!

HINTS: We saw how to extract emails in the Lab. That approach should work here. Problem simplification is a good approach to this problem: start with a simpler problem and add one more piece of complexity with each iteration:

1. First solve a simpler problem in which you extract the single email from the enron-onemail-inbox.txt file.
2. Then re-write your program to use another mailbox like enron-allen-inbox.txt to make sure it prints multiple emails.
3. Next re-write your program to omit any emails with the domain enron.com.
4. Next not only print the emails, but write them back out to the file enron-allen-emails.txt.
5. Finally add the ipywidget drop-down where you can select a mailbox and save to a different emails file based on the inbox file name.

Part 1: Problem Analysis

Inputs:

TODO: Inputs

Outputs:

TODO: Outputs

Algorithm (Steps in Program):

```
TODO: Steps Here
```

Part 2: Code Solution

You may write your code in several cells, but place the complete, final working copy of your code solution within the single cell below. Only the code within that cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
# Step 2: Write code here
content/lessons/07-Files/HW-Files.ipynb
IST256/learn-python
mit
Part 3: Questions

1. Did you write your own user-defined function? For what purpose?

--== Double-Click and Write Your Answer Below This Line ==--

2. Explain how you might re-write this program to create one large file from all the mailboxes. No code, just explain it.

--== Double-Click and Write Your Answer Below This Line ==--

3. Devise an approach to remove duplicate emails from the output file. You don't have to write it as code, just explain it.

--== Double-Click and Write Your Answer Below This Line ==--

Part 4: Reflection

Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, citing specifics relevant to the activity to help the grader understand how you arrived at the code you submitted. Things to consider touching upon:

- Elaborate on the process itself. Did your original problem analysis work as designed?
- How many iterations did you go through before you arrived at the solution?
- Where did you struggle along the way and how did you overcome it?
- What did you learn from completing the assignment?
- What do you need to work on to get better?
- What was most valuable and least valuable about this exercise?
- Do you have any suggestions for improvements?

To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise. Keep your response to between 100 and 250 words.

--== Double-Click and Write Your Reflection Below Here ==--
# run this code to turn in your work! from coursetools.submission import Submission Submission().submit()
content/lessons/07-Files/HW-Files.ipynb
IST256/learn-python
mit
From the above info(), we can see that the columns Age, Cabin and Embarked have missing values. There are a few ways to handle missing values: ignore the rows with missing data, exclude the variable altogether, or substitute the missing values with the mean or median.

- Age: 80% of the data is available, and it seems an important variable, so we should not exclude it.
- Embarked: the port of embarkation doesn't seem interesting.
- Cabin: only 23% of the data is present, so we decided to exclude it.
- PassengerId, Name and Fare don't seem to contribute to any survival investigation.
# excluding some columns
for a in ['Ticket', 'Cabin', 'Embarked', 'Name', 'PassengerId', 'Fare']:
    if a in data.columns:
        del data[a]

print("Age median values by Sex and Pclass:")
# group by gender and class and take the median age, so we can fill NaNs
# with the corresponding group values instead
print(data.groupby(['Sex', 'Pclass'], as_index=False).median().loc[:, ['Sex', 'Pclass', 'Age']])

print("Age values for first 5 persons with missing Age:")
print(data.loc[data['Age'].isnull(), ['Age', 'Sex', 'Pclass']].head(5))

# apply transformation: Age missing values are filled with regard to Pclass and Sex:
data['Age'] = data.groupby(['Sex', 'Pclass'])['Age'].transform(lambda x: x.fillna(x.median()))
print(data.loc[[5, 17, 19, 26, 28], ['Age', 'Sex', 'Pclass']].head(5))

# fill any remaining missing ages with the overall mean
data['Age'] = data['Age'].fillna(data['Age'].mean())
P6 data_viz/data/.ipynb_checkpoints/P2-checkpoint.ipynb
gangadhara691/gangadhara691.github.io
mit
Data Exploration and Visualization
data_s = data
survival_group = data_s.groupby('Survived')
survival_group.describe()
P6 data_viz/data/.ipynb_checkpoints/P2-checkpoint.ipynb
gangadhara691/gangadhara691.github.io
mit
From the above statistics - Youngest to survive: 0.42 - Youngest to die: 1.0 - Oldest to survive: 80.0 - Oldest to die: 74.0
import matplotlib.pyplot as plt
import seaborn as sns

# Set style for all graphs
#sns.set_style("light")
#sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
P6 data_viz/data/.ipynb_checkpoints/P2-checkpoint.ipynb
gangadhara691/gangadhara691.github.io
mit
From the above plot we can see that female passengers were given first preference, and that class mattered as well. Socio-economic standing was a factor in the survival rate of passengers by gender:

- Class 1 - female survival rate: 96.81%
- Class 1 - male survival rate: 36.89%
- Class 2 - female survival rate: 92.11%
- Class 2 - male survival rate: 15.74%
- Class 3 - female survival rate: 50.0%
- Class 3 - male survival rate: 13.54%

Were women and children given first preference to the lifeboats?
def group(d, v):
    if (d == 'female') and (v >= 18):
        return 'Woman'
    elif v < 18:
        return 'child'
    elif (d == 'male') and (v >= 18):
        return 'Man'

data['Category'] = data.apply(lambda row: group(row['Sex'], row['Age']), axis=1)
data.head(5)

# We are dividing the Age data into 3 buckets of (0-18), (18-40), (40-90)
# and labeling them as 'Childs', 'Adults', 'Seniors' respectively
data['group_age'] = pd.cut(data['Age'], bins=[0, 18, 40, 90],
                           labels=['Childs', 'Adults', 'Seniors'])

# finding mean survival rate grouped by 'group_age', 'Sex' and 'Pclass'
df = data.groupby(['group_age', 'Sex', 'Pclass'],
                  as_index=False).mean().loc[:, ['group_age', 'Sex', 'Pclass', 'Survived']]
df.to_csv("titanic_group_age.csv", sep=',', encoding='utf-8')

data_C = data.groupby(['Category', 'Parch']).mean()
data_C.sort_values("Survived")["Survived"]
data_C.to_csv("Category.csv", sep=',', encoding='utf-8')

data['Age_group'] = pd.cut(data['Age'], bins=range(0, 90, 10))
data_age = data.groupby(["Age_group"]).mean()
data_age.to_csv("Age_group.csv", sep=',', encoding='utf-8')

%run P2
P6 data_viz/data/.ipynb_checkpoints/P2-checkpoint.ipynb
gangadhara691/gangadhara691.github.io
mit
Load some example scores (output of a classifier) The example data (from binary classification), presented next, contains: - Column 1: scores or probas (output of predict_proba()) in the range [0, 1] - Column 2: target or actual outcome (y truth)
df_results = pd.read_csv('../data/classifier_prediction_scores.csv')
print('Number of rows:', df_results.shape[0])
df_results.head()
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Let's take a look at the scores distribution. As an output of the predict_proba(), the scores range is [0, 1].
df_results['scores'].hist(bins=50)
plt.ylabel('Frequency')
plt.xlabel('Scores')
plt.title('Distribution of Scores')
plt.xlim(0, 1)
plt.show()
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Classification Metrics

Accuracy score

The accuracy_score is the fraction (default) or the count (normalize=False) of correct predictions. It is given by:

$$ A = \frac{TP + TN}{TP + TN + FP + FN} $$

where TP is the True Positives, TN the True Negatives, FP the False Positives, and FN the False Negatives.

Disadvantages:
- Its use is not recommended on highly imbalanced datasets.
- You have to set a threshold on the output of the classifiers.
# Specifying the threshold above which the predicted label is considered 1:
threshold = 0.50

# Generate the predicted labels (above threshold = 1, below = 0)
predicted_outcome = [0 if k <= threshold else 1 for k in df_results['scores']]

print('Accuracy = %2.3f' % accuracy_score(df_results['target'], predicted_outcome))
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Confusion Matrix The confusion_matrix C provides several performance indicators: - C(0,0) - TN count - C(1,0) - FN count - C(0,1) - FP count - C(1,1) - TP count
# Get the confusion matrix:
confmat = confusion_matrix(y_true=df_results['target'], y_pred=predicted_outcome)

# Plot the confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.4)
for i in range(confmat.shape[0]):
    for j in range(confmat.shape[1]):
        ax.text(x=j, y=i, s=confmat[i, j], va='center', ha='center')
plt.xlabel('predicted label')
plt.ylabel('true label')
plt.title('Confusion Matrix')
plt.show()
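As a quick sanity check on the index layout listed above (a toy example with made-up labels, not the notebook's data), sklearn places TN at C(0,0), FP at C(0,1), FN at C(1,0) and TP at C(1,1):

```python
from sklearn.metrics import confusion_matrix

# By construction: 2 TN, 1 FP, 1 FN, 2 TP
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1]
C = confusion_matrix(y_true, y_pred)
print(C)  # [[2 1]
          #  [1 2]]
```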
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
As can be seen, the number of False Negatives is very high, which, depending on the business, could be harmful.

Precision, Recall and F1-score

Precision is the ability of the classifier not to label as positive a sample that is negative (i.e., a measure of result relevancy).

$$ P = \frac{T_P}{T_P+F_P} $$

Recall is the ability of the classifier to find all the positive samples (i.e., a measure of how many truly relevant results are returned).

$$ R = \frac{T_P}{T_P+F_N} $$

The F1 score can be interpreted as a weighted harmonic mean of precision and recall (in this case recall and precision are equally important).

$$ F_1 = 2\frac{P \times R}{P+R} $$

where $T_P$ is the true positives, $F_P$ the false positives, and $F_N$ the false negatives. Further information on precision, recall and f1-score.

First, let's check if our dataset has class imbalance:
df_results['target'].value_counts(normalize=True)
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Rather imbalanced! Approximately 83% of the labels are 0. Let's take a look at the other metrics more appropriate for this type of datasets:
print('Precision score = %1.3f' % precision_score(df_results['target'], predicted_outcome))
print('Recall score = %1.3f' % recall_score(df_results['target'], predicted_outcome))
print('F1 score = %1.3f' % f1_score(df_results['target'], predicted_outcome))
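As a sanity check that the formulas above match sklearn's implementation, precision, recall and F1 can be computed by hand from confusion counts on a toy set of labels (made-up data, not this notebook's):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

TP = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # 2
FP = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # 1
FN = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # 1

P = TP / (TP + FP)        # 2/3
R = TP / (TP + FN)        # 2/3
F1 = 2 * P * R / (P + R)  # harmonic mean, here also 2/3

assert abs(P - precision_score(y_true, y_pred)) < 1e-12
assert abs(R - recall_score(y_true, y_pred)) < 1e-12
assert abs(F1 - f1_score(y_true, y_pred)) < 1e-12
```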
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
As you can see, the results are actually not as good as the accuracy metric would suggest.

Receiver Operating Characteristic (ROC) and Area Under the ROC (AUROC)

The ROC is very common for binary classification problems. It is created by plotting the fraction of true positives out of the positives (TPR = true positive rate) vs. the fraction of false positives out of the negatives (FPR = false positive rate), at various threshold settings.

- The roc_curve function computes the Receiver Operating Characteristic (ROC) curve.
- The roc_auc_score function computes the area under the ROC curve, summarizing the curve information in one number.

Unlike the previous metrics, the ROC functions above require the actual scores/probabilities (and not the predicted labels). Further information on roc_curve and roc_auc_score. This metric is rather useful for imbalanced datasets.
# Data to compute the ROC curve (FPR and TPR):
fpr, tpr, thresholds = roc_curve(df_results['target'], df_results['scores'])

# The Area Under the ROC curve:
roc_auc = roc_auc_score(df_results['target'], df_results['scores'])

# Plot ROC Curve
plt.figure(figsize=(8, 6))
lw = 2
plt.plot(fpr, tpr, color='orange', lw=lw,
         label='ROC curve (AUROC = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--', label='random')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(
            0.0, self.input_nodes**-0.5,
            (self.input_nodes, self.hidden_nodes))
        self.weights_hidden_to_output = np.random.normal(
            0.0, self.hidden_nodes**-0.5,
            (self.hidden_nodes, self.output_nodes))
        self.lr = learning_rate

        #### Set self.activation_function to the implemented sigmoid function ####
        # Note: in Python, you can define a function with a lambda expression,
        # as shown below.
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))

        ### If the lambda code above is not something you're familiar with,
        # you can uncomment the following lines and put your
        # implementation there instead.
        #
        #def sigmoid(x):
        #    return 1 / (1 + np.exp(-x))
        #self.activation_function = sigmoid

    def train(self, features, targets):
        ''' Train the network on a batch of features and targets.

            Arguments
            ---------
            features: 2D array, each row is one data record, each column is a feature
            targets: 1D array of target values
        '''
        n_records = features.shape[0]
        delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
        delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
        for X, y in zip(features, targets):
            #### Forward pass ####
            X = X[None, :]
            # Hidden layer
            hidden_outputs = self.activation_function(X @ self.weights_input_to_hidden)
            # Output layer (identity activation)
            final_outputs = hidden_outputs @ self.weights_hidden_to_output

            #### Backward pass ####
            # Output layer error is the difference between desired target
            # and actual output.
            error = y - final_outputs
            # The output activation is f(x) = x, so its derivative is 1.
            output_error_term = error

            # Calculate the hidden layer's contribution to the error
            hidden_error = output_error_term @ self.weights_hidden_to_output.T

            # Backpropagated error terms
            hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)

            # Weight step (input to hidden)
            delta_weights_i_h += X.T @ hidden_error_term
            # Weight step (hidden to output)
            delta_weights_h_o += hidden_outputs.T @ output_error_term

        # Update the weights with a gradient descent step
        self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records
        self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records

    def run(self, features):
        ''' Run a forward pass through the network with input features.

            Arguments
            ---------
            features: 1D array of feature values
        '''
        #### Forward pass ####
        # Hidden layer
        hidden_inputs = features @ self.weights_input_to_hidden   # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)  # signals from hidden layer

        # Output layer
        final_inputs = hidden_outputs @ self.weights_hidden_to_output  # signals into final output layer
        final_outputs = final_inputs                                   # signals from final output layer

        return final_outputs


def MSE(y, Y):
    return np.mean((y - Y)**2)
first-neural-network/Your_first_neural_network.ipynb
gururajl/deep-learning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate the model's predictions will be.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, the model won't have enough capacity to learn; if it is too high, there are too many options for the direction the learning can take. The trick is to find the right balance in the number of hidden units you choose.
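The batch-sampling step of SGD described above can be sketched in isolation. This is a generic illustration using the standard-library `random` module (the project code below does the same thing with `np.random.choice` on a pandas index); the record count is just a placeholder:

```python
import random

def sample_batch(n_records, batch_size):
    """Draw a random minibatch of row indices, sampling with replacement.
    One SGD step then trains on just these rows instead of the full data set."""
    return [random.randrange(n_records) for _ in range(batch_size)]

batch = sample_batch(n_records=17000, batch_size=128)
print(len(batch))  # 128 indices into the training set
```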
import sys

### Set the hyperparameters here ###
iterations = 5500
learning_rate = 0.4
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train':[], 'validation':[]}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch, 'cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim(0, 0.5)
first-neural-network/Your_first_neural_network.ipynb
gururajl/deep-learning
mit
Exercises Exercise: In NSFG Cycles 6 and 7, the variable cmdivorcx contains the date of divorce for the respondent’s first marriage, if applicable, encoded in century-months. Compute the duration of marriages that have ended in divorce, and the duration, so far, of marriages that are ongoing. Estimate the hazard and survival curve for the duration of marriage. Use resampling to take into account sampling weights, and plot data from several resamples to visualize sampling error. Consider dividing the respondents into groups by decade of birth, and possibly by age at first marriage.
def CleanData(resp): """Cleans respondent data. resp: DataFrame """ resp.cmdivorcx.replace([9998, 9999], np.nan, inplace=True) resp["notdivorced"] = resp.cmdivorcx.isnull().astype(int) resp["duration"] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0 resp["durationsofar"] = (resp.cmintvw - resp.cmmarrhx) / 12.0 month0 = pd.to_datetime("1899-12-15") dates = [month0 + pd.DateOffset(months=cm) for cm in resp.cmbirth] resp["decade"] = (pd.DatetimeIndex(dates).year - 1900) // 10 CleanData(resp6) married6 = resp6[resp6.evrmarry == 1] CleanData(resp7) married7 = resp7[resp7.evrmarry == 1] # Solution def ResampleDivorceCurve(resps): """Plots divorce curves based on resampled data. resps: list of respondent DataFrames """ for _ in range(11): samples = [thinkstats2.ResampleRowsWeighted(resp) for resp in resps] sample = pd.concat(samples, ignore_index=True) PlotDivorceCurveByDecade(sample, color="#225EA8", alpha=0.1) thinkplot.Show(xlabel="years", axis=[0, 28, 0, 1]) # Solution def ResampleDivorceCurveByDecade(resps): """Plots divorce curves for each birth cohort. resps: list of respondent DataFrames """ for i in range(41): samples = [thinkstats2.ResampleRowsWeighted(resp) for resp in resps] sample = pd.concat(samples, ignore_index=True) groups = sample.groupby("decade") if i == 0: survival.AddLabelsByDecade(groups, alpha=0.7) EstimateSurvivalByDecade(groups, alpha=0.1) thinkplot.Config(xlabel="Years", ylabel="Fraction undivorced", axis=[0, 28, 0, 1]) # Solution def EstimateSurvivalByDecade(groups, **options): """Groups respondents by decade and plots survival curves. groups: GroupBy object """ thinkplot.PrePlot(len(groups)) for name, group in groups: _, sf = EstimateSurvival(group) thinkplot.Plot(sf, **options) # Solution def EstimateSurvival(resp): """Estimates the survival curve. 
resp: DataFrame of respondents returns: pair of HazardFunction, SurvivalFunction """ complete = resp[resp.notdivorced == 0].duration.dropna() ongoing = resp[resp.notdivorced == 1].durationsofar.dropna() hf = survival.EstimateHazardFunction(complete, ongoing) sf = hf.MakeSurvival() return hf, sf # Solution ResampleDivorceCurveByDecade([married6, married7])
solutions/chap13soln.ipynb
AllenDowney/ThinkStats2
gpl-3.0
In this equation: $\epsilon$ is the single particle energy. $\mu$ is the chemical potential, which is related to the total number of particles. $k$ is the Boltzmann constant. $T$ is the temperature in Kelvin. In the cell below, typeset this equation using LaTeX: YOUR ANSWER HERE Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
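One possible LaTeX rendering of the distribution, consistent with the symbols defined above and with the `fermidist` code below:

```latex
F(\epsilon) = \frac{1}{e^{(\epsilon - \mu)/kT} + 1}
```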
def fermidist(energy, mu, kT): """Compute the Fermi distribution at energy, mu and kT.""" # YOUR CODE HERE f=1/(np.exp((energy-mu)/kT)+1) return f assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033) assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0), np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532, 0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
assignments/midterm/InteractEx06.ipynb
JackDi/phys202-2015-work
mit
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use energies over the range $[0,10.0]$ and a suitable number of points. Choose an appropriate x and y limit for your visualization. Label your x and y axes and the overall visualization. Customize your plot in 3 other ways to make it effective and beautiful.
def plot_fermidist(mu, kT):
    # YOUR CODE HERE
    f = plt.figure(figsize=(9, 6))
    energy = np.linspace(0, 10, 100)
    plt.plot(energy, fermidist(energy, mu, kT), 'k')
    plt.xlim(0, 10)
    plt.ylim(0, 1)
    plt.tick_params(direction='out')
    plt.title('Fermi-Dirac Distribution over a Range of Energies for Constant mu and kT')
    plt.ylabel('Fermi Distribution')
    plt.xlabel('Energy')
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
assignments/midterm/InteractEx06.ipynb
JackDi/phys202-2015-work
mit
Use interact with plot_fermidist to explore the distribution: For mu use a floating point slider over the range $[0.0,5.0]$. For kT use a floating point slider over the range $[0.1,10.0]$.
# YOUR CODE HERE
interact(plot_fermidist, mu=(0.0, 5.0, 0.1), kT=(0.1, 10.0, 0.1))
assignments/midterm/InteractEx06.ipynb
JackDi/phys202-2015-work
mit
Make population dynamic model Basic parameters
pop_size = 100   # must equal the total haplotype count of the initial population below
seq_length = 10  # must equal the length of base_haplotype
alphabet = ['A', 'T', 'G', 'C']
base_haplotype = "AAAAAAAAAA"
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Setup a population of sequences Store this as a lightweight Dictionary that maps a string to a count. All the sequences together will have count N.
pop = {} pop["AAAAAAAAAA"] = 40 pop["AAATAAAAAA"] = 30 pop["AATTTAAAAA"] = 30 pop["AAATAAAAAA"]
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Add mutation Mutations occur each generation in each individual in every basepair.
mutation_rate = 0.0001 # per gen per individual per site
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Walk through the population and mutate basepairs. Use Poisson splitting to speed this up (you may be familiar with Poisson splitting from its use in the Gillespie algorithm). In the naive scenario A, we take each element and check whether an event occurs for each one. For example, with 100 elements, each with a 1% chance, this requires 100 random numbers. In the Poisson splitting scenario B, we draw a Poisson random number for the number of events that occur and distribute them randomly. In the above example, this will most likely involve 1 random number draw to see how many events occur and then a few more draws to see which elements are hit. First off, we need to draw the random total number of mutations.
def get_mutation_count(): mean = mutation_rate * pop_size * seq_length return np.random.poisson(mean)
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here we use Numpy's Poisson random number.
get_mutation_count()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
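The speed-up from Poisson splitting can be illustrated without NumPy. This standalone sketch (a Knuth-style Poisson sampler built from the standard library's `random`, not part of the notebook's own code) checks that one Poisson draw reproduces the expected event count of many per-element Bernoulli checks:

```python
import math
import random

def naive_event_count(n_elements, p):
    # Scenario A: one random number per element
    return sum(1 for _ in range(n_elements) if random.random() < p)

def poisson_event_count(n_elements, p):
    # Scenario B: a single Poisson draw with the same mean (Knuth's algorithm)
    mean = n_elements * p
    threshold, k, prod = math.exp(-mean), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

trials = 10000
avg_a = sum(naive_event_count(100, 0.01) for _ in range(trials)) / trials
avg_b = sum(poisson_event_count(100, 0.01) for _ in range(trials)) / trials
print(avg_a, avg_b)  # both close to the mean of 1 event per generation
```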
We need to get random haplotype from the population.
pop.keys()

[x/float(pop_size) for x in pop.values()]

def get_random_haplotype():
    haplotypes = list(pop.keys())
    frequencies = [x/float(pop_size) for x in pop.values()]
    total = sum(frequencies)
    frequencies = [x / total for x in frequencies]
    return np.random.choice(haplotypes, p=frequencies)
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here we use Numpy's weighted random choice.
get_random_haplotype()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here, we take a supplied haplotype and mutate a site at random.
def get_mutant(haplotype): site = np.random.randint(seq_length) possible_mutations = list(alphabet) possible_mutations.remove(haplotype[site]) mutation = np.random.choice(possible_mutations) new_haplotype = haplotype[:site] + mutation + haplotype[site+1:] return new_haplotype get_mutant("AAAAAAAAAA")
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Putting things together, in a single mutation event, we grab a random haplotype from the population, mutate it, decrement its count, and then check if the mutant already exists in the population. If it does, increment this mutant haplotype; if it doesn't create a new haplotype of count 1.
def mutation_event():
    haplotype = get_random_haplotype()
    if pop[haplotype] > 1:
        pop[haplotype] -= 1
        new_haplotype = get_mutant(haplotype)
        if new_haplotype in pop:
            pop[new_haplotype] += 1
        else:
            pop[new_haplotype] = 1

mutation_event()

pop
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
To create all the mutations that occur in a single generation, we draw the total count of mutations and then iteratively add mutation events.
def mutation_step(): mutation_count = get_mutation_count() for i in range(mutation_count): mutation_event() mutation_step() pop
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Add genetic drift Given a list of haplotype frequencies currently in the population, we can take a multinomial draw to get haplotype counts in the following generation.
def get_offspring_counts(): haplotypes = pop.keys() frequencies = [x/float(pop_size) for x in pop.values()] return list(np.random.multinomial(pop_size, frequencies))
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here we use Numpy's multinomial random sample.
get_offspring_counts()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
We then need to assign this new list of haplotype counts to the pop dictionary. To save memory and computation, if a haplotype goes to 0, we remove it entirely from the pop dictionary.
def offspring_step():
    counts = get_offspring_counts()
    for (haplotype, count) in zip(list(pop.keys()), counts):
        if (count > 0):
            pop[haplotype] = count
        else:
            del pop[haplotype]

offspring_step()

pop
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Combine and iterate Each generation is simply a mutation step where a random number of mutations are thrown down, and an offspring step where haplotype counts are updated.
def time_step(): mutation_step() offspring_step()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Can iterate this over a number of generations.
generations = 500 def simulate(): for i in range(generations): time_step() simulate() pop
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Record We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
pop = {"AAAAAAAAAA": pop_size} history = [] def simulate(): clone_pop = dict(pop) history.append(clone_pop) for i in range(generations): time_step() clone_pop = dict(pop) history.append(clone_pop) simulate() pop history[0] history[1] history[2]
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Analyze trajectories Calculate diversity Here, diversity in population genetics is usually shorthand for the statistic &pi;, which measures pairwise differences between random individuals in the population. &pi; is usually measured as substitutions per site.
pop
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
First, we need to calculate the number of differences per site between two arbitrary sequences.
def get_distance(seq_a, seq_b): diffs = 0 length = len(seq_a) assert len(seq_a) == len(seq_b) for chr_a, chr_b in zip(seq_a, seq_b): if chr_a != chr_b: diffs += 1 return diffs / float(length) get_distance("AAAAAAAAAA", "AAAAAAAAAB")
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
We calculate diversity as a weighted average between all pairs of haplotypes, weighted by pairwise haplotype frequency.
def get_diversity(population):
    haplotypes = list(population.keys())
    haplotype_count = len(haplotypes)
    diversity = 0
    for i in range(haplotype_count):
        for j in range(haplotype_count):
            haplotype_a = haplotypes[i]
            haplotype_b = haplotypes[j]
            frequency_a = population[haplotype_a] / float(pop_size)
            frequency_b = population[haplotype_b] / float(pop_size)
            frequency_pair = frequency_a * frequency_b
            diversity += frequency_pair * get_distance(haplotype_a, haplotype_b)
    return diversity

get_diversity(pop)

def get_diversity_trajectory():
    trajectory = [get_diversity(generation) for generation in history]
    return trajectory

get_diversity_trajectory()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
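The &pi; calculation can be sanity-checked on a toy population by hand. This self-contained sketch (independent of the simulation's `pop` dictionary, so the expected value is known exactly) averages pairwise differences weighted by haplotype frequency:

```python
def pairwise_distance(a, b):
    # differences per site between two equal-length sequences
    return sum(x != y for x, y in zip(a, b)) / len(a)

def diversity(population):
    # pi: frequency-weighted average distance over all ordered pairs
    total = sum(population.values())
    pi = 0.0
    for hap_a, count_a in population.items():
        for hap_b, count_b in population.items():
            pi += (count_a / total) * (count_b / total) * pairwise_distance(hap_a, hap_b)
    return pi

# two haplotypes at frequency 0.5 differing at 1 of 2 sites:
# pi = 2 * (0.5 * 0.5 * 0.5) = 0.25
toy = {"AA": 1, "AB": 1}
print(diversity(toy))  # 0.25
```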
Plot diversity Here, we use matplotlib for all Python plotting.
%matplotlib inline import matplotlib.pyplot as plt import matplotlib as mpl
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here, we make a simple line plot using matplotlib's plot function.
plt.plot(get_diversity_trajectory())
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here, we style the plot a bit with x and y axes labels.
def diversity_plot(): mpl.rcParams['font.size']=14 trajectory = get_diversity_trajectory() plt.plot(trajectory, "#447CCD") plt.ylabel("diversity") plt.xlabel("generation") diversity_plot()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Analyze and plot divergence In population genetics, divergence is generally the number of substitutions away from a reference sequence. In this case, we can measure the average distance of the population to the starting haplotype. Again, this will be measured in terms of substitutions per site.
def get_divergence(population): haplotypes = population.keys() divergence = 0 for haplotype in haplotypes: frequency = population[haplotype] / float(pop_size) divergence += frequency * get_distance(base_haplotype, haplotype) return divergence def get_divergence_trajectory(): trajectory = [get_divergence(generation) for generation in history] return trajectory get_divergence_trajectory() def divergence_plot(): mpl.rcParams['font.size']=14 trajectory = get_divergence_trajectory() plt.plot(trajectory, "#447CCD") plt.ylabel("divergence") plt.xlabel("generation") divergence_plot()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Plot haplotype trajectories We also want to directly look at haplotype frequencies through time.
def get_frequency(haplotype, generation): pop_at_generation = history[generation] if haplotype in pop_at_generation: return pop_at_generation[haplotype]/float(pop_size) else: return 0 get_frequency("AAAAAAAAAA", 4) def get_trajectory(haplotype): trajectory = [get_frequency(haplotype, gen) for gen in range(generations)] return trajectory get_trajectory("AAAAAAAAAA")
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
We want to plot all haplotypes seen during the simulation.
def get_all_haplotypes(): haplotypes = set() for generation in history: for haplotype in generation: haplotypes.add(haplotype) return haplotypes get_all_haplotypes()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Here is a simple plot of their overall frequencies.
haplotypes = get_all_haplotypes() for haplotype in haplotypes: plt.plot(get_trajectory(haplotype)) plt.show() colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD", "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60", "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133", "#E4632E", "#DF4327", "#DB2122"] colors_lighter = ["#A567AF", "#8F69C1", "#8474D1", "#7F85DB", "#7F97DF", "#82A8DD", "#88B5D5", "#8FC0C9", "#97C8BC", "#A1CDAD", "#ACD1A0", "#B9D395", "#C6D38C", "#D3D285", "#DECE81", "#E8C77D", "#EDBB7A", "#EEAB77", "#ED9773", "#EA816F", "#E76B6B"]
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
We can use stackplot to stack these trajectories on top of each other to get a better picture of what's going on.
def stacked_trajectory_plot(xlabel="generation"): mpl.rcParams['font.size']=18 haplotypes = get_all_haplotypes() trajectories = [get_trajectory(haplotype) for haplotype in haplotypes] plt.stackplot(range(generations), trajectories, colors=colors_lighter) plt.ylim(0, 1) plt.ylabel("frequency") plt.xlabel(xlabel) stacked_trajectory_plot()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Plot SNP trajectories
def get_snp_frequency(site, generation): minor_allele_frequency = 0.0 pop_at_generation = history[generation] for haplotype in pop_at_generation.keys(): allele = haplotype[site] frequency = pop_at_generation[haplotype] / float(pop_size) if allele != "A": minor_allele_frequency += frequency return minor_allele_frequency get_snp_frequency(3, 5) def get_snp_trajectory(site): trajectory = [get_snp_frequency(site, gen) for gen in range(generations)] return trajectory get_snp_trajectory(3)
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Find all variable sites.
import itertools

def get_all_snps():
    snps = set()
    for generation in history:
        for haplotype in generation:
            for site in range(seq_length):
                if haplotype[site] != "A":
                    snps.add(site)
    return snps

def snp_trajectory_plot(xlabel="generation"):
    mpl.rcParams['font.size']=18
    snps = get_all_snps()
    trajectories = [get_snp_trajectory(snp) for snp in snps]
    data = []
    for trajectory, color in zip(trajectories, itertools.cycle(colors)):
        data.append(range(generations))
        data.append(trajectory)
        data.append(color)
    plt.plot(*data)
    plt.ylim(0, 1)
    plt.ylabel("frequency")
    plt.xlabel(xlabel)

snp_trajectory_plot()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Scale up Here, we scale up to more interesting parameter values.
pop_size = 50 seq_length = 100 generations = 500 mutation_rate = 0.0001 # per gen per individual per site
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
2 * pop_size * seq_length * mutation_rate base_haplotype = ''.join(["A" for i in range(seq_length)]) pop.clear() del history[:] pop[base_haplotype] = pop_size simulate() plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k') plt.subplot2grid((3,2), (0,0), colspan=2) stacked_trajectory_plot(xlabel="") plt.subplot2grid((3,2), (1,0), colspan=2) snp_trajectory_plot(xlabel="") plt.subplot2grid((3,2), (2,0)) diversity_plot() plt.subplot2grid((3,2), (2,1)) divergence_plot()
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Create histograms h1, h2, ... have their alternatives in physt.compat.dask. They should work similarly, although they are not complete and unexpected errors may occur.
from physt.compat.dask import h1 as d1 from physt.compat.dask import h2 as d2 # Use chunks to create a 1D histogram ha = d1(chunked2, "fixed_width", bin_width=0.2) check_ha = h1(million2, "fixed_width", bin_width=0.2) ok = (ha == check_ha) print("Check: ", ok) ha.plot() ha # Use chunks to create a 2D histogram hb = d2(chunked, chunked2, "fixed_width", bin_width=.2, axis_names=["x", "y"]) check_hb = h2(million, million2, "fixed_width", bin_width=.2, axis_names=["x", "y"]) hb.plot(show_zero=False, cmap="rainbow") ok = (hb == check_hb) print("Check: ", ok) hb # And another cross-check hh = hb.projection("y") hh.plot() print("Check: ", np.array_equal(hh.frequencies, ha.frequencies)) # Just frequencies # Use dask for normal arrays (will automatically split array to chunks) d1(million2, "fixed_width", bin_width=0.2) == ha
doc/dask.ipynb
janpipek/physt
mit
Some timings Your results may vary substantially. These numbers are just for illustration, on a 4-core (8-thread) machine. The real gain comes when we have data that don't fit into memory. Efficiency
# Standard %time h1(million2, "fixed_width", bin_width=0.2) # Same array, but using dask %time d1(million2, "fixed_width", bin_width=0.2) # Most efficient: dask with already chunked data %time d1(chunked2, "fixed_width", bin_width=0.2)
doc/dask.ipynb
janpipek/physt
mit
Different scheduling
%time d1(chunked2, "fixed_width", bin_width=0.2) %%time # Hyper-threading or not? graph, name = d1(chunked2, "fixed_width", bin_width=0.2, compute=False) dask.threaded.get(graph, name, num_workers=4) # Multiprocessing not so efficient for small arrays? %time d1(chunked2, "fixed_width", bin_width=0.2, dask_method=dask.multiprocessing.get)
doc/dask.ipynb
janpipek/physt
mit
High level API We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution. Layers: common sets of useful operations Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables. Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers, either from scratch or as the composition of existing layers. TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
# In the tf.keras.layers package, layers are objects. To construct a layer, # simply construct the object. Most layers take as a first argument the number # of output dimensions / channels. layer = tf.keras.layers.Dense(100) # The number of input dimensions is often unnecessary, as it can be inferred # the first time the layer is used, but it can be provided if you want to # specify it manually, which is useful in some complex models. layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb
gojira/tensorflow
apache-2.0
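Conceptually, a dense layer is just an affine map y = xW + b, usually followed by an activation. This pure-Python sketch is an illustration of what a `Dense(3)` layer computes for a single input vector, not tf.keras's actual implementation (which operates on batched tensors and initializes its own weights):

```python
def dense(x, W, b):
    # x: input vector (length n_in); W: n_in x n_out weight matrix; b: bias (length n_out)
    n_out = len(b)
    return [sum(x[i] * W[i][j] for i in range(len(x))) + b[j] for j in range(n_out)]

x = [1.0, 2.0]
W = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 3.0]]   # maps 2 inputs to 3 outputs
b = [0.5, 0.5, 0.5]
print(dense(x, W, b))  # [1.5, 2.5, 8.5]
```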
You need to select an intermediate value to distinguish between white and black. It will be the threshold you use: if the reading is above it, the sensor is detecting white; if below, black. Line following, version 1.0 <img src="img/line_circuit.jpg" width=240 align="right"> To follow the line, the idea is simple: the robot travels alongside the line, on the inside of the closed circuit; if the sensor detects white, the robot goes straight; if it detects black, the robot turns to correct its trajectory. Depending on the direction in which it is going around the circuit, it will have to correct by turning left or right.
from functions import left, right try: while True: if ___: ___ else: ___ except KeyboardInterrupt: stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
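The white/black decision can be prototyped off the robot as a pure function before filling in the template above. The threshold value here is just a placeholder; use the one you measured:

```python
def classify(reading, threshold=45):
    """Return 'white' if the light reading is above the threshold, else 'black'."""
    return 'white' if reading > threshold else 'black'

print(classify(60))  # 'white' -> go straight
print(classify(30))  # 'black' -> turn to correct
```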
Version 2.0 If the robot drifts off at the curve, a sharper turn is needed. The functions we have used so far turn one wheel while the other is stopped. The robot can turn more sharply if, instead of being stopped, the other wheel spins in the opposite direction. To do this, use the following functions:
from functions import left_sharp, right_sharp try: while True: if ___: ___ else: ___ except KeyboardInterrupt: stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
Version 3.0 In general, there may be curves to both the left and the right. In that case, the previous method is not enough. One solution would be to use two sensors, one on each side of the line, but we only have one per robot. Another solution is to make the sensor track the edge of the line, so that the surface is half black and half white. If we define two thresholds, then: if the sensor reads below both thresholds, it is on black; if above both, on white; if in between, it is on the edge of the line. The robot will go straight when it follows the edge of the line, and can correct to the left in one case and to the right in the other.
try: while True: if ___: ___ else: if ___: ___ else: ___ except KeyboardInterrupt: stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
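The two-threshold logic can likewise be tested as a pure function before running it on the robot. The two threshold values here are placeholders for the ones you measured:

```python
def classify_edge(reading, low=35, high=55):
    """Three-way classification of a light reading against two thresholds."""
    if reading < low:
        return 'black'   # fully over the line: correct one way
    elif reading > high:
        return 'white'   # fully off the line: correct the other way
    else:
        return 'edge'    # on the border: go straight

print([classify_edge(r) for r in (20, 45, 70)])  # ['black', 'edge', 'white']
```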
Version 4.0 If the robot oscillates too much, it can even cross the line and become completely disoriented. The solution is to reduce the speed, passing it to the movement functions, for example: python forward(speed=65) The default value is 100 (the maximum), and the minimum is 0.
try: while True: if ___: ___ else: if ___: ___ else: ___ except KeyboardInterrupt: stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
Recap On this page we have not seen any new concepts, but we have applied what we already knew to the line-following problem. Let's move on to the last sensor. Before continuing, disconnect the robot:
from functions import disconnect, next_notebook disconnect() next_notebook('ultrasonic')
task/light.ipynb
ecervera/mindstorms-nb
mit
2. Calculate the translational partition function of a CO molecule in the bottle at 298 K. What is the unit of the partition function? For $T \gg \theta_{trans}$, $\Lambda \ll L$, $q_{trans} = V/\Lambda^3$. $\Lambda = h \left(\frac{\beta}{2\pi m}\right)^{1/2}$.
Lamda = h*(1/(kB*298*2*np.pi*m))**0.5 print(Lamda) q_trans = V/Lamda**3 print('The translational partition function of a CO molecule in the bottle at 298 K is {:.4E}.'.format(q_trans)) print('It is dimensionless.')
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
3. Plot the rotational and vibrational partition functions of a CO molecule in the bottle from $T$ = 200 to 2000 K (assume the CO remains a gas over the whole range). Hint: Use your answer to Problem 1 to simplify calculating the rotational partition function. $q_{rot} = \frac{1}{\sigma}\frac{T}{\theta_{rot}} = \frac{T}{\theta_{rot}}$ $q_{vib} = \frac{1}{1-e^{-\theta_{vib}/T}}$
T = np.linspace(200,2000,1000)
q_rot = T/T_rot
q_vib = 1/(1-np.exp(-T_vib/T))
plt.plot(T,q_rot)
plt.xlabel('T (K)')
plt.ylabel('$q_{rot}$')
plt.title('The rotational partition function of a CO molecule')
plt.show()
plt.plot(T,q_vib)
plt.xlabel('T (K)')
plt.ylabel('$q_{vib}$')
plt.title('The vibrational partition function of a CO molecule')
plt.show()
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
4. Plot the total translational, rotational, and vibrational energies of CO in the bottle from $T =$ 200 to 2000 K (assume the CO remains a gas over the whole range). Which (if any) of the three types of motions dominate the total energy? $U_{trans} = \frac{3}{2}RT$ $U_{rot} = RT$ $U_{vib} = R\frac{\theta_{vib}}{e^{\theta_{vib}/T}-1}$.
R = 8.31447 # J/(mol*K) U_trans = 1.5*R*T U_rot = R*T U_vib = R*T_vib/(np.exp(T_vib/T)-1) plt.plot(T,U_trans,label='U_trans') plt.plot(T,U_rot,label='U_rot') plt.plot(T,U_vib,label='U_vib') plt.legend() plt.xlabel('T (K)') plt.ylabel('Internal Energy (J/mol)') plt.title('Internal energies of CO in the bottle') plt.show()
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
5. Plot the total translational, rotational, and vibrational constant volume molar heat capacities of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the heat capacity? $C_{V,trans} = \frac{3}{2}R$ $C_{V,rot} = R$ $C_{V,vib} = R\left(\frac{\theta_{vib}}{T}\frac{e^{\theta_{vib}/2T}}{e^{\theta_{vib}/T}-1}\right)^2$
Cv_trans = np.full_like(T, 1.5*R)
Cv_rot = np.full_like(T, R)
Cv_vib = R*(T_vib/T*np.exp(T_vib/2./T)/(np.exp(T_vib/T)-1))**2
plt.plot(T,Cv_trans,label='Cv_trans')
plt.plot(T,Cv_rot,label='Cv_rot')
plt.plot(T,Cv_vib,label='Cv_vib')
plt.legend()
plt.xlabel('T (K)')
plt.ylabel('Heat Capacity (J/mol/K)')
plt.title('Constant volume molar heat capacities of CO in the bottle ')
plt.show()
print('While translational motion contributes the most to the molar heat capacity of CO,')
print('it does not dominate over rotational and vibrational motion.')
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
6. Plot the total translational, rotational, and vibrational Helmholtz energies of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the Helmholtz energy? $A = U - TS$ $S_{trans} = Rln\left(\frac{e^{5/2}V}{N\Lambda^3}\right)$ $S_{rot} = R(1-ln(\theta_{rot}/T))$ $S_{vib} = R\left(\frac{\theta_{vib}/T}{e^{\theta_{vib}/T}-1}-ln(1-e^{-\theta_{vib}/T})\right)$.
NA = 6.022e23
Lamda_T = h*(1/(kB*T*2*np.pi*m))**0.5  # thermal de Broglie wavelength depends on T
S_trans = R*np.log(np.exp(2.5)*V/NA/Lamda_T**3)
S_rot = R*(1-np.log(T_rot/T))
S_vib = R*(T_vib/T/(np.exp(T_vib/T)-1)-np.log(1-np.exp(-T_vib/T)))
A_trans = U_trans-T*S_trans
A_rot = U_rot-T*S_rot
A_vib = U_vib-T*S_vib
plt.plot(T,A_trans,label='A_trans')
plt.plot(T,A_rot,label='A_rot')
plt.plot(T,A_vib,label='A_vib')
plt.legend()
plt.xlabel('T (K)')
plt.ylabel('Helmholtz Energy (J/mol)')
plt.title('Helmholtz energies of CO in the bottle')
plt.show()
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
7. Use your formulas to calculate $\Delta P$, $\Delta U$, $\Delta A$, and $\Delta S$ associated with isothermally expanding the gas from 20 dm$^3$ to 40 dm$^3$. T = 298 K. $\Delta U=0$. $\Delta P = \frac{RT}{V_2} - \frac{RT}{V_1}$. $\Delta S = S_{trans,2} - S_{trans,1}$. $A = U-TS$, so, $\Delta A = -T\Delta S$.
V2 = 0.04 # m^3 deltaP = R*298*(1/V2-1/V) deltaS = R*np.log(np.exp(2.5)*V2/NA/Lamda**3) - R*np.log(np.exp(2.5)*V/NA/Lamda**3) deltaA = -deltaS*298 print('Delta P = {0:.3f} Pa, Delta U = 0, Delta A = {1:.3f} J/mol, and Delta S = {2:.3f} J/mol/K.'.format(deltaP,deltaA,deltaS))
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Reactions from scratch In 1996, Schneider and co-workers used quantum chemistry to compute the reaction pathway for unimolecular decomposition of trifluoromethanol, a reaction of relevance to the atmospheric degradation of hydrofluorocarbon refrigerants (J. Phys. Chem. 1996, 100, 6097-6103, doi:10.1021/jp952703m): $$\mathrm{CF_3OH\rightarrow COF_2 + HF}$$ Following are some of the reported results, computed at 298 K:

| | CF$_3$OH| C(O)F$_2$ | HF | |
|:--------------|---------:|-----------:|----:|----:|
| $E^\text{elec}$ | -412.90047 | -312.57028 | -100.31885 | (Hartree) |
| ZPE | 0.02889 | 0.01422 | 0.00925 | (Hartree) |
| $U^\text{trans}$ | 3.7 | 3.7| 3.7 | (kJ mol$^{-1}$) |
| $U^\text{rot}$ | 3.7 | 3.7| 2.5 | (kJ mol$^{-1}$) |
| $U^\text{vib}$ | 4.3 | 1.2 | 0 | (kJ mol$^{-1}$) |
| $q^{\text{trans}}/V$ | $7.72\times 10^{32}$ | $1.59\times 10^{32}$ |$8.65\times 10^{31}$ | (m$^{-3}$) |
| $q^\text{rot}$ | 61830 | 679 | 9.59 | |
| $q^\text{vib}$ | 2.33 | 1.16 | 1 | |

8. Using the data provided, determine $\Delta U^{\circ}$(298 K), in kJ mol$^{-1}$, assuming ideal behavior and 1 M standard state. Recall that $U(T)$ is the sum of the contributions of all degrees of freedom. Remember that $E_0$ is contained in $\Delta{U}^{\circ}$, $\Delta{A}^{\circ}$, and $\Delta{G}^{\circ}$. For example: $$\Delta U^{\circ} =\sum_{products}\left(U_{trans} + U_{rot} + U_{vib} + E_{elec} + ZPE\right) - \sum_{reactants}\left(U_{trans} + U_{rot} + U_{vib} + E_{elec} + ZPE\right)$$ so that, with $E_0 = E_{elec} + ZPE$, $\Delta U^{\circ} =\Delta(U_{trans} + U_{rot} + U_{vib}) + \Delta(E_0)$
import numpy as np

T = 298          # K
k = 1.38065e-23  # J/K
R = 8.31447      # J/(mol*K)
Na = 6.0221e23   # 1/mol
c = 6.0221e26    # 1/m^3; conversion factor: 1 mol/L = 6.02e26 particles/m^3
autokJ = 2625.50 # kJ/mol per Hartree

Eelec = [-412.90047, -312.57028, -100.31885]  # Hartree
ZPE = [0.02889, 0.01422, 0.00925]             # Hartree
dE0 = ((Eelec[1] + ZPE[1] + Eelec[2] + ZPE[2]) - (Eelec[0] + ZPE[0]))*autokJ  # kJ/mol

u_trans = [3.7, 3.7, 3.7]  # kJ/mol
u_rot = [3.7, 3.7, 2.5]    # kJ/mol
u_vib = [4.3, 1.2, 0]      # kJ/mol
dU = dE0 + (u_trans[1]+u_rot[1]+u_vib[1]) + (u_trans[2]+u_rot[2]+u_vib[2]) \
         - (u_trans[0]+u_rot[0]+u_vib[0])  # kJ/mol
print("delta_U = %.2f kJ/mol" % dU)
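The electronic-plus-ZPE bookkeeping and the Hartree-to-kJ/mol conversion (1 Hartree ≈ 2625.50 kJ/mol) can be factored into a small helper. A sketch using the table values above (the function name is illustrative):

```python
HARTREE_TO_KJ = 2625.50  # kJ/mol per Hartree

def reaction_delta_e0(reactants, products):
    """Delta E0 in kJ/mol for a reaction, where each species is an
    (E_elec, ZPE) pair in Hartree."""
    total = lambda species: sum(E + Z for E, Z in species)
    return (total(products) - total(reactants)) * HARTREE_TO_KJ

# CF3OH -> COF2 + HF, with the values from the table above
dE0 = reaction_delta_e0(reactants=[(-412.90047, 0.02889)],
                        products=[(-312.57028, 0.01422), (-100.31885, 0.00925)])
```

Adding the thermal $U$ terms from the table to this $\Delta E_0$ reproduces the $\Delta U^\circ \approx 18.64$ kJ/mol used later in Question 14.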
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
9. Using the data provided, determine $\Delta A^{\circ}$(298 K) in kJ mol$^{-1}$, assuming ideal behavior and 1 M standard state. Recall that $A^\circ=E^\text{elec} + \text{ZPE}-RT\ln(q^\circ)-RT$ and that $q^\circ =(q^\text{trans}/V)q^\text{rot}q^\text{vib}/c^\circ$ in units corresponding with the standard state. $\Delta{A^{\circ}}= \left\{\big[-k_BT\ln(q_{t}q_{r}q_{v})-k_BT\big]_{COF_2}+\big[-k_BT\ln(q_{t}q_{r}q_{v})-k_BT\big]_{HF}-\big[-k_BT\ln(q_{t}q_{r}q_{v})-k_BT\big]_{CF_3OH}\right\}N_A + \Delta(E_0)$ $\quad \quad = \Delta(E_0)-RT\ln(Q)-RT$
q_trans = [7.72e32/c, 1.59e32/c, 8.65e31/c]  # convert translational partition functions from 1/m^3 to the 1 M standard state
q_rot = [61830, 679, 9.59]  # unitless
q_vib = [2.33, 1.16, 1]     # unitless
Q = (q_trans[1]*q_rot[1]*q_vib[1])*(q_trans[2]*q_rot[2]*q_vib[2])/(q_trans[0]*q_rot[0]*q_vib[0])  # total partition function quotient
dA = dE0 + (-R*T*np.log(Q) - R*T)/1000  # kJ/mol
print("Q = %.2f" % Q)
print("delta_E0 = %.2f" % dE0)
print("delta_A = %.2f kJ/mol" % dA)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
10. Determine $\Delta G^\circ$(298 K). Recall that $G = A + PV = A + RT$ for an ideal gas. $\Delta{G^{\circ}} = \Delta{A^{\circ}} + \Delta(PV)$ $\quad = \quad \Delta{A^{\circ}} + RT$
dG = dA + R*T/1000  # kJ/mol
print("delta_G = %.2f kJ/mol" % dG)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
11. Determine $\Delta S^{\circ}$(298 K), in J mol$^{-1}$ K$^{-1}$ , assuming a 1 M standard state. Recall that $S = (U - A)/T$. $\Delta A^{\circ} = \Delta U^{\circ} - T\Delta S^{\circ}$ $\Delta S^{\circ} = \frac{\Delta U^{\circ} - \Delta A^{\circ}}{T}$
dS = 1000*(dU - dA)/T
print("delta_S = %.2f J/mol K" % dS)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
12. Using the data provided, determine $K_c$ (298 K), assuming a 1 M standard state. You may determine it either from partition functions or from the relationship between $K_c$ and $\Delta G^\circ$. $\mathrm{A} \rightarrow \mathrm{B + C}$$$K_c(T) = \frac{\frac{q_{\mathrm{B}}}{V}\frac{q_{\mathrm{C}}}{V}}{\frac{q_{\mathrm{A}}}{V}} \frac{1}{c^{\circ}} e^{-\Delta{E(0)}/kT} = e^{-\Delta G^\circ(T)/RT}, \quad \text{where} \quad \frac{q_i}{V} = \frac{q_{trans}}{V}q_{rot}q_{vib}$$ Note: $K_c(T)$ is formally unitless but "remembers" that it refers to 1 M standard state.
Kc = Q*np.exp(-dE0*1000/(R*T))  # Kc equation from lecture notes
print('Kc = %.3f (unitless). ' % Kc)
Kc = np.exp(-dG*1000/(R*T))     # equivalently, from delta G
print('Kc = %.3f (unitless). ' % Kc)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
13. 1 mole of CF$_3$OH is generated in a 20 L vessel at 298 K and left long enough to come to equilibrium with respect to its decomposition reaction. What is the composition of the gas (concentrations of all the components) at equilibrium (in mol/L)? 1 mol/20 L = 0.05 mol/L. $\mathrm{A} \rightarrow \mathrm{B + C}$ $K_c = \frac{x^2}{0.05-x}$, solve for $x$; note that $x$ has units of mol/L.
from sympy import symbols, solve

x = symbols('x', positive=True)
c = solve(x**2 - (0.05 - x)*Kc, x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.' % (0.05 - c[0], c[0], c[0]))
print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF = %.5f mol.' % ((0.05 - c[0])*20, c[0]*20, c[0]*20))
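The sympy solve above is just the positive root of $x^2 + K_c x - K_c c_0 = 0$, so the same extent of reaction follows from the quadratic formula with the stdlib alone (a cross-check, using the $K_c$ and initial concentration from above):

```python
import math

def equilibrium_extent(Kc, c0):
    """Positive root of x**2 + Kc*x - Kc*c0 = 0, the equilibrium extent
    (mol/L) of A -> B + C starting from concentration c0 of A."""
    return (-Kc + math.sqrt(Kc * Kc + 4.0 * Kc * c0)) / 2.0

# Kc = 2.926 and c0 = 0.05 mol/L give x ~ 0.0492 mol/L: nearly complete decomposition
x = equilibrium_extent(2.926, 0.05)
```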
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
14. How, directionally, would your answer to Question 13 change if the vessel was at a higher temperature? Use the van 't Hoff relationship to determine the equilibrium constant and equilibrium concentrations at 273 and 323 K. How good was your guess? From question 8, we know that at 298 K, $\Delta U$ = 18.64 kJ/mol. $\Delta H = \Delta U + \Delta (nRT) = \Delta U + RT\Delta n$
dn = 2 - 1
R = 8.314/1000  # kJ/(mol K)
T = 298         # K
dH = dU + dn*R*T  # kJ/mol
print("dH =", round(dH, 3), "kJ/mol")
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Since $\Delta H$ is positive, we expect $K \uparrow$ as $T \uparrow$. The van 't Hoff equation states: $\ln\frac{K(T_2)}{K(T_1)}=\frac{-\Delta H^\circ}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$ We know from question 12 that K = 2.926 at 298 K.
K1 = 2.926
T1 = 298  # K

T2 = 273  # K
K2 = K1*np.exp(-dH/R*(1/T2 - 1/T1))
print('K =', round(K2, 4), 'at 273 K.')
x = symbols('x', positive=True)
c = solve(x**2 - (0.05 - x)*K2, x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.' % (0.05 - c[0], c[0], c[0]))

T2 = 323  # K
K2 = K1*np.exp(-dH/R*(1/T2 - 1/T1))
print('K =', round(K2, 4), 'at 323 K.')
x = symbols('x', positive=True)
c = solve(x**2 - (0.05 - x)*K2, x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.' % (0.05 - c[0], c[0], c[0]))
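The two repeated blocks above both apply the integrated van 't Hoff relation, so it is worth wrapping in a function (a sketch, assuming $\Delta H^\circ$ is roughly constant over the temperature range):

```python
import math

def vant_hoff_K(K1, T1, T2, dH_kJ, R=8.314e-3):
    """Rescale an equilibrium constant from T1 to T2 (K) via the integrated
    van 't Hoff equation, with dH_kJ in kJ/mol and R in kJ/(mol*K)."""
    return K1 * math.exp(-dH_kJ / R * (1.0 / T2 - 1.0 / T1))

# with K = 2.926 at 298 K and dH ~ 21.12 kJ/mol from the cell above:
K_273 = vant_hoff_K(2.926, 298, 273, 21.12)  # K drops on cooling
K_323 = vant_hoff_K(2.926, 298, 323, 21.12)  # K rises on heating
```

Since $\Delta H^\circ > 0$ here, the function confirms the Le Chatelier expectation: the endothermic decomposition is favored at higher temperature.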
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Therefore, at higher temperatures the reaction shifts toward the products. 15. How, directionally, would your answer to Question 13 change if the vessel had a volume of 5 L? Redo the calculation at this volume to verify your guess. If V = 5 L, the initial concentration = $\frac{1\ mol}{5\ L}=0.2$ M. $K_c = \frac{x^2}{0.2-x}$ At 298 K, $K_c$ = 2.926 (from problem 12).
T = 298  # K
R = 8.314
Kc = np.exp(-dG*1000/(R*T))
print('At 298 K, Kc = %.3f (unitless). ' % Kc)
x = symbols('x', positive=True)
c = solve(x**2 - (0.2 - x)*Kc, x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.' % (0.2 - c[0], c[0], c[0]))
print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF = %.5f mol.' % ((0.2 - c[0])*5, c[0]*5, c[0]*5))
print('At a smaller volume, the concentration of products increases, but the number of moles decreases.')
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
16. Consult a thermodynamics source (e.g. https://webbook.nist.gov/chemistry/) to determine $\Delta H^\circ$(298 K), $\Delta S^\circ$(298 K), and $\Delta G^\circ$(298 K) for the homologous reaction CH$_3$OH (g)$\rightarrow$ H$_2$ (g) + H$_2$CO (g). Does the substitution of F by H make the reaction more or less favorable?
T = 298  # K
# All values were taken from NIST

# Methanol
Hm = -205   # kJ/mol
Sm = .2399  # kJ/(mol K)
Gm = Hm - T*Sm  # kJ/mol

# Hydrogen
Hh = 0
Sh = .13068  # kJ/(mol K)
Gh = Hh - T*Sh  # kJ/mol

# Formaldehyde
Hf = -108.6  # kJ/mol
Sf = .21895  # kJ/(mol K)
Gf = Hf - T*Sf  # kJ/mol

delta_H = Hf + Hh - Hm  # kJ/mol
delta_S = Sf + Sh - Sm  # kJ/(mol K)
delta_G = Gf + Gh - Gm  # kJ/mol
print('Delta H =', delta_H, 'kJ/mol.')
print('Delta S =', delta_S, 'kJ/mol K.')
print('Delta G =', delta_G, 'kJ/mol.')
print('Therefore, substituting F with H makes the reaction less favorable.')
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
This notebook is mainly to check that the buffer built from the entrances .shp (previously) matches the one built from GeoJSON (.json) in the most recent Dumbo-Can-Run scripts. Load
from shapely.geometry import Point
import pyproj
import geopandas as gpd

proj = pyproj.Proj(init='epsg:2263', preserve_units=True)
entr_points = sqlContext.read.load('../why_yellow_taxi/Data/2016_(May)_New_York_City_Subway_Station_Entrances.json',
                                   format='json', header=True, inferSchema=True).collect()[0].asDict()['features']
routes = ['route_' + str(i) for i in range(1, 12)]

entr_geo = gpd.GeoDataFrame(columns=['geometry', 'lines'])
for i in range(len(entr_points)):
    entr_coor = entr_points[i].asDict()['geometry'].asDict()['coordinates']
    entr_buffer = Point(proj(float(entr_coor[0]), float(entr_coor[1]))).buffer(100)
    entr_prop = entr_points[i].asDict()['properties'].asDict()
    entr_lines = [entr_prop[r] for r in routes if entr_prop[r]]
    entr_geo = entr_geo.append({'geometry': entr_buffer, 'lines': entr_lines}, ignore_index=True)

shp = gpd.read_file('../why_yellow_taxi/Buffer/entr_buffer_100_feet_epsg4269_nad83/entr_buffer_100_feet_epsg4269_nad83.shp')
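Conceptually, each 100-foot entrance buffer is a disc in the projected EPSG:2263 plane (whose units are feet, hence `preserve_units=True`): a point falls inside the buffer exactly when its planar distance to the entrance is at most the radius. A shapely-free stdlib sketch of that membership test (function name is illustrative):

```python
import math

def in_buffer(px, py, cx, cy, radius=100.0):
    """True when projected point (px, py) lies within `radius` feet of the
    entrance at (cx, cy)."""
    return math.hypot(px - cx, py - cy) <= radius
```

shapely's `Point(cx, cy).buffer(radius).contains(point)` performs the equivalent check, except against a polygonal approximation of the circle rather than the exact disc.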
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
List
entr_geo.head(2)
shp.head(2)
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Identical or Not?
entr_geo.head(2).geometry[1] == shp.head(2).geometry[1]
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Detail
shp.head(2).geometry[0].centroid.x
shp.head(2).geometry[0].centroid.y
entr_geo.head(2).geometry[0].centroid.x
entr_geo.head(2).geometry[0].centroid.y
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Unit Test
%run ../utils/results.py

# %load test_dfs.py
from nose.tools import assert_equal


class TestDfs(object):

    def __init__(self):
        self.results = Results()

    def test_dfs(self):
        node = Node(5)
        insert(node, 2)
        insert(node, 8)
        insert(node, 1)
        insert(node, 3)

        in_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 5, 8]")
        self.results.clear_results()

        pre_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[5, 2, 1, 3, 8]")
        self.results.clear_results()

        post_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[1, 3, 2, 8, 5]")
        self.results.clear_results()

        node = Node(1)
        insert(node, 2)
        insert(node, 3)
        insert(node, 4)
        insert(node, 5)

        in_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
        self.results.clear_results()

        pre_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[1, 2, 3, 4, 5]")
        self.results.clear_results()

        post_order_traversal(node, self.results.add_result)
        assert_equal(str(self.results), "[5, 4, 3, 2, 1]")

        print('Success: test_dfs')


def main():
    test = TestDfs()
    test.test_dfs()


if __name__ == '__main__':
    main()
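The unit test above assumes `Node`, `insert`, and the three traversal functions defined elsewhere in the challenge notebook. One possible minimal implementation that satisfies it (a sketch, not the challenge's reference solution):

```python
class Node(object):
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def insert(node, data):
    """Standard BST insert: smaller-or-equal values go left, larger go right."""
    if data <= node.data:
        if node.left is None:
            node.left = Node(data)
        else:
            insert(node.left, data)
    else:
        if node.right is None:
            node.right = Node(data)
        else:
            insert(node.right, data)

def in_order_traversal(node, visit):
    if node is not None:
        in_order_traversal(node.left, visit)
        visit(node.data)
        in_order_traversal(node.right, visit)

def pre_order_traversal(node, visit):
    if node is not None:
        visit(node.data)
        pre_order_traversal(node.left, visit)
        pre_order_traversal(node.right, visit)

def post_order_traversal(node, visit):
    if node is not None:
        post_order_traversal(node.left, visit)
        post_order_traversal(node.right, visit)
        visit(node.data)
```

In-order traversal of a BST visits keys in sorted order, which is exactly what the first assertion in the test checks.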
interactive-coding-challenges/graphs_trees/tree_dfs/dfs_challenge.ipynb
saashimi/code_guild
mit
因为需要全市场回测所以本章无法使用沙盒数据,《量化交易之路》中的原始示例使用的是美股市场,这里的示例改为使用A股市场。 本节建议对照阅读abu量化文档第20-23节内容 本节的基础是在abu量化文档中第20节内容完成运行后有A股训练集交易和A股测试集交易数据之后 abu量化系统github地址 (您的star是我的动力!) abu量化文档教程ipython notebook 第11章 量化系统-机器学习•ABU
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, feature, AbuFeatureDegExtend from abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull, AbuUmpEdgeMul, AbuUmpEegeDegExtend from abupy import AbuUmpMainDegExtend, ump, Parallel, delayed, AbuMulPidProgress, AbuProgress , # 关闭沙盒数据 abupy.env.disable_example_env_ipython() abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL abu_result_tuple_train = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, custom_name='train_cn') abu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME, custom_name='test_cn') ABuProgress.clear_output() print('训练集结果:') metrics_train = AbuMetricsBase.show_general(*abu_result_tuple_train, only_show_returns=True) print('测试集结果:') metrics_test = AbuMetricsBase.show_general(*abu_result_tuple_test, only_show_returns=True)
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.1 Search engines and quantitative trading

This section is best read alongside section 15 of the abu quant documentation.
orders_pd_train = abu_result_tuple_train.orders_pd
# select the first 20 losing trades and draw their trade snapshots
# this is just an example; in practice pick by rank or some other criterion
plot_simple = orders_pd_train[orders_pd_train.profit_cg < 0][:20]
# save=True stores the snapshots locally, under ~/abu/data/save_png/
ABuMarketDrawing.plot_candle_from_order(plot_simple, save=True)
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2 The main umpire

11.2.1 The angle main umpire

Please read this alongside the related content in section 15 of the ABU quantitative system user documentation.
from abupy import AbuUmpMainDeg

# the argument is orders_pd
ump_deg = AbuUmpMainDeg(orders_pd_train)
# df is the DataFrame-like object generated earlier by ump_main_make_xy (Table 11-1)
ump_deg.fiter.df.head()
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
This is a time-consuming operation: a few minutes on a fast machine, depending on hardware and the number of CPUs. Training is launched across multiple processes:
_ = ump_deg.fit(brust_min=False)
ump_deg.cprs

max_failed_cluster = ump_deg.cprs.loc[ump_deg.cprs.lrs.argmax()]
print('Cluster with the highest failure probability: {0}, failure rate {1:.2f}%, '
      'total trades in cluster {2}, mean return per trade in cluster {3:.2f}%'.format(
          ump_deg.cprs.lrs.argmax(), max_failed_cluster.lrs * 100,
          max_failed_cluster.lcs, max_failed_cluster.lms * 100))

cpt = int(ump_deg.cprs.lrs.argmax().split('_')[0])
print(cpt)
ump_deg.show_parse_rt(ump_deg.rts[cpt])

# as shown in Table 11-3
max_failed_cluster_orders = ump_deg.nts[ump_deg.cprs.lrs.argmax()]
max_failed_cluster_orders
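The cprs table summarizes, per classification cluster, the loss rate (lrs), trade count (lcs), and mean return (lms). The same aggregation over plain (cluster, return) pairs can be sketched with the stdlib; the field names merely mirror the cprs columns and are not abupy's internals:

```python
from collections import defaultdict

def cluster_stats(trades):
    """trades: iterable of (cluster_id, return) pairs.
    Returns {cluster_id: {'lrs': loss rate, 'lcs': count, 'lms': mean return}}."""
    by_cluster = defaultdict(list)
    for cid, ret in trades:
        by_cluster[cid].append(ret)
    stats = {}
    for cid, rets in by_cluster.items():
        losses = sum(1 for r in rets if r < 0)
        stats[cid] = {'lrs': losses / len(rets),      # fraction of losing trades
                      'lcs': len(rets),               # trades in the cluster
                      'lms': sum(rets) / len(rets)}   # mean per-trade return
    return stats
```

The cluster the text singles out is simply the one maximizing lrs, which is what `ump_deg.cprs.lrs.argmax()` selects above.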
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
Because this is not the same sandbox data, the results below will not match the analysis in the book; interpret them according to your actual output. For example, here the features show that the 42-day and 60-day deg values are unusually large, and the 21-day and 252-day values are also large relative to the training-set averages:
from abupy import ml
import numpy as np

ml.show_orders_hist(max_failed_cluster_orders,
                    ['buy_deg_ang21', 'buy_deg_ang42', 'buy_deg_ang60', 'buy_deg_ang252'])
print('Mean deg_ang60 in the cluster: {0:.2f}'.format(max_failed_cluster_orders.buy_deg_ang60.mean()))
print('Mean deg_ang21 in the cluster: {0:.2f}'.format(max_failed_cluster_orders.buy_deg_ang21.mean()))
print('Mean deg_ang42 in the cluster: {0:.2f}'.format(max_failed_cluster_orders.buy_deg_ang42.mean()))
print('Mean deg_ang252 in the cluster: {0:.2f}'.format(max_failed_cluster_orders.buy_deg_ang252.mean()))

ml.show_orders_hist(orders_pd_train,
                    ['buy_deg_ang21', 'buy_deg_ang42', 'buy_deg_ang60', 'buy_deg_ang252'])
print('Mean deg_ang60 in the training set: {0:.2f}'.format(orders_pd_train.buy_deg_ang60.mean()))
print('Mean deg_ang21 in the training set: {0:.2f}'.format(orders_pd_train.buy_deg_ang21.mean()))
print('Mean deg_ang42 in the training set: {0:.2f}'.format(orders_pd_train.buy_deg_ang42.mean()))
print('Mean deg_ang252 in the training set: {0:.2f}'.format(orders_pd_train.buy_deg_ang252.mean()))

progress = AbuProgress(len(max_failed_cluster_orders), 0, label='plot snap')
for ind in np.arange(0, len(max_failed_cluster_orders)):
    progress.show(ind)
    order_ind = int(max_failed_cluster_orders.iloc[ind].ind)
    # trade snapshot files are saved under ~/abu/data/save_png/
    ABuMarketDrawing.plot_candle_from_order(ump_deg.fiter.order_has_ret.iloc[order_ind], save=True)
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
The trade snapshot files are saved under ~/abu/data/save_png/; the cell below opens the corresponding directory: save_png
if abupy.env.g_is_mac_os:
    !open $abupy.env.g_project_data_dir
else:
    !echo $abupy.env.g_project_data_dir
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2.2 Using a global optimum to filter the set of classification clusters
brust_min = ump_deg.brust_min()
brust_min

llps = ump_deg.cprs[(ump_deg.cprs['lps'] <= brust_min[0]) &
                    (ump_deg.cprs['lms'] <= brust_min[1]) &
                    (ump_deg.cprs['lrs'] >= brust_min[2])]
llps

ump_deg.choose_cprs_component(llps)
ump_deg.dump_clf(llps)
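The llps selection keeps only the clusters whose cumulative loss (lps), mean return (lms), and loss rate (lrs) all pass the globally optimized thresholds. The same predicate over plain dicts, as a sketch (field names mirror the cprs columns and are illustrative):

```python
def filter_clusters(clusters, max_lps, max_lms, min_lrs):
    """Keep clusters that lose enough, both per trade and in aggregate,
    to be worth vetoing: lps and lms at or below their caps, lrs at or
    above its floor."""
    return [c for c in clusters
            if c['lps'] <= max_lps and c['lms'] <= max_lms and c['lrs'] >= min_lrs]
```

The surviving clusters are then handed to `choose_cprs_component` and serialized with `dump_clf` so the umpire can veto matching trades later.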
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2.3 The gap (jump) main umpire

Please read this alongside the related content in section 16 of the ABU quantitative system user documentation: UMP main umpire trading decisions.
from abupy import AbuUmpMainJump

# a time-consuming operation, roughly ten-odd minutes depending on hardware and CPUs
ump_jump = AbuUmpMainJump.ump_main_clf_dump(orders_pd_train, save_order=False)
ump_jump.fiter.df.head()
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0