Now let's use the above arrangement of axes as a template for the figurefirst:figure objects that we have drawn in an svg file. Below is a rendering of the svg file showing where we would like the templated sets of axes to go.
display(SVG('mpl_fig_for_templates.svg'))
examples/mpl_to_svg_layout/mpl_fig_to_figurefirst_svg.ipynb
FlyRanch/figurefirst
mit
Now we call add_mpl_fig_to_figurefirst_svg with the necessary arguments to add a layer to the above svg file. This new layer is similar to the layer created previously using mpl_fig_to_figurefirst_svg, except that instead of placing this layer into an empty svg, we are adding it to an existing svg. The one trick being ...
fifi_svg_filename = 'mpl_fig_for_templates.svg'
mpl_fig = fig
output_filename = 'mpl_fig_for_templates_output.svg'
layout = fifi.mpl_fig_to_figurefirst_svg.add_mpl_fig_to_figurefirst_svg(
    fifi_svg_filename,
    mpl_fig, ...
Now we can load the layout that has the added layer generated from the Matplotlib figure, and make a sample plot.
layout = fifi.svg_to_axes.FigureLayout('mpl_fig_for_templates_output.svg', make_mplfigures=True)

# collect the group and axis names
# Note: you could also do this by noting your figurefirst:figure names from the layout file
import numpy as np
groups_and_axes = np.unique(np.ravel(list(layout.axes.keys())))
groups = [g fo...
The second line of this code uses the keyword if to tell Python that we want to make a choice. If the test that follows the if statement is true, the body of the if (i.e., the lines indented underneath it) is executed. If the test is false, the body of the else is executed instead. Only one or the other is ever execut...
num = 53
print('before conditional...')
if num > 100:
    print('53 is greater than 100')
print('...after conditional')
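The paragraph above describes an else branch, which the example omits; a minimal sketch (hypothetical message strings) showing that exactly one branch ever runs:

```python
num = 53
if num > 100:
    print(num, 'is greater than 100')       # the body of the if
else:
    print(num, 'is not greater than 100')   # the body of the else
```

Here the test `num > 100` is false, so only the else body prints.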
02-Python1/02-Python-1-Logic_Instructor.ipynb
OpenAstronomy/workshop_sunpy_astropy
mit
We can also chain several tests together using elif, which is short for “else if”. The following Python code uses elif to print the sign of a number.
num = -3
if num > 0:
    print(num, "is positive")
elif num == 0:
    print(num, "is zero")
else:
    print(num, "is negative")
One important thing to notice in the code above is that we use a double equals sign == to test for equality rather than a single equals sign because the latter is used to mean assignment. We can also combine tests using and and or. and is only true if both parts are true:
if (1 > 0) and (-1 > 0):
    print('both parts are true')
else:
    print('at least one part is false')
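By contrast, or is true if at least one part is true; a small sketch in the same style:

```python
# or succeeds when either test passes
if (1 < 0) or (1 > 0):
    print('at least one part is true')
else:
    print('neither part is true')
```

Here the first test fails but the second passes, so the if body runs.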
Checking out Data Now that we’ve seen how conditionals work, we can use them to check for the suspicious features we saw in our inflammation data. In the first couple of plots, the maximum inflammation per day seemed to rise like a straight line, one unit per day. We can check for this inside the for loop we wrote wit...
import numpy as np

data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20:
    print('Suspicious looking maxima!')
elif data.min(axis=0).sum() == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')
data = np.loadtxt(fname='infla...
<a id='strings'></a> Strings In Python, human language text gets represented as a string. These contain sequential sets of characters and they are offset by quotation marks, either double (") or single ('). We will explore different kinds of operations in Python that are specific to human language objects, but it is us...
# The iconic string
print("Hello, World!")

# Assign these strings to variables
a = "Hello"
b = 'World'

# Try out arithmetic operations.
# When we add strings we call it 'concatenation'
print(a + " " + b)
print(a * 5)

# Unlike a number that consists of a single value, a string is an ordered
# sequence of characters. We ...
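Because a string is an ordered sequence, it supports indexing and slicing; a short sketch (using the `a = "Hello"` variable from above):

```python
a = "Hello"
print(a[0])    # 'H'  -- positions count from 0
print(a[-1])   # 'o'  -- negative indices count from the end
print(a[1:4])  # 'ell' -- a slice from position 1 up to (not including) 4
print(len(a))  # 5    -- the number of characters
```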
01-IntroToPython/.ipynb_checkpoints/00-PythonBasics-checkpoint.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='lists'></a> Lists The numbers and strings we have just looked at are the two basic data types that we will focus our attention on in this workshop. (In a few days, we will look at a third data type, boolean, which consists of just True/False values.) When we are working with just a few numbers or strings, it is ...
# Let's assign a couple lists to variables
list1 = ['Call', 'me', 'Ishmael']
list2 = ['In', 'the', 'beginning']

## Q. Predict what will happen when we perform the following operations
print(list1 + list2)
print(list1 * 5)

# As with a string, we can find out the length of a list
len(list1)

# Sometimes we just want a s...
<a id='tricks'></a> A couple of useful tricks <a id='string_methods'></a> String Methods The creators of Python recognize that human language has many important yet idiosyncratic features, so they have tried to make it easy for us to identify and manipulate them. For example, in the demonstration at the very beginning ...
# Let's assign a variable to perform methods upon
greeting = "Hello, World!"

# We saw the 'endswith' method at the very beginning
# Note the type of output that gets printed
greeting.startswith('H'), greeting.endswith('d')

# We can check whether the string is a letter or a number
this_string = 'f'
this_string.isa...
<a id='list_methods'></a> List Comprehension List comprehensions are a fairly advanced programming technique that we will spend more time talking about tomorrow. For now, you can think of them as list filters. Often, we don't need every value in a list, just a few that fulfill certain criteria.
# 'list1' had contained three words, two of which were in title case.
# We can automatically return those words using a list comprehension
[word for word in list1 if word.istitle()]

# Or we can include all the words in the list but just take their first letters
[word[0] for word in list1]

for word in list1:
    prin...
which uses pickle internally, optionally mmap‘ing the model’s internal large NumPy matrices into virtual memory directly from disk files, for inter-process memory sharing. In addition, you can load models created by the original C tool, both using its text and binary formats: model = gensim.models.Word2Vec.load_word2ve...
model = gensim.models.Word2Vec.load(temp_path)
more_sentences = [
    ['Advanced', 'users', 'can', 'load', 'a', 'model',
     'and', 'continue', 'training', 'it', 'with', 'more', 'sentences']
]
model.build_vocab(more_sentences, update=True)
model.train(more_sentences, )

# cleaning up temp
os.close(fs)
os.remov...
docs/notebooks/word2vec.ipynb
isohyt/gensim
lgpl-2.1
Your task is to provide a drop-down so the user of the program can select one of the 5 mailboxes. Upon running the interaction the program will:
- read the selected mailbox file a line at a time
- find any lines beginning with From:
- extract out the email address from the From: line
- use the isEmail() function (provided b...
# Step 2: Write code here
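One possible sketch of the loop described above (the `is_email` helper and the example filename here are placeholders; the assignment supplies its own isEmail() function and mailbox files):

```python
import re

def is_email(text):
    # Placeholder stand-in for the provided isEmail() function
    return re.fullmatch(r'[^@\s]+@[^@\s]+\.[^@\s]+', text) is not None

def extract_from_addresses(filename):
    """Read a mailbox file line by line and collect valid From: addresses."""
    addresses = []
    with open(filename, 'r') as f:
        for line in f:
            if line.startswith('From:'):
                candidate = line[len('From:'):].strip()
                if is_email(candidate):
                    addresses.append(candidate)
    return addresses
```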
content/lessons/07-Files/HW-Files.ipynb
IST256/learn-python
mit
Part 3: Questions
Did you write your own user-defined function? For what purpose?
--== Double-Click and Write Your Answer Below This Line ==--
Explain how you might re-write this program to create one large file from all the mailboxes. No code, just explain it.
--== Double-Click and Write Your Answer Below This L...
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
From the above info(), we can see that the columns Age, Cabin and Embarked have missing values. Handling the missing values: we can ignore the rows with missing data, exclude the variable entirely, or substitute it with the mean or median. For Age, 80% of the data is available, and it seems an important variable, so we do not exclude it. Port of ...
# excluding some columns
for a in ['Ticket', 'Cabin', 'Embarked', 'Name', 'PassengerId', 'Fare']:
    if a in data.columns:
        del data[a]

print("Age median values by Age and Sex:")
# we are grouping by gender and class and taking the median of age,
# so we can replace NaN with the corresponding values
print(data.grou...
P6 data_viz/data/.ipynb_checkpoints/P2-checkpoint.ipynb
gangadhara691/gangadhara691.github.io
mit
Data Exploration and Visualization
data_s = data
survival_group = data_s.groupby('Survived')
survival_group.describe()
From the above statistics:
- Youngest to survive: 0.42
- Youngest to die: 1.0
- Oldest to survive: 80.0
- Oldest to die: 74.0
import matplotlib.pyplot as plt
import seaborn as sns

# Set style for all graphs
#sns.set_style("light")
#sns.set_style("whitegrid")
sns.set_style("ticks", {"xtick.major.size": 8, "ytick.major.size": 8})
From the above plot we can see that female passengers were given first preference, also depending on class. Socio-economic standing was a factor in the survival rate of passengers by gender:
- Class 1 - female survival rate: 96.81%
- Class 1 - male survival rate: 36.89%
- Class 2 - female survival rate: 92.11%
- Class 2 - male surviv...
def group(d, v):
    if (d == 'female') and (v >= 18):
        return 'Woman'
    elif v < 18:
        return 'child'
    elif (d == 'male') and (v >= 18):
        return 'Man'

data['Category'] = data.apply(lambda row: group(row['Sex'], row['Age']), axis=1)
data.head(5)

# We are dividing the ...
Load some example scores (output of a classifier). The example data (from binary classification), presented next, contains:
- Column 1: scores or probas (output of predict_proba()) in the range [0, 1]
- Column 2: target or actual outcome (y truth)
df_results = pd.read_csv('../data/classifier_prediction_scores.csv')
print('Number of rows:', df_results.shape[0])
df_results.head()
units/11-validation-metrics/examples/01 - Validation Metrics for Classification.ipynb
LDSSA/learning-units
mit
Let's take a look at the scores distribution. As an output of the predict_proba(), the scores range is [0, 1].
df_results['scores'].hist(bins=50)
plt.ylabel('Frequency')
plt.xlabel('Scores')
plt.title('Distribution of Scores')
plt.xlim(0, 1)
plt.show()
Classification Metrics. Accuracy score. The accuracy_score is the fraction (default) or the count (normalize=False) of correct predictions. It is given by: $$ A = \frac{TP + TN}{TP + TN + FP + FN} $$ Where TP is the True Positives, TN the True Negatives, FP the False Positives, and FN the False Negatives. Disadvantages: - Not ...
# Specifying the threshold above which the predicted label is considered 1:
threshold = 0.50

# Generate the predicted labels (above threshold = 1, below = 0)
predicted_outcome = [0 if k <= threshold else 1 for k in df_results['scores']]
print('Accuracy = %2.3f' % accuracy_score(df_results['target'], predicted_outcome)...
Confusion Matrix. The confusion_matrix C provides several performance indicators:
- C(0,0) - TN count
- C(1,0) - FN count
- C(0,1) - FP count
- C(1,1) - TP count
# Get the confusion matrix:
confmat = confusion_matrix(y_true=df_results['target'], y_pred=predicted_outcome)

# Plot the confusion matrix
fig, ax = plt.subplots(figsize=(5, 5))
ax.matshow(confmat, cmap=plt.cm.Blues, alpha=0.4)
for i in range(confmat.shape[0]):
    for j in range(confmat.shape[1]):
        ax.text(x=j,...
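As a sanity check on the index convention, the four counts can be tallied by hand on toy labels (hypothetical data, not the df_results scores):

```python
# Toy true/predicted labels
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tn = pairs.count((0, 0))  # C(0,0): true 0, predicted 0
fp = pairs.count((0, 1))  # C(0,1): true 0, predicted 1
fn = pairs.count((1, 0))  # C(1,0): true 1, predicted 0
tp = pairs.count((1, 1))  # C(1,1): true 1, predicted 1
print(tn, fp, fn, tp)  # 2 1 1 2
```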
As can be seen, the number of False Negatives is very high, which, depending on the business context, could be harmful. Precision, Recall and F1-score. Precision is the ability of the classifier not to label as positive a sample that is negative (i.e., a measure of result relevancy). $$ P = \frac{T_P}{T_P+F_P} $$ Recall is...
df_results['target'].value_counts(normalize=True)
Rather imbalanced! Approximately 83% of the labels are 0. Let's take a look at other metrics more appropriate for this type of dataset:
print('Precision score = %1.3f' % precision_score(df_results['target'], predicted_outcome))
print('Recall score = %1.3f' % recall_score(df_results['target'], predicted_outcome))
print('F1 score = %1.3f' % f1_score(df_results['target'], predicted_outcome))
As you can see, the results are actually not as good as the accuracy metric would suggest. Receiver Operating Characteristic (ROC) and Area Under the ROC (AUROC). The ROC is very common for binary classification problems. It is created by plotting the fraction of true positives out of the positives (TPR = true positive rate...
# Data to compute the ROC curve (FPR and TPR):
fpr, tpr, thresholds = roc_curve(df_results['target'], df_results['scores'])

# The Area Under the ROC curve:
roc_auc = roc_auc_score(df_results['target'], df_results['scores'])

# Plot ROC Curve
plt.figure(figsize=(8, 6))
lw = 2
plt.plot(fpr, tpr, color='orange', lw=lw, lab...
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize we...
first-neural-network/Your_first_neural_network.ipynb
gururajl/deep-learning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
iterations = 5500
learning_rate = 0.4
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random ...
Exercises Exercise: In NSFG Cycles 6 and 7, the variable cmdivorcx contains the date of divorce for the respondent’s first marriage, if applicable, encoded in century-months. Compute the duration of marriages that have ended in divorce, and the duration, so far, of marriages that are ongoing. Estimate the hazard and...
def CleanData(resp):
    """Cleans respondent data.

    resp: DataFrame
    """
    resp.cmdivorcx.replace([9998, 9999], np.nan, inplace=True)
    resp["notdivorced"] = resp.cmdivorcx.isnull().astype(int)
    resp["duration"] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0
    resp["durationsofar"] = (resp.cmintvw - resp.c...
solutions/chap13soln.ipynb
AllenDowney/ThinkStats2
gpl-3.0
In this equation:
- $\epsilon$ is the single particle energy.
- $\mu$ is the chemical potential, which is related to the total number of particles.
- $k$ is the Boltzmann constant.
- $T$ is the temperature in Kelvin.

In the cell below, typeset this equation using LaTeX: YOUR ANSWER HERE

Define a function fermidist(energy, mu...
def fermidist(energy, mu, kT):
    """Compute the Fermi distribution at energy, mu and kT."""
    # YOUR CODE HERE
    f = 1 / (np.exp((energy - mu) / kT) + 1)
    return f

assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0, 1.0, 10), 1.0, 10.0),
                   np.array([ 0.5249...
assignments/midterm/InteractEx06.ipynb
JackDi/phys202-2015-work
mit
Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT. Use energies over the range $[0,10.0]$ and a suitable number of points. Choose appropriate x and y limits for your visualization. Label your x and y axis and...
def plot_fermidist(mu, kT):
    # YOUR CODE HERE
    f = plt.figure(figsize=(9, 6))
    energy = np.linspace(0, 10, 100)
    plt.plot(energy, fermidist(energy, mu, kT), 'k')
    plt.xlim(0, 10)
    plt.ylim(0, 1)
    plt.tick_params(direction='out')
    plt.title('Fermi Dirac Distribution over a range of Energy Levels for Constant ...
Use interact with plot_fermidist to explore the distribution:
- For mu, use a floating point slider over the range $[0.0,5.0]$.
- For kT, use a floating point slider over the range $[0.1,10.0]$.
# YOUR CODE HERE
interact(plot_fermidist, mu=(0, 5, 0.1), kT=(0.1, 10, 0.1))
Make a population dynamics model. Basic parameters:
pop_size = 60
seq_length = 100
alphabet = ['A', 'T', 'G', 'C']
base_haplotype = "AAAAAAAAAA"
code/mutation-drift.ipynb
alvason/probability-insighter
gpl-2.0
Set up a population of sequences. Store this as a lightweight dictionary that maps a string to a count. All the sequences together will have count N.
pop = {}
pop["AAAAAAAAAA"] = 40
pop["AAATAAAAAA"] = 30
pop["AATTTAAAAA"] = 30
pop["AAATAAAAAA"]
Add mutation Mutations occur each generation in each individual in every basepair.
mutation_rate = 0.0001 # per gen per individual per site
Walk through the population and mutate basepairs. Use Poisson splitting to speed this up (you may be familiar with Poisson splitting from its use in the Gillespie algorithm). In the naive scenario A, we take each element and check for each one whether the event occurs. For example, with 100 elements, each with a 1% chance, this requires 100 random...
def get_mutation_count():
    mean = mutation_rate * pop_size * seq_length
    return np.random.poisson(mean)
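The contrast with the naive per-element approach can be sketched as follows (toy numbers; both approaches have the same expected event count per generation):

```python
import numpy as np

rng = np.random.default_rng(2017)
n_elements, p_event = 100, 0.01  # 100 elements, each with a 1% chance

# Naive scenario A: one uniform draw per element (100 random numbers)
naive_count = int(np.sum(rng.random(n_elements) < p_event))

# Poisson splitting: a single draw for the total number of events
split_count = int(rng.poisson(n_elements * p_event))

# Both counts have mean n_elements * p_event = 1 per generation
print(naive_count, split_count)
```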
Here we use Numpy's Poisson random number.
get_mutation_count()
We need to get a random haplotype from the population.
pop.keys()

[x / float(pop_size) for x in pop.values()]

def get_random_haplotype():
    haplotypes = list(pop.keys())
    frequencies = [x / float(pop_size) for x in pop.values()]
    total = sum(frequencies)
    frequencies = [x / total for x in frequencies]
    return np.random.choice(haplotypes, p=frequencies)
Here we use Numpy's weighted random choice.
get_random_haplotype()
Here, we take a supplied haplotype and mutate a site at random.
def get_mutant(haplotype):
    site = np.random.randint(seq_length)
    possible_mutations = list(alphabet)
    possible_mutations.remove(haplotype[site])
    mutation = np.random.choice(possible_mutations)
    new_haplotype = haplotype[:site] + mutation + haplotype[site+1:]
    return new_haplotype

get_mutant("AAAAAA...
Putting things together, in a single mutation event, we grab a random haplotype from the population, mutate it, decrement its count, and then check whether the mutant already exists in the population. If it does, increment this mutant haplotype's count; if it doesn't, create a new haplotype of count 1.
def mutation_event():
    haplotype = get_random_haplotype()
    if pop[haplotype] > 1:
        pop[haplotype] -= 1
        new_haplotype = get_mutant(haplotype)
        if new_haplotype in pop:
            pop[new_haplotype] += 1
        else:
            pop[new_haplotype] = 1

mutation_event()
pop
To create all the mutations that occur in a single generation, we draw the total count of mutations and then iteratively add mutation events.
def mutation_step():
    mutation_count = get_mutation_count()
    for i in range(mutation_count):
        mutation_event()

mutation_step()
pop
Add genetic drift Given a list of haplotype frequencies currently in the population, we can take a multinomial draw to get haplotype counts in the following generation.
def get_offspring_counts():
    haplotypes = pop.keys()
    frequencies = [x / float(pop_size) for x in pop.values()]
    return list(np.random.multinomial(pop_size, frequencies))
Here we use Numpy's multinomial random sample.
get_offspring_counts()
We then need to assign this new list of haplotype counts to the pop dictionary. To save memory and computation, if a haplotype goes to 0, we remove it entirely from the pop dictionary.
def offspring_step():
    counts = get_offspring_counts()
    # snapshot the keys with list(): we delete from pop while looping
    for (haplotype, count) in zip(list(pop.keys()), counts):
        if count > 0:
            pop[haplotype] = count
        else:
            del pop[haplotype]

offspring_step()
pop
Combine and iterate Each generation is simply a mutation step where a random number of mutations are thrown down, and an offspring step where haplotype counts are updated.
def time_step():
    mutation_step()
    offspring_step()
We can iterate this over a number of generations.
generations = 500

def simulate():
    for i in range(generations):
        time_step()

simulate()
pop
Record We want to keep a record of past population frequencies to understand dynamics through time. At each step in the simulation, we append to a history object.
pop = {"AAAAAAAAAA": pop_size}
history = []

def simulate():
    clone_pop = dict(pop)
    history.append(clone_pop)
    for i in range(generations):
        time_step()
        clone_pop = dict(pop)
        history.append(clone_pop)

simulate()
pop
history[0]
history[1]
history[2]
Analyze trajectories Calculate diversity Here, diversity in population genetics is usually shorthand for the statistic &pi;, which measures pairwise differences between random individuals in the population. &pi; is usually measured as substitutions per site.
pop
First, we need to calculate the number of differences per site between two arbitrary sequences.
def get_distance(seq_a, seq_b):
    diffs = 0
    length = len(seq_a)
    assert len(seq_a) == len(seq_b)
    for chr_a, chr_b in zip(seq_a, seq_b):
        if chr_a != chr_b:
            diffs += 1
    return diffs / float(length)

get_distance("AAAAAAAAAA", "AAAAAAAAAB")
We calculate diversity as a weighted average between all pairs of haplotypes, weighted by pairwise haplotype frequency.
def get_diversity(population):
    haplotypes = list(population.keys())
    haplotype_count = len(haplotypes)
    diversity = 0
    for i in range(haplotype_count):
        for j in range(haplotype_count):
            haplotype_a = haplotypes[i]
            haplotype_b = haplotypes[j]
            frequency_a = population[hap...
Plot diversity Here, we use matplotlib for all Python plotting.
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
Here, we make a simple line plot using matplotlib's plot function.
plt.plot(get_diversity_trajectory())
Here, we style the plot a bit with x and y axes labels.
def diversity_plot():
    mpl.rcParams['font.size'] = 14
    trajectory = get_diversity_trajectory()
    plt.plot(trajectory, "#447CCD")
    plt.ylabel("diversity")
    plt.xlabel("generation")

diversity_plot()
Analyze and plot divergence In population genetics, divergence is generally the number of substitutions away from a reference sequence. In this case, we can measure the average distance of the population to the starting haplotype. Again, this will be measured in terms of substitutions per site.
def get_divergence(population):
    haplotypes = population.keys()
    divergence = 0
    for haplotype in haplotypes:
        frequency = population[haplotype] / float(pop_size)
        divergence += frequency * get_distance(base_haplotype, haplotype)
    return divergence

def get_divergence_trajectory():
    traject...
Plot haplotype trajectories We also want to directly look at haplotype frequencies through time.
def get_frequency(haplotype, generation):
    pop_at_generation = history[generation]
    if haplotype in pop_at_generation:
        return pop_at_generation[haplotype] / float(pop_size)
    else:
        return 0

get_frequency("AAAAAAAAAA", 4)

def get_trajectory(haplotype):
    trajectory = [get_frequency(haplotype, g...
We want to plot all haplotypes seen during the simulation.
def get_all_haplotypes():
    haplotypes = set()
    for generation in history:
        for haplotype in generation:
            haplotypes.add(haplotype)
    return haplotypes

get_all_haplotypes()
Here is a simple plot of their overall frequencies.
haplotypes = get_all_haplotypes()
for haplotype in haplotypes:
    plt.plot(get_trajectory(haplotype))
plt.show()

colors = ["#781C86", "#571EA2", "#462EB9", "#3F47C9", "#3F63CF", "#447CCD",
          "#4C90C0", "#56A0AE", "#63AC9A", "#72B485", "#83BA70", "#96BD60",
          "#AABD52", "#BDBB48", "#CEB541", "#DCAB3C", "#E49938", "#E68133...
We can use stackplot to stack these trajectories on top of each other to get a better picture of what's going on.
def stacked_trajectory_plot(xlabel="generation"):
    mpl.rcParams['font.size'] = 18
    haplotypes = get_all_haplotypes()
    trajectories = [get_trajectory(haplotype) for haplotype in haplotypes]
    plt.stackplot(range(generations), trajectories, colors=colors_lighter)
    plt.ylim(0, 1)
    plt.ylabel("frequency")
    ...
Plot SNP trajectories
def get_snp_frequency(site, generation):
    minor_allele_frequency = 0.0
    pop_at_generation = history[generation]
    for haplotype in pop_at_generation.keys():
        allele = haplotype[site]
        frequency = pop_at_generation[haplotype] / float(pop_size)
        if allele != "A":
            minor_allele_freq...
Find all variable sites.
def get_all_snps():
    snps = set()
    for generation in history:
        for haplotype in generation:
            for site in range(seq_length):
                if haplotype[site] != "A":
                    snps.add(site)
    return snps

def snp_trajectory_plot(xlabel="generation"):
    mpl.rcParams['font.size'...
Scale up Here, we scale up to more interesting parameter values.
pop_size = 50
seq_length = 100
generations = 500
mutation_rate = 0.0001  # per gen per individual per site
And the population genetic parameter $\theta$, which equals $2N\mu$, is 1.
2 * pop_size * seq_length * mutation_rate

base_haplotype = ''.join(["A" for i in range(seq_length)])
pop.clear()
del history[:]
pop[base_haplotype] = pop_size
simulate()

plt.figure(num=None, figsize=(14, 14), dpi=80, facecolor='w', edgecolor='k')
plt.subplot2grid((3, 2), (0, 0), colspan=2)
stacked_trajectory_plot(xlab...
Create histograms. h1, h2, ... have their alternatives in physt.compat.dask. They should work similarly, although they are not complete and unexpected errors may occur.
from physt.compat.dask import h1 as d1
from physt.compat.dask import h2 as d2

# Use chunks to create a 1D histogram
ha = d1(chunked2, "fixed_width", bin_width=0.2)
check_ha = h1(million2, "fixed_width", bin_width=0.2)
ok = (ha == check_ha)
print("Check: ", ok)
ha.plot()
ha

# Use chunks to create a 2D histogram
hb = d...
doc/dask.ipynb
janpipek/physt
mit
Some timings. Your results may vary substantially. These numbers are just for illustration, on a 4-core (8-thread) machine. The real gain comes when we have data that don't fit into memory. Efficiency
# Standard
%time h1(million2, "fixed_width", bin_width=0.2)

# Same array, but using dask
%time d1(million2, "fixed_width", bin_width=0.2)

# Most efficient: dask with already chunked data
%time d1(chunked2, "fixed_width", bin_width=0.2)
Different scheduling
%time d1(chunked2, "fixed_width", bin_width=0.2)

%%time
# Hyper-threading or not?
graph, name = d1(chunked2, "fixed_width", bin_width=0.2, compute=False)
dask.threaded.get(graph, name, num_workers=4)

# Multiprocessing not so efficient for small arrays?
%time d1(chunked2, "fixed_width", bin_width=0.2, dask_method=dask...
High level API We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution. Layers: common sets of useful operations Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than ...
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)

# The number of input dimensions is often unnecessary, as it can be inferred
# the first time t...
tensorflow/contrib/eager/python/examples/notebooks/4_high_level.ipynb
gojira/tensorflow
apache-2.0
You need to select an intermediate value to distinguish between white and black. This will be the threshold you use: if the reading is above it, the sensor is detecting white; if it is below, black. Line following, version 1.0 <img src="img/line_circuit.jpg" width=240 align="right"> To follow the line, the idea is simple...
from functions import left, right

try:
    while True:
        if ___:
            ___
        else:
            ___
except KeyboardInterrupt:
    stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
Version 2.0. If the robot runs off the line on a curve, it needs to make a sharper turn. The functions we have used so far turn one wheel while the other stays stopped. The robot can turn more sharply if, instead of staying stopped, the other wheel turns in the opposite direction. To do this, use the following functions:
from functions import left_sharp, right_sharp

try:
    while True:
        if ___:
            ___
        else:
            ___
except KeyboardInterrupt:
    stop()
Version 3.0. In general, there can be curves to both the left and the right. In that case, the previous method is not enough. One solution would be to use two sensors, one on each side of the line, but we only have one per robot. Another solution is to make the sensor travel along the edge of the line, so that the surface will be half bl...
try: while True: if ___: ___ else: if ___: ___ else: ___ except KeyboardInterrupt: stop()
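A hardware-free sketch of the three-zone decision in version 3.0; the two threshold values are illustrative assumptions:

```python
BLACK_MAX = 30   # illustrative: readings below this are fully on the line
WHITE_MIN = 70   # illustrative: readings above this are fully off the line

def edge_action(reading):
    """Version 3.0 logic: follow the edge, where the surface is half black, half white."""
    if reading > WHITE_MIN:
        return 'turn toward line'
    else:
        if reading < BLACK_MAX:
            return 'turn away from line'
        else:
            return 'forward'  # on the grey edge: keep going straight

print(edge_action(85))
print(edge_action(15))
print(edge_action(50))
```

The middle zone is what lets the robot handle curves in both directions with a single sensor.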
task/light.ipynb
ecervera/mindstorms-nb
mit
Version 4.0 If the robot oscillates too much, it may even cross the line and become completely disoriented. The solution is to reduce the speed, passing it to the movement functions, for example: python forward(speed=65) The default value is 100, the maximum, and the minimum is 0.
try: while True: if ___: ___ else: if ___: ___ else: ___ except KeyboardInterrupt: stop()
task/light.ipynb
ecervera/mindstorms-nb
mit
Let's recap. This page introduced no new concepts, but we applied what we already knew to the problem of following the line. Next we'll look at the last sensor. Before continuing, disconnect the robot:
from functions import disconnect, next_notebook disconnect() next_notebook('ultrasonic')
task/light.ipynb
ecervera/mindstorms-nb
mit
2. Calculate the translational partition function of a CO molecule in the bottle at 298 K. What is the unit of the partition function? For $T \gg \theta_{trans}$, $\Lambda \ll L$, $q_{trans} = V/\Lambda^3$. $\Lambda = h \left(\frac{\beta}{2\pi m}\right)^{1/2}$.
Lamda = h*(1/(kB*298*2*np.pi*m))**0.5 print(Lamda) q_trans = V/Lamda**3 print('The translational partition function of a CO molecule in the bottle at 298 K is {:.4E}.'.format(q_trans)) print('It is dimensionless.')
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
3. Plot the rotational and vibrational partition functions of a CO molecule in the bottle from $T$ = 200 to 2000 K (assume the CO remains a gas over the whole range). Hint: Use your answer to Problem 1 to simplify calculating the rotational partition function. $q_{rot} = \frac{1}{\sigma}\frac{T}{\theta_{rot}} = \frac...
T = np.linspace(200,2000,1000) # r = R/a_0 q_rot = T/T_rot q_vib = 1/(1-np.exp(-T_vib/T)) plt.plot(T,q_rot) plt.xlabel('T (K)') plt.ylabel('$q_{rot}$') plt.title('The rotational partition function of a CO molecule') plt.show() plt.plot(T,q_vib) plt.xlabel('T (K)') plt.ylabel('$q_{vib}$') plt.title('The vibrational part...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
4. Plot the total translational, rotational, and vibrational energies of CO in the bottle from $T =$ 200 to 2000 K (assume the CO remains a gas over the whole range). Which (if any) of the three types of motions dominate the total energy? $U_{trans} = \frac{3}{2}RT$ $U_{rot} = RT$ $U_{vib} = R\frac{\theta_{vib}}{e^{\th...
R = 8.31447 # J/(mol*K) U_trans = 1.5*R*T U_rot = R*T U_vib = R*T_vib/(np.exp(T_vib/T)-1) plt.plot(T,U_trans,label='U_trans') plt.plot(T,U_rot,label='U_rot') plt.plot(T,U_vib,label='U_vib') plt.legend() plt.xlabel('T (K)') plt.ylabel('Internal Energy (J/mol)') plt.title('Internal energies of CO in the bottle') plt.show...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
5. Plot the total translational, rotational, and vibrational constant volume molar heat capacities of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the heat capacity? $C_{V,trans} = \frac{3}{2}R$ $C_{V,rot} = R$ $C_{V,vib} = R\left(\frac{\theta_{vib}}{T}\frac{e^{\theta...
Cv_trans = np.linspace(1.5*R,1.5*R,1000) Cv_rot = np.linspace(R,R,1000) Cv_vib = R*(T_vib/T*np.exp(T_vib/2./T)/(np.exp(T_vib/T)-1))**2 plt.plot(T,Cv_trans,label='Cv_trans') plt.plot(T,Cv_rot,label='Cv_rot') plt.plot(T,Cv_vib,label='Cv_vib') plt.legend() plt.xlabel('T (K)') plt.ylabel('Heat Capacity (J/mol/K)') plt.titl...
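As a sanity check on the vibrational formula above, $C_{V,vib}$ approaches $R$ at high temperature and vanishes at low temperature. A minimal sketch, using an approximate vibrational temperature for CO (an assumption here, roughly $hc\tilde{\nu}/k_B$ for $\tilde{\nu} \approx 2170$ cm$^{-1}$):

```python
import numpy as np

R = 8.31447      # J/(mol K)
T_vib = 3122.0   # K, approximate CO vibrational temperature (illustrative)

def Cv_vib(T):
    # R * (theta/T)^2 * e^(theta/T) / (e^(theta/T) - 1)^2, written as in the solution code
    x = T_vib / T
    return R * (x * np.exp(x / 2) / (np.exp(x) - 1))**2

print(Cv_vib(100 * T_vib))   # close to R (equipartition limit)
print(Cv_vib(0.05 * T_vib))  # essentially zero (mode frozen out)
```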
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
6. Plot the total translational, rotational, and vibrational Helmholtz energies of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the Helmholtz energy? $A = U - TS$ $S_{trans} = Rln\left(\frac{e^{5/2}V}{N\Lambda^3}\right)$ $S_{rot} = R(1-ln(\theta_{rot}/T))$ $S_{vib} = ...
NA = 6.022e23 S_trans = R*np.log(np.exp(2.5)*V/NA/Lamda**3) S_rot = R*(1-np.log(T_rot/T)) S_vib = R*(T_vib/T/(np.exp(T_vib/T)-1)-np.log(1-np.exp(-T_vib/T))) A_trans = U_trans-T*S_trans A_rot = U_rot-T*S_rot A_vib = U_vib-T*S_vib plt.plot(T,A_trans,label='A_trans') plt.plot(T,A_rot,label='A_rot') plt.plot(T,A_vib,label=...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
7. Use your formulas to calculate $\Delta P$, $\Delta U$, $\Delta A$, and $\Delta S$ associated with isothermally expanding the gas from 20 dm$^3$ to 40 dm$^3$. T = 298 K. $\Delta U=0$. $\Delta P = \frac{RT}{V_2} - \frac{RT}{V_1}$. $\Delta S = S_{trans,2} - S_{trans,1}$. $A = U-TS$, so, $\Delta A = -T\Delta S$.
V2 = 0.04 # m^3 deltaP = R*298*(1/V2-1/V) deltaS = R*np.log(np.exp(2.5)*V2/NA/Lamda**3) - R*np.log(np.exp(2.5)*V/NA/Lamda**3) deltaA = -deltaS*298 print('Delta P = {0:.3f} Pa, Delta U = 0, Delta A = {1:.3f} J/mol, and Delta S = {2:.3f} J/mol/K.'.format(deltaP,deltaA,deltaS))
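Because $\Lambda$ and $T$ are unchanged, the entropy expression above collapses to $\Delta S = R\ln(V_2/V_1)$, so doubling the volume gives $R\ln 2$ regardless of the gas. A quick check:

```python
import numpy as np

R = 8.31447          # J/(mol K)
V1, V2 = 0.02, 0.04  # m^3, the 20 and 40 dm^3 states

dS = R * np.log(V2 / V1)  # J/(mol K); the Lambda terms cancel
dA = -298 * dS            # J/mol; isothermal and dU = 0, so dA = -T dS
print(dS, dA)
```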
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Reactions from scratch In 1996, Schneider and co-workers used quantum chemistry to compute the reaction pathway for unimolecular decomposition of trifluoromethanol, a reaction of relevance to the atmospheric degradation of hydrofluorocarbon refrigerants (J. Phys. Chem. 1996, 100, 6097-6103, doi:10.1021/jp952703m): $$\...
import numpy as np T = 298 # K k = 1.38065e-23 # J/K R = 8.31447 # J/(mol*K) Na = 6.0221e23 # 1/mol c = 6.0221e26 # 1/m^3, conversion factor: 1 mol/L = 6.02e26 particles/m^3 autokJ = 2625.50 # hartree -> kJ/mol Eelec = [-412.90047 ,-312.57028 ,-100.31885 ] # hartree (converted to kJ/mol via autokJ) ZPE = [0.02889 ,0.01422 ,0.00925 ] # hartree dE0 = ((Eelec[1] + ZPE[1] ...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
9. Using the data provided, determine $\Delta A^{\circ}$(298 K) in kJ mol$^{-1}$, assuming ideal behavior and 1 M standard state. Recall that $A^\circ=E^\text{elec} + \text{ZPE}-RT\ln(q^\circ)-RT$ and that $q^\circ =(q^\text{trans}/V)q^\text{rot}q^\text{vib}/c^\circ$ in units corresponding with the standard state. $\...
q_trans = [7.72e32/c,1.59e32/c,8.65e31/c] # change translational partition functions from 1/m3 to mol/l std state q_rot = [61830,679,9.59] # unitless q_vib = [2.33,1.16,1] # unitless Q = (q_trans[1]*q_rot[1]*q_vib[1])*(q_trans[2]*q_rot[2]*q_vib[2])/(q_trans[0]*q_rot[0]*q_vib[0]) # total partition dA = dE0 + (-R*T*...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
10. Determine $\Delta G^\circ$(298 K). Recall that $G = A + PV = A + RT$ for an ideal gas. $\Delta G^{\circ} = \Delta A^{\circ} + \Delta(PV) = \Delta A^{\circ} + RT$
dG = dA + R*T/1000 #kJ/mol print("delta_G = %.2f kJ/mol"%dG)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
11. Determine $\Delta S^{\circ}$(298 K), in J mol$^{-1}$ K$^{-1}$ , assuming a 1 M standard state. Recall that $S = (U - A)/T$. $\Delta A^{\circ} = \Delta U^{\circ} - T\Delta S^{\circ}$ $\Delta S^{\circ} = \frac{\Delta U^{\circ} - \Delta A^{\circ}}{T}$
dS = 1000*(dU - dA)/T print("delta_S = %.2f J/mol K"%dS)
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
12. Using the data provided, determine $K_c$ (298 K), assuming a 1 M standard state. You may determine it either from partition functions or from the relationship between $K_c$ and $\Delta G^\circ$. $\mathrm{A} \rightarrow \mathrm{B + C}$$$K_c(T) = \frac{\frac{q_{\mathrm{B}}}{V}\frac{q_{\mathrm{C}}}{V}}{\frac{q_{\mathrm{...
Kc = Q *np.exp(-dE0*1000/(R*T)) # Kc equation from lecture notes print('Kc = %.3f (unitless). '%(Kc)) Kc = np.exp(-dG*1000/(R*T)) print('Kc = %.3f (unitless). '%(Kc))
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
13. 1 mole of CF$_3$OH is generated in a 20 L vessel at 298 K and left long enough to come to equilibrium with respect to its decomposition reaction. What is the composition of the gas (concentrations of all the components) at equilibrium (in mol/L)? 1 mol/ 20 L = 0.05 mol/L $\mathrm{A} \rightarrow \mathrm{B + C}$ $K_c...
from sympy import * x = symbols('x',positive=True) c = solve(x**2-(0.05-x)*Kc,x) print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.05-c[0],c[0],c[0])) print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF = %.5f mol.'%((0.05-c[0])*20,c[0]*20,c[0]*20))
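The same equilibrium can be solved in closed form without sympy: for $x^2 + K_c x - K_c c_0 = 0$, the positive root is $x = \frac{-K_c + \sqrt{K_c^2 + 4K_c c_0}}{2}$. A sketch:

```python
import numpy as np

Kc = 2.926   # equilibrium constant from Question 12
c0 = 0.05    # mol/L, initial CF3OH concentration

# Positive root of x**2 + Kc*x - Kc*c0 = 0 via the quadratic formula
x = (-Kc + np.sqrt(Kc**2 + 4 * Kc * c0)) / 2
print(x, c0 - x)  # [COF2] = [HF], and the residual [CF3OH], in mol/L
```

The root confirms the sympy result: the reaction goes nearly to completion at this dilution.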
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
14. How, directionally, would your answer to Question 13 change if the vessel were at a higher temperature? Use the van 't Hoff relationship to determine the equilibrium constant and equilibrium concentrations at 273 and 323 K. How good was your guess? From Question 8, we know that at 298 K, $\Delta U$ = 18.64 kJ/mol. ...
dn = 2-1 R = 8.314/1000 #kJ/mol K T = 298 #K dH = dU+dn*R*T #kJ/mol print("dH =",round(dH,3),"kJ/mol")
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Since $\Delta H$ is positive, we expect $K$ to increase as $T$ increases. The van 't Hoff equation states: $\ln\frac{K(T_2)}{K(T_1)}=-\frac{\Delta H^\circ}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$ We know from Question 12 that K = 2.926 at 298 K.
K1 = 2.926 T1 = 298 #K T2 = 273 #K K2 = K1*np.exp(-dH/R*(1/T2-1/T1)) print('K=', round(K2,4), 'at 273 K.') x = symbols('x',positive=True) c = solve(x**2-(0.05-x)*K2,x) print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.05-c[0],c[0],c[0])) K1 = 2.926 T1 = 298 #K T2 = 323 #K K2 = K1*np.ex...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
Therefore, at higher temperatures, the reaction shifts toward the products. 15. How, directionally, would your answer to Question 13 change if the vessel had a volume of 5 L? Redo the calculation at this volume to verify your guess. If V = 5 L, the initial concentration is $\frac{1\ \mathrm{mol}}{5\ \mathrm{L}} = 0.2\ \mathrm{M}$ $K_c = \frac{x^2}{...
T = 298 #K R = 8.314 Kc = np.exp(-dG*1000/(R*T)) print('At 298 K, Kc = %.3f (unitless). '%(Kc)) x = symbols('x',positive=True) c = solve(x**2-(0.2-x)*Kc,x) print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.2-c[0],c[0],c[0])) print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF =...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
16. Consult a thermodynamics source (e.g. https://webbook.nist.gov/chemistry/) to determine $\Delta H^\circ$(298 K), $\Delta S^\circ$(298 K), and $\Delta G^\circ$(298 K) for the homologous reaction CH$_3$OH (g)$\rightarrow$ H$_2$O (g) + H$_2$CO (g). Does the substitution of F by H make the reaction more or less favora...
T = 298 #K #All values were taken from NIST #Methanol Hm = -205 #kJ/mol Sm = .2399 #kJ/mol K Gm = Hm - T*Sm #kJ/mol #Hydrogen Hh = 0 Sh = .13068 #kJ/mol K Gh = Hh - T*Sh #kJ/mol #Formaldehyde Hf = -108.6 #kJ/mol Sf = .21895 #kJ/mol K Gf = Hf - T*Sf #kJ/mol delta_H = Hf+Hh-Hm #kJ/mol delta_S = Sf+Sh-Sm #kJ/mol K del...
Homework/HW10-soln.ipynb
wmfschneider/CHE30324
gpl-3.0
This notebook mainly checks the buffer built from the Entrances .shp file (previously) against the one built from GeoJSON (.json) in the most recent Dumbo-Can-Run scripts. Load
from shapely.geometry import Point import pyproj import geopandas as gpd proj = pyproj.Proj(init='epsg:2263', preserve_units=True) entr_points = sqlContext.read.load('../why_yellow_taxi/Data/2016_(May)_New_York_City_Subway_Station_Entrances.json', \ format='json', header=True, inferSche...
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
List
entr_geo.head(2) shp.head(2)
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Identical or Not?
entr_geo.head(2).geometry[1] == shp.head(2).geometry[1]
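When comparing geometries from two sources, exact `==` comparison can fail on floating-point noise even when the points are effectively identical; a tolerance-based comparison of the centroid coordinates is more forgiving. A pure-Python sketch with illustrative coordinates (not values from the actual data):

```python
from math import isclose

# Hypothetical centroid coordinates from the .shp and GeoJSON sources
shp_xy = (585930.122, 4509020.456)
geo_xy = (585930.122, 4509020.456)

# Compare x and y with an absolute tolerance instead of exact equality
same = all(isclose(a, b, abs_tol=1e-6) for a, b in zip(shp_xy, geo_xy))
print(same)
```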
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Detail
shp.head(2).geometry[0].centroid.x shp.head(2).geometry[0].centroid.y entr_geo.head(2).geometry[0].centroid.x entr_geo.head(2).geometry[0].centroid.y
Buffer/Use_Geojson2build_Buffer_Not_Shapefile.ipynb
djfan/why_yellow_taxi
mit
Unit Test
%run ../utils/results.py # %load test_dfs.py from nose.tools import assert_equal class TestDfs(object): def __init__(self): self.results = Results() def test_dfs(self): node = Node(5) insert(node, 2) insert(node, 8) insert(node, 1) insert(node, 3) in...
interactive-coding-challenges/graphs_trees/tree_dfs/dfs_challenge.ipynb
saashimi/code_guild
mit
Because a full-market backtest is needed, this chapter cannot use the sandbox data. The original examples in 《量化交易之路》 use the US stock market; the examples here use the A-share market instead. It is recommended to read sections 20-23 of the abu documentation alongside this section. This section assumes that the A-share training-set and test-set trade data from section 20 of the abu documentation have already been generated. abu quant system GitHub address (your star is my motivation!) abu quant documentation ipython notebook. Chapter 11: Quant system - Machine Learning • ABU
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul...
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.1 Search engines and quantitative trading. It is recommended to read this section alongside section 15 of the abu documentation.
orders_pd_train = abu_result_tuple_train.orders_pd # select the first 20 losing trades and plot their trade snapshots # this is just an example; in practice pick by rank or some other criterion plot_simple = orders_pd_train[orders_pd_train.profit_cg < 0][:20] # save=True saves locally; files go to ~/abu/data/save_png/ ABuMarketDrawing.plot_candle_from_order(plot_simple, save=True)
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2 Main umpires. 11.2.1 The angle main umpire. Please read this alongside the relevant content in section 15 of the ABU documentation.
from abupy import AbuUmpMainDeg # the argument is orders_pd ump_deg = AbuUmpMainDeg(orders_pd_train) # df is the DataFrame-like object generated earlier by ump_main_make_xy, shown in Table 11-1 ump_deg.fiter.df.head()
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
This is a time-consuming operation: roughly a few minutes on a fast machine, depending on hardware and CPU count. Training is launched in multiple processes:
_ = ump_deg.fit(brust_min=False) ump_deg.cprs max_failed_cluster = ump_deg.cprs.loc[ump_deg.cprs.lrs.argmax()] print('Cluster with the highest failure probability: {0}, failure rate {1:.2f}%, total trades in cluster {2}, ' \ 'mean profit per trade in cluster {3:.2f}%'.format(ump_deg.cprs.lrs.argmax(), max_failed_cluster.lrs * 100, max_fai...
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
Because this is not the same sandbox data, the results below will not match the analysis in the book and should be interpreted from the actual output. For example, in the features below the 42-day and 60-day deg values are unusually large, and the 21- and 252-day values are also well above the training-set averages:
from abupy import ml ml.show_orders_hist(max_failed_cluster_orders, ['buy_deg_ang21', 'buy_deg_ang42', 'buy_deg_ang60','buy_deg_ang252']) print('mean deg_ang60 in the cluster: {0:.2f}'.format( max_failed_cluster_orders.buy_deg_ang60.mean())) print('mean deg_ang21 in the cluster: {0:.2f}'.format( max_failed_cluster_orders.buy_deg_ang21.m...
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
The trade snapshot files are saved in ~/abu/data/save_png/; the cell below opens the corresponding directory: save_png
if abupy.env.g_is_mac_os: !open $abupy.env.g_project_data_dir else: !echo $abupy.env.g_project_data_dir
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2.2 Filtering the set of classification clusters with the global optimum
brust_min = ump_deg.brust_min() brust_min llps = ump_deg.cprs[(ump_deg.cprs['lps'] <= brust_min[0]) & (ump_deg.cprs['lms'] <= brust_min[1] )& (ump_deg.cprs['lrs'] >=brust_min[2])] llps ump_deg.choose_cprs_component(llps) ump_deg.dump_clf(llps)
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0
11.2.3 The gap main umpire. Please read this alongside the relevant content in section 16 of the ABU documentation, 'UMP main-umpire trade decisions'.
from abupy import AbuUmpMainJump # time-consuming operation, roughly ten-odd minutes depending on hardware and CPU count ump_jump = AbuUmpMainJump.ump_main_clf_dump(orders_pd_train, save_order=False) ump_jump.fiter.df.head()
ipython/第十一章-量化系统——机器学习•ABU.ipynb
bbfamily/abu
gpl-3.0