<p class="normal"> &raquo; <b>Causality:</b> <i>a system whose output depends on past and present values of input.</i></p> <p class="normal"> &raquo; <b>Time invariance:</b> <i>system remains the same with time.</i></p> <p class="normal">If a system is time invariant, then if </p> <div class="formula">$$ x(t) \mapsto y(t) \implies x(t-\tau) \mapsto y(t-\tau)$$</div> <p class="normal"> &raquo; <b>Stability:</b> <i>bounded input produces bounded output</i></p> <div class="formula">$$ \left|x(t)\right| < M_x < \infty \mapsto \left|y(t)\right| < M_y < \infty $$</div> <p class="normal"> &raquo; <b>Invertibility:</b> <i>input can be recovered from the output</i></p>
%pylab inline
import seaborn as sb

# Functions to generate plots for the different sections.
def signal_examples():
    t = np.arange(-5, 5, 0.01)
    s1 = 1.23 * (t ** 2) - 5.11 * t + 41.5
    x, y = np.arange(-2.0, 2.0, 0.01), np.arange(-2.0, 2.0, 0.01)
    fig = figure(figsize=(17, 7))
    subplot(121)
    plot(t, s1)
    xlabel('Time', fontsize=20)
    xlim(-5, 5)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$1.23t^2 - 5.11t + 41.5$", fontsize=30)
    subplot(122)
    s = np.array([[np.exp(-(_x**2 + _y**2 + 0.5*_x*_y)) for _y in y] for _x in x])
    X, Y = meshgrid(x, y)
    contourf(X, Y, s)
    xticks(fontsize=25)
    yticks(fontsize=25)
    title("$s(x, y) = e^{-(x^2 + y^2 + 0.5xy)}$", fontsize=30)
    savefig("img/signals.svg", format="svg")

def memory():
    dt = 0.01
    N = int(np.round(0.5 / dt))  # number of samples in the 0.5 s window (must be an int for slicing)
    t = np.arange(-1.0, 5.0, dt)
    x = 1.0 * np.array([t >= 1.0, t < 3.0]).all(0)
    # memoryless system
    y1 = 0.5 * x
    # system with memory
    y2 = np.zeros(len(x))
    for i in range(len(y2)):  # xrange in the Python 2 original
        y2[i] = np.sum(x[max(0, i - N):i]) * dt
    figure(figsize=(17, 4))
    plot(t, x, lw=2, label="$x(t)$")
    plot(t, y1, lw=2, label="$0.5x(t)$")
    plot(t, y2, lw=2, label=r"$\int_{t-0.5}^{t}x(p)dp$")
    xlim(-1, 5)
    ylim(-0.1, 1.1)
    xlabel('Time', fontsize=15)
    legend(prop={'size': 20})

from IPython.core.display import HTML
def css_styling():
    styles = open("../../styles/custom_aero.css", "r").read()
    return HTML(styles)
css_styling()
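These properties can also be checked numerically. A minimal sketch (my addition, not from the lecture) verifying time invariance and stability for the memoryless system y(t) = 0.5 x(t):

```python
import numpy as np

# Hypothetical check, not from the lecture notebook: the memoryless system
# y(t) = 0.5 x(t) is time invariant (shifting the input shifts the output
# identically) and stable (a bounded input produces a bounded output).
def system(x):
    return 0.5 * x

t = np.arange(-1.0, 5.0, 0.01)
x_sig = 1.0 * ((t >= 1.0) & (t < 3.0))      # bounded pulse, |x| <= M_x = 1

k = 50                                       # shift by 50 samples (0.5 s)
y_of_shifted_x = system(np.roll(x_sig, k))   # x(t - tau) -> system
shifted_y = np.roll(system(x_sig), k)        # system -> shift by tau

print(np.allclose(y_of_shifted_x, shifted_y))   # time invariance holds
print(np.abs(system(x_sig)).max() <= 0.5)       # output bounded by M_y = 0.5
```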
lectures/lecture_1/.ipynb_checkpoints/lecture_1-checkpoint.ipynb
siva82kb/intro_to_signal_processing
mit
Open and read data from the DEM Change the path name below to reflect your particular computer, then run the cell.
from osgeo import gdal  # GDAL import (presumably done earlier in the full notebook)

betasso_dem_name = '/Users/gtucker/Dev/dem_analysis_with_gdal/czo_1m_bt1.img'
geo = gdal.Open(betasso_dem_name)
zb = geo.ReadAsArray()
dem_processing_with_gdal_python.ipynb
gregtucker/dem_analysis_with_gdal
mit
If the previous two lines worked, zb should be a 2D numpy array that contains the DEM elevations. There are some cells along the edge of the grid with invalid data. Let's set their elevations to zero, using the numpy where function:
zb[np.where(zb<0.0)[0],np.where(zb<0.0)[1]] = 0.0
Now let's make a color image of the data. To do this, we'll need Pylab and a little "magic".
import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(zb, vmin=1600.0, vmax=2350.0)
Questions: (Note: to answer the following, open Google Earth and enter Betasso Preserve in the search bar. Zoom out a bit to view the area around Betasso) (1) Use a screen shot to place a copy of this image in your lab document. Label Boulder Creek Canyon and draw an arrow to show its flow direction. (2) Indicate and label the confluence of Fourmile Creek and Boulder Canyon. (3) What is the mean altitude? What is the maximum altitude? (Hint: see numpy functions mean and amax)
np.amax(zb)
Make a slope map Use the numpy gradient function to make an image of absolute maximum slope angle at each cell:
def slope_gradient(z):
    """Calculate the absolute slope gradient from an elevation array."""
    x, y = np.gradient(z)
    #slope = (np.pi/2. - np.arctan(np.sqrt(x*x + y*y)))
    slope = np.sqrt(x * x + y * y)
    return slope

sb = slope_gradient(zb)
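As a quick sanity check (my addition), slope_gradient should return a constant on a planar ramp whose elevation rises linearly along one axis:

```python
import numpy as np

# On a plane rising 3 units per cell along the row axis, np.gradient
# recovers that slope exactly (even at the one-sided edges), so the
# gradient magnitude is 3.0 everywhere.
def slope_gradient(z):
    x, y = np.gradient(z)
    return np.sqrt(x * x + y * y)

z_plane = 3.0 * np.arange(5.0).reshape(5, 1) * np.ones((1, 5))  # z = 3 * row
print(np.allclose(slope_gradient(z_plane), 3.0))  # True
```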
Let's see what it looks like:
plt.imshow(sb, vmin=0.0, vmax=1.0, cmap='pink')
Questions: (1) Place a copy of this image in your lab document. Identify and label the Betasso Water Treatment plant. (2) How many degrees are in a slope gradient of 1.0 (or 100%)? (3) What areas have the steepest slopes? What areas have the gentlest slopes? What do you think the distribution of slopes might indicate about the distribution of erosion rates within this area? (4) What is the median slope gradient? What is this gradient in degrees? (Hint: numpy has a median function) Make a map of slope aspect
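For questions (2) and (4), slope gradients convert to degrees via the arctangent. A worked example (my addition):

```python
import numpy as np

# Slope gradient is rise over run; the corresponding angle in degrees is
# the arctangent of that ratio.
def gradient_to_degrees(s):
    return np.degrees(np.arctan(s))

print(gradient_to_degrees(1.0))   # 45.0 -> a 100% grade is a 45-degree slope
print(gradient_to_degrees(0.5))   # roughly 26.6 degrees
```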
def aspect(z):
    """Calculate aspect from DEM."""
    x, y = np.gradient(z)
    return np.arctan2(-x, y)

ab = aspect(zb)
plt.imshow(ab)
We can make a histogram (frequency diagram) of aspect. Here 0 degrees is east-facing, 90 is north-facing, 180 is west-facing, and -90 is south-facing.
abdeg = (180. / np.pi) * ab  # convert to degrees
n, bins, patches = plt.hist(abdeg.flatten(), 50, density=True,  # normed=1 in older matplotlib
                            facecolor='green', alpha=0.75)
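The angle convention can be sanity-checked on a synthetic ramp (my addition): elevation increasing along the column axis gives an aspect of 0 everywhere, the direction the histogram labels east-facing.

```python
import numpy as np

# Same formula as the aspect() function above, applied to a ramp whose
# elevation grows with column index; the axis-0 gradient is 0 and the
# axis-1 gradient is 1, so arctan2(-0, 1) = 0 degrees everywhere.
def aspect(z):
    x, y = np.gradient(z)
    return np.arctan2(-x, y)

z_ramp = np.tile(np.arange(5.0), (5, 1))   # z increases with column index
abdeg_ramp = np.degrees(aspect(z_ramp))
print(np.allclose(abdeg_ramp, 0.0))         # True
```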
Questions: (1) Place a copy of this image in your lab notes. (2) Compare the aspect map to imagery in Google Earth. Is there any correlation between aspect and vegetation? If so, what does it look like? (3) What is the most common aspect? (N, NE, E, SE, S, SW, W, or NW) Shaded relief Create a shaded relief image
def hillshade(z, azimuth=315.0, angle_altitude=45.0):
    """Generate a hillshade image from DEM.

    Notes: adapted from example on GeoExamples blog,
    published March 24, 2014, by Roger Veciana i Rovira.
    """
    x, y = np.gradient(z)
    slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))
    aspect = np.arctan2(-x, y)
    azimuthrad = azimuth*np.pi / 180.
    altituderad = angle_altitude*np.pi / 180.
    shaded = np.sin(altituderad) * np.sin(slope)\
        + np.cos(altituderad) * np.cos(slope)\
        * np.cos(azimuthrad - aspect)
    return 255*(shaded + 1)/2

hb = hillshade(zb)
plt.imshow(hb, cmap='gray')
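One property worth noting (my addition): the illumination term `shaded` lies in [-1, 1], so 255*(shaded + 1)/2 maps every cell into the 8-bit-style display range [0, 255]. A quick check on random terrain:

```python
import numpy as np

# Same hillshade formula as above; |shaded| <= 1 because it is a sum of
# products of sines and cosines bounded by cos(altitude - slope) <= 1,
# so the returned values always fall in [0, 255].
def hillshade(z, azimuth=315.0, angle_altitude=45.0):
    x, y = np.gradient(z)
    slope = np.pi / 2.0 - np.arctan(np.sqrt(x * x + y * y))
    aspect = np.arctan2(-x, y)
    azimuthrad = azimuth * np.pi / 180.0
    altituderad = angle_altitude * np.pi / 180.0
    shaded = (np.sin(altituderad) * np.sin(slope)
              + np.cos(altituderad) * np.cos(slope)
              * np.cos(azimuthrad - aspect))
    return 255 * (shaded + 1) / 2

rng = np.random.default_rng(0)
h_demo = hillshade(rng.standard_normal((20, 20)))
print((h_demo >= 0).all() and (h_demo <= 255).all())  # True
```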
Setting up the Data A large dataset sampled from two Gaussians. One is centered at 0 and one is centered at 1. Let's look at a distribution of the 0 and 1 categories.
nsamples = 300000
X_train = np.zeros(shape=(nsamples, 1))
y_train = np.zeros(shape=(nsamples))
X_train[0:nsamples//2, :] = np.random.randn(nsamples//2, 1)
y_train[0:nsamples//2] = 0
X_train[nsamples//2:nsamples, :] = np.random.randn(nsamples//2, 1) + 1.0
y_train[nsamples//2:nsamples] = 1

_ = plt.hist(X_train[0:nsamples//2, :], bins=200, range=(-10, 10),
             label="Class 0", alpha=0.5, color='blue')
_ = plt.hist(X_train[nsamples//2:nsamples, :], bins=200, range=(-10, 10),
             label="Class 1", alpha=0.5, color='black')
plt.legend(loc='upper left')
cost-interpretation/train-and-verify.ipynb
jrpretz/scratch
gpl-3.0
The model This is an over-complicated model. The reason for the complexity is that I am looking at the detailed probabilities that come out, not just the 0/1 classification and I want to make sure that I have a model that's nuanced enough to capture the details. Note in fitting the model, the data is not randomly sampled. The first half is the 0s and the second half is the 1s. I'm not batch training, though, so it's fine.
X_input = keras.layers.Input((1,))
layer1 = keras.layers.Dense(20, activation='relu')
X = layer1(X_input)
layer2 = keras.layers.Dense(20, activation='relu')
X = layer2(X)  # was layer2(X_input), which silently bypassed layer1
layer3 = keras.layers.Dense(20, activation='relu')
X = layer3(X)  # was layer3(X_input), which silently bypassed layer2
layer4 = keras.layers.Dense(1, activation='sigmoid')
X = layer4(X)

adam = keras.optimizers.Adam(lr=0.1, beta_1=0.9, beta_2=0.999,
                             epsilon=None, decay=0.0, amsgrad=False)
model = keras.models.Model(inputs=X_input, outputs=X, name='model')
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x=X_train, y=y_train, batch_size=nsamples, epochs=50, verbose=0)
Ideal NN output The NN should compute the probability that a given example is of class 0 or 1. But for this dataset, I know those probabilities precisely; the two populations are just drawn from Gaussians with different means. So here, we get the probabilities as a function of x both from the NN and computed exactly. They agree well.
# probability vs x
X_test = np.linspace(-10, 10, 10000).reshape(10000, 1)
pred = model.predict(X_test)

# I can compute the probability exactly and compare to the predicted
# prob from the model
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(X_test, pred, label="NN-Predicted Probability")
g1 = np.exp(-X_test*X_test/2.)
g2 = np.exp(-(X_test-1)*(X_test-1)/2.)
ax.plot(X_test, g2/(g1+g2), label="True Probability")
ax.set_xlabel("X")
ax.set_ylabel("Prob to be in class 1")
ax.legend(loc='upper left')
#plt.savefig("prob.png")
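The "True Probability" curve follows from Bayes' rule with equal priors: P(class 1 | x) = g2/(g1 + g2). A worked spot-check (my addition): at x = 0.5, exactly halfway between the two means, symmetry forces the probability to be exactly 0.5.

```python
import numpy as np

# At x = 0.5 the two unnormalised Gaussian densities are equal
# (both exponents are -0.125), so the posterior is exactly one half.
x = 0.5
g1 = np.exp(-x * x / 2.0)                 # density of N(0, 1), up to a constant
g2 = np.exp(-(x - 1) * (x - 1) / 2.0)     # density of N(1, 1), up to a constant
p1 = g2 / (g1 + g2)
print(p1)                                  # 0.5
```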
Import pyhparser And one useful function that comes with it (readFile)
from pyhparser import Pyhparser, readFile
examples/Pyhparser.ipynb
msramalho/pyhparser
mit
Hello World Reading an int into a variable n that can later be used
inputVar = "10"            # if the data is in a file, use readFile("input.txt")
parserVar = "(int, n)"     # if the parser is in a file, use readFile("parser.txt")
p = Pyhparser(inputVar, parserVar)  # create a pyhparser instance
p.parse()                           # execute the parsing
p.printRecursive(p.parserRead)      # display the parsed grammar hierarchy
tempGlobals = p.getVariables()      # get the values of the created variables
for key, value in tempGlobals.items():
    globals()[key] = value          # make the created vars accessible from every scope
print(n)  # print the value of n, the parsed variable
Multiple data types example Requires: - Input data - Parser format - Python code that invokes Pyhparser
inputText = """ 3 Bartholomew JoJo Simpson 101 120 5455 Andrew American Bernard Bolivian Carl Canadian 10 11 12 20 21 22 30 31 32 69 lol 169 lel 333 threeHundredAndThirtyThree 666 sixHundredAndSixtySix this is my first tuple 5 3 limited to three 2 only two 2 two more 6 what a big old string list 8 this sentence is the biggest of them all 10.23 55 3.141592653 70 1300 veryNice 3.141592653 70 4 string with four words """ parserText = """ (int, n) (str, name, {n}) [list, {n}, (int), myInts] [list, {n}, (str, 2), Names] [list, {n}, [list, {n}, (int)] ,Numbers] {(int), (str), myDicts, 2} [list, 2, {(int), (str)}, myDictsList] [tuple, 5, (str), myTuple] (int, total) [list, {total}, {(int, sizeOfLine), (str, {sizeOfLine})}, sizesList] #classes [class, Complex, {realpart: (float), imagpart: (int)}, cn1] [class, Complex, {realpart: (float), imagpart: (int), special: {(int), (str)}}, cn2] [class, Complex, {realpart: (float), imagpart: (int), special: {(int, myInt), (str, {myInt})}}, cn3] """
Define the Complex class used to parse the data
class Complex:
    def __init__(self, realpart, imagpart, special="not special"):
        self.realpart = realpart
        self.imagpart = imagpart
        self.special = special

    def __str__(self):
        return ("%s Is the special of %di + %d"
                % (self.special, self.realpart, self.imagpart))

p = Pyhparser(inputText, parserText, [Complex])  # create a pyhparser instance
p.parse()                                        # execute the parsing
# p.printRecursive(p.parserRead)                 # display the parsed grammar hierarchy
tempGlobals = p.getVariables()                   # get the values of the created variables
for key, value in tempGlobals.items():
    globals()[key] = value                       # make the created vars accessible from every scope
print(cn3)
Build the first model In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature. To train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  estimator = tf.estimator.LinearRegressor(
                       model_dir = output_dir,
                       feature_columns = [tf.feature_column.numeric_column('num_rooms')])

  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels, pred_values)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)

  train_spec = tf.estimator.TrainSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
                                              y = traindf["median_house_value"],  # not scaled yet
                                              num_epochs = None,
                                              shuffle = True),
                       max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
                                              y = evaldf["median_house_value"],  # not scaled yet
                                              num_epochs = 1,
                                              shuffle = False),
                       steps = None,
                       start_delay_secs = 1,  # start evaluating after N seconds
                       throttle_secs = 10,    # evaluate every N seconds
                       )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
courses/machine_learning/deepdive/05_artandscience/a_handtuning.ipynb
turbomanage/training-data-analyst
apache-2.0
1. Scale the output Let's scale the target values so that the default parameters are more appropriate.
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  estimator = tf.estimator.LinearRegressor(
                       model_dir = output_dir,
                       feature_columns = [tf.feature_column.numeric_column('num_rooms')])

  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)

  train_spec = tf.estimator.TrainSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
                                              y = traindf["median_house_value"] / SCALE,  # note the scaling
                                              num_epochs = None,
                                              shuffle = True),
                       max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
                                              y = evaldf["median_house_value"] / SCALE,  # note the scaling
                                              num_epochs = 1,
                                              shuffle = False),
                       steps = None,
                       start_delay_secs = 1,  # start evaluating after N seconds
                       throttle_secs = 10,    # evaluate every N seconds
                       )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
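Why multiplying both labels and predictions by SCALE inside rmse() reports the error in original units (my addition): RMSE is absolutely homogeneous, i.e. RMSE(a*y, a*p) = a * RMSE(y, p) for a > 0, so training on scaled targets still yields a metric in dollars. A small numeric check:

```python
import numpy as np

# Plain-numpy RMSE; scaling labels and predictions by the same positive
# constant scales the RMSE by that constant.
def rmse(y, p):
    return np.sqrt(np.mean((y - p) ** 2))

SCALE = 100000
y = np.array([1.2, 3.4, 2.2])
p = np.array([1.0, 3.0, 2.5])
print(np.isclose(rmse(y * SCALE, p * SCALE), SCALE * rmse(y, p)))  # True
```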
2. Change learning rate and batch size Can you come up with better parameters?
SCALE = 100000
OUTDIR = './housing_trained'
def train_and_evaluate(output_dir, num_train_steps):
  myopt = tf.train.FtrlOptimizer(learning_rate = 0.2)  # note the learning rate
  estimator = tf.estimator.LinearRegressor(
                       model_dir = output_dir,
                       feature_columns = [tf.feature_column.numeric_column('num_rooms')],
                       optimizer = myopt)

  # Add rmse evaluation metric
  def rmse(labels, predictions):
    pred_values = tf.cast(predictions['predictions'], tf.float64)
    return {'rmse': tf.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
  estimator = tf.contrib.estimator.add_metrics(estimator, rmse)

  train_spec = tf.estimator.TrainSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = traindf[["num_rooms"]],
                                              y = traindf["median_house_value"] / SCALE,  # note the scaling
                                              num_epochs = None,
                                              batch_size = 512,  # note the batch size
                                              shuffle = True),
                       max_steps = num_train_steps)
  eval_spec = tf.estimator.EvalSpec(
                       input_fn = tf.estimator.inputs.pandas_input_fn(x = evaldf[["num_rooms"]],
                                              y = evaldf["median_house_value"] / SCALE,  # note the scaling
                                              num_epochs = 1,
                                              shuffle = False),
                       steps = None,
                       start_delay_secs = 1,  # start evaluating after N seconds
                       throttle_secs = 10,    # evaluate every N seconds
                       )
  tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
Introduction
from IPython.display import YouTubeVideo

YouTubeVideo(id="v9HrR_AF5Zc", width="100%")
notebooks/01-introduction/03-viz.ipynb
ericmjl/Network-Analysis-Made-Simple
mit
In this chapter, we want to introduce you to the wonderful world of graph visualization. You probably have seen graphs that are visualized as hairballs. Apart from communicating how complex the graph is, hairballs don't really communicate much else. As such, my goal by the end of this chapter is to introduce you to what I call rational graph visualization. But before we can do that, let's first make sure we understand how to use NetworkX's drawing facilities to draw graphs to the screen. In a pinch, and for small graphs, it's very handy to have. Hairballs The node-link diagram is the canonical diagram we will see in publications. Nodes are commonly drawn as circles, while edges are drawn as lines. Node-link diagrams are common, and there's a good reason for this: it's convenient to draw! In NetworkX, we can draw node-link diagrams using:
from nams import load_data as cf
import networkx as nx
import matplotlib.pyplot as plt

G = cf.load_seventh_grader_network()
nx.draw(G)
Nodes more tightly connected with one another are clustered together. Initial node placement is done typically at random, so really it's tough to deterministically generate the same figure. If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument to bring some degree of informativeness to the drawing:
G.is_directed()
nx.draw(G, with_labels=True)
The downside to drawing graphs this way is that large graphs end up looking like hairballs. Can you imagine a graph with more than the 28 nodes that we have? As you probably can imagine, the default nx.draw(G) is probably not suitable for generating visual insights. Matrix Plot A different way that we can visualize a graph is by visualizing it in its matrix form. The nodes are on the x- and y- axes, and a filled square represents an edge between the nodes. We can draw a graph's matrix form conveniently by using nxviz.MatrixPlot:
import nxviz as nv
from nxviz import annotate

nv.matrix(G, group_by="gender", node_color_by="gender")
annotate.matrix_group(G, group_by="gender")
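Stripped to its essentials, the matrix plot is just the graph's adjacency matrix, where entry (i, j) is filled iff there is an edge i → j. A minimal plain-numpy sketch (my addition, independent of nxviz) on a tiny directed graph:

```python
import numpy as np

# Adjacency matrix of a small directed 3-cycle; because the graph is
# directed, the matrix is asymmetric about the diagonal.
edges = [(0, 1), (1, 2), (2, 0)]
n_nodes = 3
A = np.zeros((n_nodes, n_nodes), dtype=int)
for i, j in edges:
    A[i, j] = 1

print(A)
print(np.array_equal(A, A.T))   # False: a directed graph need not be symmetric
```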
What can you tell from the graph visualization? A few things are immediately obvious: The diagonal is empty: no student voted for themselves as their favourite. The matrix is asymmetric about the diagonal: this is a directed graph! (An undirected graph would be symmetric about the diagonal.) You might go on to suggest that there is some clustering happening, but without applying a proper clustering algorithm on the adjacency matrix, we would be hard-pressed to know for sure. After all, we can simply re-order the node ordering along the axes to produce a seemingly-random matrix. Arc Plot The Arc Plot is another rational graph visualization. Here, we line up the nodes along a horizontal axis, and draw arcs between nodes if they are connected by an edge. We can also optionally group and colour them by some metadata. In the case of this student graph, we group and colour them by "gender".
# a = ArcPlot(G, node_color='gender', node_grouping='gender')
nv.arc(G, node_color_by="gender", group_by="gender")
annotate.arc_group(G, group_by="gender")
The Arc Plot forms the basis of the next visualization, the highly popular Circos plot. Circos Plot The Circos Plot was developed by Martin Krzywinski at the BC Cancer Research Center. The nxviz.CircosPlot takes inspiration from the original by joining the two ends of the Arc Plot into a circle. Likewise, we can colour and order nodes by node metadata:
nv.circos(G, group_by="gender", node_color_by="gender")
annotate.circos_group(G, group_by="gender")
Generally speaking, you can think of a Circos Plot as being a more compact and aesthetically pleasing version of an Arc Plot. Hive Plot The final plot we'll show is the Hive Plot.
from nxviz import plots
import matplotlib.pyplot as plt

nv.hive(G, group_by="gender", node_color_by="gender")
annotate.hive_group(G, group_by="gender")
The MTC sample dataset is the same data used in the Self Instructing Manual {cite:p}`koppelman2006self` for discrete choice modeling: The San Francisco Bay Area work mode choice data set comprises 5029 home-to-work commute trips in the San Francisco Bay Area. The data is drawn from the San Francisco Bay Area Household Travel Survey conducted by the Metropolitan Transportation Commission (MTC) in the spring and fall of 1990. This survey included a one-day travel diary for each household member older than five years and detailed individual and household socio-demographic information. In this example we will import the MTC example dataset, starting from a csv text file in idca format. Suppose that data file is gzipped, named "MTCwork.csv.gz", and is located in the current directory (use os.getcwd to see the current directory). For this example, we'll use the example_file method to find the file that comes with Larch. We can take a peek at the contents of the file, examining the first 10 lines:
import gzip  # presumably imported earlier in the full notebook

with gzip.open(lx.example_file("MTCwork.csv.gz"), 'rt') as previewfile:
    print(*(next(previewfile) for x in range(10)))
book/example/000_mtc_data.ipynb
jpn--/larch
gpl-3.0
The first line of the file contains column headers. After that, each line represents an alternative available to a decision maker. In our sample data, we see the first 5 lines of data share a caseid of 1, indicating that they are 5 different alternatives available to the first decision maker. The identity of the alternatives is given by the number in the column altid. The observed choice of the decision maker is indicated in the column chose with a 1 in the appropriate row. We can load this data easily using pandas. We'll also set the index of the resulting DataFrame to be the case and alt identifiers.
df = pd.read_csv(lx.example_file("MTCwork.csv.gz"), index_col=['casenum', 'altnum'])
df.head(15)
To prepare this data for use with the latest version of Larch, we'll want to convert this DataFrame into a larch.Dataset. For idca format like this, we can use the from_idca constructor to do so easily.
ds = lx.Dataset.construct.from_idca(df)
ds
Larch can automatically analyze the data to find variables that do not vary across alternatives within cases, and transform those into idco format variables. If you would prefer that Larch not do this you can set the crack argument to False. This is particularly important for larger datasets (the data sample included is only a tiny extract of the data that might be available for this kind of model), as breaking the data into separate idca and idco parts is a relatively expensive operation, and it is not actually required for most model structures.
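A sketch of what this "cracking" involves, as I understand it (an illustration in plain pandas, not Larch's actual implementation): a column is really an idco (case-level) variable when it never varies across alternatives within a case, which a groupby over cases can detect.

```python
import pandas as pd

# Toy idca frame: 'ivtt' varies by alternative within a case, while
# 'femdum' is constant within each case and so belongs in idco format.
df = pd.DataFrame({
    'casenum': [1, 1, 1, 2, 2, 2],
    'altnum':  [1, 2, 3, 1, 2, 3],
    'ivtt':    [13.4, 18.5, 20.4, 9.0, 9.5, 11.0],
    'femdum':  [0, 0, 0, 1, 1, 1],
}).set_index(['casenum', 'altnum'])

# A column is case-constant iff every case sees at most one unique value.
nunique = df.groupby(level='casenum').nunique()
idco_cols = [c for c in df.columns if (nunique[c] <= 1).all()]
print(idco_cols)   # ['femdum']
```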
# TEST
assert ds['femdum'].dims == ('casenum',)
assert ds['femdum'].dtype.kind == 'i'
assert ds['ivtt'].dims == ('casenum', 'altnum')
assert ds['ivtt'].dtype.kind == 'f'
assert ds.dims == {'casenum': 5029, 'altnum': 6}
assert ds.dc.CASEID == 'casenum'
assert ds.dc.ALTID == 'altnum'
The set of all possible alternative codes is deduced automatically from all the values in the altnum column. However, the alternative codes are not very descriptive when they are set automatically, as the csv data file does not have enough information to tell what each alternative code number means. We can use the set_altnames method to attach more descriptive names.
ds = ds.dc.set_altnames({
    1: 'DA',
    2: 'SR2',
    3: 'SR3+',
    4: 'Transit',
    5: 'Bike',
    6: 'Walk',
})
ds

# TEST
assert all(ds.coords['altnames'] == ['DA', 'SR2', 'SR3+', 'Transit', 'Bike', 'Walk'])
Import packages for scientific computing One of the things that makes Python so powerful for science is the plethora of packages for scientific computing. However, the need to import these packages and understand what they are is also confusing to new users. Usually at the top of a notebook, you should put a code cell that imports all the modules we'll need. For this notebook, we will import numpy so that we can use its numerical structures, the scipy.integrate module, and matplotlib.pyplot for plotting. I have used the import statements that are standard in scientific Python, which bind each module to its conventional abbreviation (e.g. numpy is imported as np).
import numpy as np
import scipy.integrate
import matplotlib.pyplot as plt
Jupyter_notebook_introduction.ipynb
PmagPy/2017_MagIC_Workshop_PmagPy_Tutorial
bsd-3-clause
Example plot We can generate some data to plot using the np.linspace function and then feeding that data into the np.sin function.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)
These data can then be plotted, and the axes labeled, as below.
plt.plot(x, y)
plt.xlim((0, 2 * np.pi))
plt.xlabel('x')
plt.ylabel('sin(x)')
plt.title('Example plot')
plt.show()
With this function in hand, we can now pick our initial conditions and time points, run the numerical integration, and then plot the result.
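The lorenz_attractor function is defined earlier in the full notebook and is not shown in this excerpt; a standard definition consistent with the odeint call below (which passes the state r and time t, plus the parameter array p) would be:

```python
import numpy as np

# Standard Lorenz right-hand side; p = (sigma, rho, beta) matches the
# parameter array [10, 28, 8/3] used below. This is a sketch of the
# function assumed by the odeint call, not the notebook's exact code.
def lorenz_attractor(r, t, p):
    """Return dr/dt for the Lorenz system at state r and parameters p."""
    x, y, z = r
    sigma, rho, beta = p
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

# At the point (1, 1, 1) with the classic parameters:
deriv = lorenz_attractor(np.array([1.0, 1.0, 1.0]), 0.0,
                         np.array([10.0, 28.0, 8.0 / 3.0]))
print(deriv)   # dx/dt = 0, dy/dt = 26, dz/dt = -5/3
```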
# Parameters to use
p = np.array([10.0, 28.0, 8.0 / 3.0])

# Initial condition
r0 = np.array([0.1, 0.0, 0.0])

# Time points to sample
t = np.linspace(0.0, 80.0, 10000)

# Use scipy.integrate.odeint to integrate the Lorenz attractor
r = scipy.integrate.odeint(lorenz_attractor, r0, t, args=(p,))

# Unpack results into x, y, z
x, y, z = r.transpose()

# Plot the result
plt.plot(x, z, '-', linewidth=0.5)
plt.xlabel(r'$x(t)$', fontsize=18)
plt.ylabel(r'$z(t)$', fontsize=18)
plt.title(r'$x$-$z$ proj. of Lorenz attractor traj.')
plt.show()
Step 1: Modify the histogram so that it represents the same number of pixels as the target image. The image to be modified is "cameraman.tif". The idea is to compute the cumulative histogram, normalize it so that the final accumulated value equals the number of pixels (n) of the input image, and take the discrete difference to obtain a histogram that represents the same number of pixels as the "cameraman" image.
f = mpimg.imread('../data/cameraman.tif')
ia.adshow(f, 'input image')
plt.plot(ia.histogram(f)), plt.title('original histogram');

n = f.size
hcc = np.cumsum(hout)                 # hout: desired histogram, built earlier in the tutorial
hcc1 = ia.normalize(hcc, [0, n])
h1 = np.diff(np.concatenate(([0], hcc1)))

plt.plot(hcc1), plt.title('desired cumulative histogram');
plt.show()
plt.plot(h1), plt.title('desired histogram');
plt.show()
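The cumulate-then-difference round trip can be sanity-checked on a toy histogram (my addition): taking the discrete difference of the cumulative histogram, with a leading zero prepended, recovers the histogram exactly.

```python
import numpy as np

# np.diff of np.cumsum (with a leading 0) is the identity on a histogram,
# which is why the pipeline above yields a histogram summing to n.
h = np.array([3, 0, 5, 2, 1])
hc = np.cumsum(h)                               # cumulative histogram
h_back = np.diff(np.concatenate(([0], hc)))     # discrete difference
print(np.array_equal(h_back, h))                # True
```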
master/tutorial_pehist_2.ipynb
robertoalotufo/ia898
mit
Step 2: Build the set of desired pixels from the desired histogram. The NumPy "repeat" function is used.
gs = np.repeat(np.arange(256), h1).astype('uint8')
plt.plot(gs), plt.title('desired pixels, sorted'); plt.show()
plt.plot(np.sort(f.ravel())), plt.title('sorted pixels of the original image');
Step 3: Map the sorted pixels. Three important techniques are used here: the first is working with the image flattened to one dimension, using ravel(); the second is the argsort function, which returns the indices of the pixels sorted by grey level; and the third is the indexed assignment g[si] = gs, where g is the flattened output image, si is the array of indices of the sorted pixels, and gs are the sorted desired pixels. The last step is to set the shape of the output image.
g = np.empty((n,), np.uint8)
si = np.argsort(f.ravel())
g[si] = gs
g.shape = f.shape

ia.adshow(g, 'modified image')
h = ia.histogram(g)
plt.plot(h), plt.title('histogram of the modified image');
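The argsort-and-assign trick is easiest to see on a tiny 4-pixel "image" (my worked example): each pixel receives the desired grey level that matches its brightness rank.

```python
import numpy as np

# f's darkest pixel (index 3, value 0) receives the smallest desired level,
# its brightest pixel (index 0, value 3) the largest, so ranks are preserved.
f = np.array([3, 1, 2, 0])        # original pixels (flattened)
gs = np.array([10, 20, 30, 40])   # desired pixels, sorted ascending
si = np.argsort(f)                # pixel indices from darkest to brightest
g = np.empty_like(gs)
g[si] = gs
print(g)                          # [40 20 30 10]
```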
Compute Power Spectral Density of inverse solution from single epochs Compute PSD of dSPM inverse solution on single trial epochs restricted to a brain label. The PSD is computed using a multi-taper method with Discrete Prolate Spheroidal Sequence (DPSS) windows.
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD-3-Clause

import matplotlib.pyplot as plt

import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd_epochs

print(__doc__)

data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'

event_id, tmin, tmax = 1, -0.2, 0.5
snr = 1.0  # use smaller SNR for raw data
lambda2 = 1.0 / snr ** 2
method = "dSPM"  # use dSPM method (could also be MNE or sLORETA)

# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)

# Set up pick list
include = []
raw.info['bads'] += ['EEG 053']  # bads + 1 more

# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
                       include=include, exclude='bads')

# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
                    baseline=(None, 0),
                    reject=dict(mag=4e-12, grad=4000e-13, eog=150e-6))

# define frequencies of interest
fmin, fmax = 0., 70.
bandwidth = 4.  # bandwidth of the windows in Hz
0.24/_downloads/4d3b714a9291625bb4b01d7f9c7c3a16/compute_source_psd_epochs.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
This dataset seems to offer a lot of easy wins around the 0-50 character range, but there's also a very long tail, with comments ranging up to 17,803 characters in length (!). Taking only those comments with length within one standard deviation of the mean, the maximum drops to ~630 characters, around 6-7 sentences long. The long tail seems to demand building more complex representations of the document, e.g. sentence embeddings or paragraph embeddings.
from enchant.checker import SpellChecker
insults/exploration/dataset/basic_features.ipynb
thundergolfer/Insults
gpl-3.0
How good is the spelling in these comments?
rate_of_misspellings = []
for c in comments:
    checker = SpellChecker()
    checker.set_text(c)
    num_misspellings = len([e.word for e in checker])
    rate = num_misspellings / float(len(c.split(' ')))
    rate_of_misspellings.append(rate)

ave_rate_of_misspelling = sum(rate_of_misspellings) / float(len(comments))
print(ave_rate_of_misspelling)
Need to check that this isn't being thrown off by some unicode content in the comments, but ~13.5% misspellings is kind of brutal.
from collections import Counter

# http://www.slate.com/blogs/lexicon_valley/2013/09/11/top_swear_words_most_popular_curse_words_on_facebook.html
facebook_most_popular_swears = [
    'shit', 'fuck', 'damn', 'bitch', 'crap', 'piss', 'dick', 'darn',
    'cock', 'pussy', 'asshole', 'fag', 'bastard', 'slut', 'douche'
]

everything = []
for c in comments:
    everything += c.split()

counts = Counter(everything)
swear_counts = [count for count in counts.items()
                if count[0] in facebook_most_popular_swears]
print(swear_counts)
print("\nTotal common swear words is " + str(sum([elem[1] for elem in swear_counts])))
insults/exploration/dataset/basic_features.ipynb
thundergolfer/Insults
gpl-3.0
The data can be accessed through a URL that I'll store in a string below.
NOMADV2url='https://seabass.gsfc.nasa.gov/wiki/NOMAD/nomad_seabass_v2.a_2008200.txt'
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
Next, I'll write a couple of functions. The first will get the data from the url. The second will parse the text returned by the first and put it in a Pandas DataFrame. This second function makes more sense after inspecting the content of the page at the url above.
def GetNomad(url=NOMADV2url): """Download and return data as text""" resp = requests.get(NOMADV2url) content = resp.text.splitlines() resp.close() return content def ParseTextFile(textFile, topickle=False, convert2DateTime=False, **kwargs): """ * topickle: pickle resulting DataFrame if True * convert2DateTime: join date/time columns and convert entries to datetime objects * kwargs: pkl_fname: pickle file name to save DataFrame by, if topickle=True """ # Pre-compute some regex columns = re.compile('^/fields=(.+)') # to get field/column names units = re.compile('^/units=(.+)') # to get units -- optional endHeader = re.compile('^/end_header') # to know when to start storing data # Set some milestones noFields = True getData = False # loop through the text data for line in textFile: if noFields: fieldStr = columns.findall(line) if len(fieldStr)>0: noFields = False fieldList = fieldStr[0].split(',') dataDict = dict.fromkeys(fieldList) continue # nothing left to do with this line, keep looping if not getData: if endHeader.match(line): # end of header reached, start acquiring data getData = True else: dataList = line.split(',') for field,datum in zip(fieldList, dataList): if not dataDict[field]: dataDict[field] = [] dataDict[field].append(datum) df = pd.DataFrame(dataDict, columns=fieldList) if convert2DateTime: datetimelabels=['year', 'month', 'day', 'hour', 'minute', 'second'] df['Datetime']= pd.to_datetime(df[datetimelabels], format='%Y-%m-%dT%H:%M:%S') df.drop(datetimelabels, axis=1, inplace=True) if topickle: fname=kwargs.pop('pkl_fname', 'dfNomad2.pkl') df.to_pickle(fname) return df df = ParseTextFile(GetNomad(), topickle=True, convert2DateTime=True, pkl_fname='./bayesianChl_DATA/dfNomadRaw.pkl') df.head()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
This DataFrame is quite large and unwieldy, with 212 columns. But Pandas makes it easy to extract the necessary data for a particular project. For my current project, which I'll go over in a subsequent post, I need field data relevant to the SeaWiFS sensor, in particular optical data at wavelengths 412, 443, 490, 510, 555, and 670 nm. First let's look at the available bands as they appear in the spectral surface irradiance column labels, which start with 'es'.
bandregex = re.compile('es([0-9]+)') bands = bandregex.findall(''.join(df.columns)) print(bands)
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
Now I can extract data with bands that are the closest to what I need. In the process I'm going to use water leaving radiance and spectral surface irradiance to compute remote sensing reflectance, rrs. I will store this new data in a new DataFrame, dfSwf.
swfBands = ['411','443','489','510','555','670'] dfSwf = pd.DataFrame(columns=['rrs%s' % b for b in swfBands]) for b in swfBands: dfSwf.loc[:,'rrs%s'%b] = df.loc[:,'lw%s' % b].astype('f8') / df.loc[:,'es%s' % b].astype('f8') dfSwf.head()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
For the projects I'm currently working on, I'll need to select a few more features from the initial dataset.
dfSwf['id'] = df.id.astype('i4') # in case I need to relate this data to the original dfSwf['datetime'] = df.Datetime dfSwf['hplc_chl'] = df.chl_a.astype('f8') dfSwf['fluo_chl'] = df.chl.astype('f8') dfSwf['lat'] = df.lat.astype('f8') dfSwf['lon'] = df.lon.astype('f8') dfSwf['depth'] = df.etopo2.astype('f8') dfSwf['sst'] = df.oisst.astype('f8') for band in swfBands: addprods=['a','ad','ag','ap','bb'] for prod in addprods: dfSwf['%s%s' % (prod,band)] = df['%s%s' % (prod, band)].astype('f8') dfSwf.replace(-999,np.nan, inplace=True)
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
Tallying the features I've gathered...
print(dfSwf.columns)
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
That seems like a good dataset to start with. I'll pickle this DataFrame just in case.
dfSwf.to_pickle('./bayesianChl_DATA/dfNomadSWF.pkl')
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
The first project I'll tackle is a recasting of the OCx empirical band-ratio algorithms within a Bayesian framework. For that I can further cull the dataset following the "Data Source" section of the paper I am using for comparison, Hu et al., 2012. This study draws from this same data set, applying the following criteria: * only hplc chlorophyll * chl>0 where rrs>0 * depth>30 * lat $\in\left[-60,60\right]$ Applying these criteria should result in a dataset reduced to 136 observations.
rrsCols = [col for col in dfSwf.columns if 'rrs' in col] iwantcols=rrsCols + ['id', 'depth','hplc_chl','sst','lat','lon'] dfSwfHu = dfSwf[iwantcols].copy() del dfSwf, df dfSwfHu.info()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
Apparently the only null entries are in the hplc_chl column. Dropping the nulls in that column takes care of the first of the criteria listed above.
dfSwfHu.dropna(inplace=True) dfSwfHu.describe()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
According to the summary table above, I don't need to worry about 0 chl as per the criteria above. However, it appears several reflectances have spurious 1.0000 values. Since these were never mentioned in the paper, I'll first cull the dataset according to the depth and lat criteria and see whether that takes care of cleaning those values as well. This should leave me with 136 observations.
dfSwfHu=dfSwfHu.loc[((dfSwfHu.depth>30) &\ (dfSwfHu.lat>=-60) & (dfSwfHu.lat<=60)),:] dfSwfHu.describe()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
Nope. We're down to 964 observations. So much for reproducibility via publication. Getting rid of the spurious rrs values...
dfSwfHu = dfSwfHu.loc[((dfSwfHu.rrs411<1.0) & (dfSwfHu.rrs510<1.0)&\ (dfSwfHu.rrs555<1.0) & (dfSwfHu.rrs670<1.0)),:] dfSwfHu.describe()
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
136 values. Success! Once again, I'll pickle this DataFrame.
dfSwfHu.to_pickle('/accounts/ekarakoy/DATA/NOMAD/dfSwfHuOcxCI_2012.pkl')
posts/from-the-web-to-a-pandas-dataframe.ipynb
madHatter106/DataScienceCorner
mit
We have X entries. A lot of memory is used when we read the data in raw form.
git_blame.info(memory_usage='deep')
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
First, we can improve the data types used. Categorical means categorical variables, i.e. variables that can take only a limited number of values. The values in the columns then become references that point to the actual values, which also makes analyses faster. With as much data as we have here, this makes a lot of sense.
git_blame.path = pd.Categorical(git_blame.path) git_blame.author = pd.Categorical(git_blame.author) git_blame.timestamp = pd.to_datetime(git_blame.timestamp) git_blame.info(memory_usage='deep')
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
A simple analysis of this kind doesn't get us anywhere; we have to keep our context in mind. Linus Torvalds made the initial Git commit with the old pre-existing code, so this evaluation is not correct:
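One way to handle this, assuming the initial bulk-import commit carries the oldest timestamp in the log, is to drop its lines before counting. A minimal sketch with a hypothetical mini `git_blame` frame:

```python
import pandas as pd

# Hypothetical mini blame log; in the notebook, git_blame is the real frame.
git_blame = pd.DataFrame({
    "author": ["Linus Torvalds", "Linus Torvalds", "Alice", "Bob"],
    "timestamp": pd.to_datetime(
        ["2005-04-16", "2005-04-16", "2017-01-01", "2018-06-01"]),
})

# Drop every line attributed to the very first commit (the bulk import)
# before counting lines per author.
later = git_blame[git_blame.timestamp > git_blame.timestamp.min()]
print(later.author.value_counts())
```

Filtering on the hash of the initial commit would be more robust than the minimum timestamp, if that column were available.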
git_blame.author.value_counts().head(10) git_blame.head()
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
What actually is knowledge? Our approximation / model: lines of code changed within the last year.
a_year_ago = pd.Timestamp("today") - pd.DateOffset(years=1) a_year_ago (git_blame.timestamp >= a_year_ago).head() git_blame['knowing'] = git_blame.timestamp >= a_year_ago git_blame.head() %matplotlib inline git_blame.knowing.value_counts().plot.pie() knowledge = git_blame[git_blame.knowing] knowledge.head() knowledge_carrier = knowledge.author.value_counts() / len(knowledge) knowledge_carrier.head(10) git_blame.head()
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
Components can be extracted from the path.
git_blame.path.value_counts().head()
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
Build up the split step by step.
git_blame['component'] = git_blame.path.str.split("/").str[:2].str.join(":") git_blame.head()
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
Now we can group our data by component.
knowledge_per_component = git_blame.groupby('component')[['knowing']].mean() knowledge_per_component.head() knowledge_per_component.knowing.sort_values().plot.barh(figsize=[3,20])
prototypes/KnowledgeGaps.ipynb
feststelltaste/software-analytics
gpl-3.0
We'll be running R code as well, so we need the following. The R code is run using 'rmagic' commands in IPython, so the easiest way to copy this code is to run it in a Jupyter/IPython notebook as well.
## requires rpy2 %load_ext rpy2.ipython %%R ## load a few R libraries library(ape) library(ade4) library(nlme)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
To run parallel Python code using ipyparallel, run the command below in a separate terminal to start N engines for parallel computing. This requires the Python package 'ipyparallel'. ipcluster start --n 20 Then we connect to these engines and populate the namespace with our libraries.
## import ipyparallel import ipyparallel as ipp ## start a parallel client ipyclient = ipp.Client() ## create a loadbalanced view to distribute jobs lbview = ipyclient.load_balanced_view() ## import Python and R packages into parallel namespace ipyclient[:].execute(""" from scipy.optimize import fminbound import numpy as np import pandas as pd import itertools import ete3 import rpy2 import copy import os %load_ext rpy2.ipython %R library(ape) %R library(nlme) """)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
1. An example pgls fit for simulated data. The code to fit a phylogenetic generalized least squares model is adapted from http://www.mpcm-evolution.org/OPM/Chapter5_OPM/OPM_chap5.pdf. Below we first fit a model to simulated data to see how large a data matrix this method can handle. Then we will run it on our real data, which is a more complex case. We'll get to that. Generate some data and plot it. This cell is an example of R code (notice the %%R header). Google "R in Jupyter notebooks" for more info.
%%R -w 400 -h 400 ## matrix size (can it handle big data?) n = 500 ## simulate random data, log-transformed large values set.seed(54321999) simdata = data.frame('nloci'=log(rnorm(n, 50000, 10000)), 'pdist'=rnorm(n, 1, 0.2), 'inputreads'=log(abs(rnorm(n, 500000, 100000)))) ## simulate a tree of same size simtree = rtree(n) ## match names of tree to simdata idxs simtree$tip.label = 1:n ## plot the data plot(simtree) plot(simdata)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Pass objects between R and Python
## as an example of what we'll be doing with the real data (Python objects) ## let's export them (-o) from R back to Python objects %R newick <- write.tree(simtree) %R -o newick %R -o simdata ## Now we have the tree from R as a string in Python ## and the data frame from R as a pandas data frame in Python newick = newick.tostring() simdata = simdata print simdata.head()
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Model fitting functions I know this is a bit convoluted, but these are Python functions which call mostly R code to do the model fitting, because I couldn't find a Python library capable of doing pgls. The functions take in Python objects, convert them to R objects, compute results, and return values as Python objects. The lines with "%R" execute as R code in IPython. The function rModelFit uses a phylogeny to estimate a covariance matrix and perform model fitting using GLS.
def rModelFit(pydat, covmat=np.zeros(0), newick=""): """ send PyObjects to R and runs pgls using either an input covariance matrix or an input tree. Returns the model fit as a dataframe, and the Log likelhiood """ ## reconstitute Python data frame as R data frame %R -i pydat %R data <- data.frame(pydat) ## which model to use... if (not np.any(covmat)) and (not newick): %R fit <- gls(nloci ~ inputreads + pdist, data=data) else: ## get covariance (correlation) matrix from tree if newick: %R -i newick %R tre <- read.tree(text=newick) %R simmat <- vcv(tre, corr=TRUE) ## get covariance matrix from input else: %R -i covmat %R simmat <- cov2cor(covmat) ## fit the model %R tip.heights <- diag(simmat) %R fit <- gls(nloci ~ inputreads + pdist, data=data, \ correlation=corSymm(simmat[lower.tri(simmat)], fixed=TRUE), \ weights=varFixed(~tip.heights)) ## return results as data frame %R df <- as.data.frame(summary(fit)$tTable) %R LL <- fit$logLik %R -o df,LL return df, LL def rModelFit2(pydat, covmat=np.zeros(0), newick=""): """ Send PyObjects to R and run pgls with covariance mat. In contrast to the model above this one only fits the model to inputreads, not pdist. We use the likelihood fit of this model to estimate estimate lamda by using maximum likelihood in the func estimate_lambda. """ ## reconstitute Python data frame as R data frame %R -i pydat %R data <- data.frame(pydat) ## which model to use... 
if (not np.any(covmat)) and (not newick): %R fit <- gls(nloci ~ inputreads, data=data) else: ## get covariance (correlation) matrix from tree if newick: %R -i newick %R tre <- read.tree(text=newick) %R simmat <- vcv(tre, corr=TRUE) ## get covariance matrix from input else: %R -i covmat %R simmat <- cov2cor(covmat) ## fit the model %R tip.heights <- diag(simmat) %R fit <- gls(nloci ~ inputreads, data=data, \ correlation=corSymm(simmat[lower.tri(simmat)], fixed=TRUE), \ weights=varFixed(~tip.heights)) ## return results as data frame %R df <- as.data.frame(summary(fit)$tTable) %R LL <- fit$logLik %R -o df,LL return df, LL
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Model fits to random data Here we did four different model fits to check that our covariance structure is working correctly. We expect that in all model fits the two variables (inputreads and pdist) will be poor predictors of nloci, since all values were randomly generated. In the first model fit we use no covariance structure and the LL=158. In the second we enforce a covariance structure by getting a VCV from the phylogeny. This gives a much worse fit to the data, as we might expect since the data were not generated with any regard to the phylogeny. The third test checks that when we input a VCV directly we get the same answer as when we put in a tree. We do. This is good since in our quartet analysis below we will be entering just the VCV. Finally, we fit a model that includes a VCV, but where all off-diagonal elements are zero (no covariance). This is equivalent to transforming the tree by Pagel's lambda = 0. It is encouraging that when we remove the covariance structure in this way we get the same LL=158 as when no covariance structure is present at all. This means that in our real data set we can try to estimate Pagel's lambda for the covariance matrices computed from quartet data; if the covariance structure we create does not fit our data at all, then a lambda of zero will be estimated and the effect of our covariance structure will be removed.
print "\nno VCV" df, LL = rModelFit(simdata) print df print "log-likelihood", LL print "---"*20 print "\nVCV from tree -- entered as tree" df, LL = rModelFit(simdata, newick=newick) print df print "log-likelihood", LL print "---"*20 print "\nVCV from tree -- entered as VCV" %R -o simmat df, LL = rModelFit(simdata, covmat=simmat) ## <- uses the simmat from the tree print df print "log-likelihood", LL print "---"*20 print "\nVCV from tree -- entered as VCV -- transformed so no VCV structure" df, LL = rModelFit(simdata, covmat=np.eye(simdata.shape[0])) ## <- no covar == lambda=0 print df print "log-likelihood", LL print "---"*20
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
A function to optimize lambda On this simulated data there is little power to detect any covariance structure (lambda fit) in the data because the data is basically just noise. But you'll see below on our simulated data that it fits very well.
def get_lik_lambda(lam, data, covmat): """ a function that can be optimized with ML to find lambda""" tmat = covmat*lam np.fill_diagonal(tmat, 1.0) _, LL = rModelFit2(data, covmat=tmat) ## return as the NEGATIVE LL to minimze func return -1*LL def estimate_lambda(data, covmat): """ uses fminbound to estimate lambda in [0, 1]""" return fminbound(get_lik_lambda, 0, 1, args=(data, covmat), xtol=0.001, maxfun=25)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Fit with lambda When we fit the model using our estimated lambda, which in this case is zero since the data were simulated randomly (without respect to the phylogeny), the model fit is the same as above when there is no covariance structure. This is good news. We will penalize this model for the extra parameter using AIC when comparing models.
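The AIC penalty mentioned above is straightforward to sketch: AIC = 2k - 2 log L, so the lambda model must improve the log-likelihood enough to offset its extra parameter. The parameter counts below are illustrative, not taken from the fits above.

```python
def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2*logL (lower is better)."""
    return 2 * n_params - 2 * log_lik

# Illustrative numbers: identical log-likelihoods, one extra parameter
# for the lambda model, so the simpler model wins on AIC.
aic_no_lambda = aic(log_lik=158.0, n_params=3)
aic_with_lambda = aic(log_lik=158.0, n_params=4)
print(aic_no_lambda, aic_with_lambda)
```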
print "\nVCV from tree -- entered as VCV -- transformed by estimated lambda" lam = estimate_lambda(simdata, simmat) mat = simmat * lam np.fill_diagonal(mat, 1.0) df, LL = rModelFit(simdata, covmat=mat) print df print "lambda", lam print "log-likelihood", LL print "---"*20
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
2. Write functions to get stats info from pyrad outputs Here we build a large data frame that will store how many loci are shared among all sets of quartets, the phylogenetic distance spanned by each quartet (pdist), and how much input data each quartet sample had (inputreads). Parse nloci (shared) from pyrad output
def getarray(locifile, treefile): """ get presence/absence matrix from .loci file (pyrad v3 format) ordered by tips on the tree""" ## parse the loci file infile = open(locifile) loci = infile.read().split("\n//")[:-1] ## order (ladderize) the tree tree = ete3.Tree(treefile, format=3) tree.ladderize() ## assign numbers to internal nodes nodes = tree.iter_descendants() nodenum = 0 for node in nodes: if not node.is_leaf(): node.name = nodenum nodenum += 1 ## get tip names names = tree.get_leaf_names() ## make empty matrix lxs = np.zeros((len(names), len(loci)), dtype=np.uint32) ## fill the matrix for loc in xrange(len(loci)): for seq in loci[loc].split("\n"): if ">" in seq: ## drop _0 from sim names if present seq = seq.rsplit("_0 ", 1)[0] lxs[names.index(seq.split()[0][1:]),loc] += 1 infile.close() return lxs, tree
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Get pdist and median inputreads for all(!) quartets
def build_df4_parallel(tree, lxs, s2file, lbview): """ Builds a data frame for quartets in parallel. A less generalized form of the 'buildarray' function, and much faster. Returns a data frame with n-shared-loci, median-input-reads, phylo-dist. """ ## get number of taxa names = tree.get_leaf_names() ## read in step2 stats to get inputreads info, correct _0 in names for simdata res2 = pd.read_table(s2file, header=0, index_col=0, nrows=len(names)) res2.index = [i[:-2] if i.endswith("_0") else i for i in res2.index] inputdat = res2["passed.total"].reindex(tree.get_leaf_names()) ## create empty array of nquart rows and 7 columns nsets = sum(1 for _ in itertools.combinations(xrange(len(names)), 4)) taxonset = itertools.combinations(xrange(len(names)), 4) arr = np.zeros((nsets, 7), dtype=np.float32) ## iterate over sampled sets and fill array. asyncs = {} sub = 0 while 1: ## grab 100 rows hund = np.array(list(itertools.islice(taxonset, 100))) ## submit to engine if np.any(hund): asyncs[sub] = lbview.apply(fillrows, *[tree, names, inputdat, hund, lxs]) sub += 100 else: break ## wait for jobs to finish and enter into the results array lbview.wait() taxonset = itertools.combinations(xrange(len(names)), 4) for idx in xrange(nsets): arr[idx, :4] = taxonset.next() for idx in xrange(0, sub, 100): arr[idx:idx+100, 4:] = asyncs[idx].get() ## dress up the array as a dataframe columns=["p1", "p2", "p3", "p4", "nloci", "inputreads", "pdist"] df = pd.DataFrame(arr, columns=columns) ## convert quartet indices to ints for prettier printing df[["p1", "p2", "p3", "p4"]] = df[["p1", "p2", "p3", "p4"]].astype(int) return df def fillrows(tree, names, inputdat, tsets, lxs): """ takes 100 row elements in build df4 """ ## output array arr = np.zeros((tsets.shape[0], 3), dtype=np.float64) ## get number of loci shared by the set for ridx in xrange(tsets.shape[0]): tset = tsets[ridx] colsums = lxs[tset, :].sum(axis=0) lshare = np.sum(colsums==4) ## get total tree length separating these four taxa 
t = copy.deepcopy(tree) t.prune([names[i] for i in tset], preserve_branch_length=True) pdist = sum([i.dist for i in t]) ## get min input reads inputreads = np.median([inputdat[i] for i in tset]) ## fill arr (+1 ensures no log(0) = -inf) arr[ridx] = [np.log(lshare+1), np.log(inputreads+1), pdist] ## return array with 100 values return arr
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
A simple Class object to store our results in
## define a class object to store data in class dataset(): def __init__(self, name): self.name = name self.files = fileset() ## define a class object to store file locations class fileset(dict): """ checks that data handles exist and stores them""" def __getattr__(self, name): if name in self: return self[name] else: raise AttributeError("No such attribute: " + name) def __setattr__(self, name, value): if os.path.exists(value): self[name] = value else: raise AttributeError("bad file name " + value)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Calculate a covariance matrix from shared edges among quartets This is of course different from the VCV we inferred from a tree structure in the example at the beginning of this notebook. Here our data points are not tips of a tree but rather quartets. And we create a covariance matrix that measures the amount of shared edges among sampled quartets.
def get_path(node, mrca): """ get branch length path from tip to chosen node (mrca)""" path = set() while 1: ## check that tips have not coalesced if not node == mrca: path.add((node, node.up)) node = node.up else: return path def calculate_covariance(intree, maxarr=1000): """ get covariance matrix measuring shared branch lengths among quartets, if total number of quartets for a tree is >1000 then randomly sample 1000 quartets instead. """ ## tree copy tree = copy.deepcopy(intree) tree.unroot() ## create a large empty matrix tt = tree.get_leaves() nsets = sum(1 for _ in itertools.combinations(range(len(tt)), 4)) ## we're not gonna worry about memory for now, and so make a list ## otherwise we would work with iterators fullcombs = list(itertools.combinations(range(len(tt)), 4)) ## either sample all quarets or a random maxarr if nsets <= maxarr: quarts = np.zeros((nsets, 4), dtype=np.uint16) arr = np.zeros((nsets, nsets), dtype=np.float64) ridx = np.arange(nsets) for i in xrange(nsets): quarts[i] = fullcombs[i] arrsize = nsets else: quarts = np.zeros((maxarr, 4), dtype=np.uint16) arr = np.zeros((maxarr, maxarr), dtype=np.float64) ## randomly sample 1000 indices within maxarr ridx = np.random.choice(nsets, maxarr) for i in xrange(maxarr): quarts[i] = fullcombs[ridx[i]] arrsize = maxarr ## iterate over each comb to compare for idx in xrange(arrsize): ## get path to quartet tips set1 = [tt[i] for i in quarts[idx]] mrca1 = set1[0].get_common_ancestor(set1) edges1 = [get_path(node, mrca1) for node in set1] for cdx in xrange(idx, arrsize): ## get path to quartet tips set2 = [tt[i] for i in quarts[cdx]] mrca2 = set2[0].get_common_ancestor(set2) edges2 = [get_path(node, mrca2) for node in set2] ## which edges are shared a = set([tuple(i) for i in itertools.chain(*edges1)]) b = set([tuple(i) for i in itertools.chain(*edges2)]) shared = set.intersection(a, b) ## save the branch lengths sumshare = sum([tree.get_distance(edg[0], edg[1]) for edg in shared]) arr[idx, cdx] = 
sumshare #print idx, return arr, ridx def check_covariance(covar): """ tests covariance matrix for positive definite""" ## fill the lower triangle of matrix symmetrically mat = np.array(covar) upidx = np.triu_indices_from(covar) mat[(upidx[1], upidx[0])] = covar[upidx] ## is the matrix symmetric? assert np.allclose(mat, mat.T), "matrix is not symmetric" ## is it positive definite? test = 0 while 1: if not np.all(np.linalg.eigvals(mat) > 0): didx = np.diag_indices_from(mat) mat[didx] = np.diag(mat) *1.1 test += 1 elif test > 20: assert np.all(np.linalg.eigvals(mat) > 0), "matrix is not positive definite" else: break assert np.all(np.linalg.eigvals(mat) > 0), "matrix is not positive definite" return mat
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Function to run model functions together This allows us to make one call to run jobs in parallel
def fitmodels(tree, df4, nsamples): """ Calculates covar, checks matrix, fits models, and return arrays for with and without covar """ ## calculate covariance of (nsamples) random data points covar, ridx = calculate_covariance(tree, nsamples) ## get symmetric matrix and test for positive definite mat = check_covariance(covar) ## subsample a reasonable number of data points subdf4 = df4.loc[ridx, :] ## estimate lambda to be used in model fits ## I'm using R to convert cov2cor %R -i mat %R mm <-cov2cor(mat) %R -o mm lam = estimate_lambda(subdf4, mm) ## transform corr matrix with lambda mat = mm*lam np.fill_diagonal(mat, 1.0) ## fit models with covar ndf, nLL = rModelFit(subdf4) wdf, wLL = rModelFit(subdf4, covmat=mat) ## return two arrays return wdf, wLL, nLL, lam #ndf, nLL, wdf, wLL
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
3. Testing on a simulated RAD data set In this case we know what the source of missing data is, either mutation (drop) or random (rand). For a large tree like this (64 tips) this takes quite a while to run (~20 minutes). This is the case even when we only randomly sample 200 quartets out of the possible ~650K. Our solution will be to do many repetitions of subsampling and to parallelize this to get a good estimate quickly. Fortunately our largest empirical data set is about the same size as these simulated data sets, so this gives us a good idea of run times (still pretty long). To repeat this analysis you will have to change the file paths. The files are created in the simulation notebook. Simulate a new data set with variable missing data I'm simulating a new data set for this since the 'random' missing data sets we made in the sim notebook all have very similar amounts of input data. To get signal from the data we need the input data to vary more significantly among samples. So I simulate some new data here with greater variance. More specifically, I will take data from the 'fullcov' data sets which had no missing data and randomly remove some DIFFERENT proportion of data from each sample. This is different from our low coverage simulations in which we removed the same proportion of data from each sample, but it was randomly missing from different loci. New balanced tree data set
## make a new directory for the subsampled fastqs ! mkdir -p /home/deren/Documents/RADsims/Tbal_rad_varcov/fastq/ ## grab the no-missing fastqs fastqs = glob.glob("/home/deren/Documents/RADsims/Tbal_rad_covfull/fastq/s*") for fastq in fastqs: ## create a new output file _, handle = os.path.split(fastq) outfile = gzip.open( os.path.join( "/home/deren/Documents/RADsims/Tbal_rad_varcov/fastq", handle), 'w') ## grab a random proportion of reads from this data set (0-100%) p = np.random.uniform(0.1, 0.9) ## iterate over file 4-lines at a time. infile = gzip.open(fastq, 'r') qiter = itertools.izip(*[iter(infile)]*4) ## sample read with probability p kept = 0 while 1: try: if np.random.binomial(1, p): outfile.write("".join(qiter.next())) kept += 1 else: _ = qiter.next() except StopIteration: break print '{} sampled at p={:.2f} kept {} reads'.format(handle, p, kept) infile.close() outfile.close() %%bash ## assemble the data set in pyrad rm params.txt pyrad -n >> log.txt 2>&1 sed -i '/## 1. /c\Tbal_rad_varcov ## 1. working dir ' params.txt sed -i '/## 2. /c\ ## 2. data loc ' params.txt sed -i '/## 3. /c\ ## 3. Bcode ' params.txt sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt sed -i '/## 11. /c\rad ## 11. datatype ' params.txt sed -i '/## 12. /c\2 ## 12. minCov ' params.txt sed -i '/## 13. /c\10 ## 13. maxSH' params.txt sed -i '/## 14. /c\Tbal ## 14. outname' params.txt sed -i '/## 18./c\Tbal_rad_varcov/fastq/*.gz ## sorted data ' params.txt sed -i '/## 24./c\99 ## 24. maxH' params.txt sed -i '/## 30./c\n,p,s ## 30. 
out format' params.txt pyrad -p params.txt -s 234567 >> log.txt 2>&1 ## load the data files Tbalvarcov = dataset("Tbalvarcov") Tbalvarcov.files.loci4 = "/home/deren/Documents/RADsims/Tbal_rad_varcov/outfiles/Tbal.loci" Tbalvarcov.files.tree = "/home/deren/Documents/RADsims/Tbal.tre" Tbalvarcov.files.s2 = "/home/deren/Documents/RADsims/Tbal_rad_varcov/stats/s2.rawedit.txt"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
For the mutation-disruption data set we can re-use the sim data from notebook 1
## balanced tree with only phylo missing data. Tbaldrop = dataset("Tbaldrop") Tbaldrop.files.loci4 = "/home/deren/Documents/RADsims/Tbal_rad_drop/outfiles/Tbal.loci" Tbaldrop.files.tree = "/home/deren/Documents/RADsims/Tbal.tre" Tbaldrop.files.s2 = "/home/deren/Documents/RADsims/Tbal_rad_drop/stats/s2.rawedit.txt"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
New Imbalanced tree data set
## make a new directory for the subsampled fastqs ! mkdir -p /home/deren/Documents/RADsims/Timb_rad_varcov/fastq/ ## grab the no-missing fastqs fastqs = glob.glob("/home/deren/Documents/RADsims/Timb_rad_covfull/fastq/s*") for fastq in fastqs: ## create a new output file _, handle = os.path.split(fastq) outfile = gzip.open( os.path.join( "/home/deren/Documents/RADsims/Timb_rad_varcov/fastq", handle), 'w') ## grab a random proportion of reads from this data set (0-100%) p = np.random.uniform(0.1, 0.9) ## iterate over file 4-lines at a time. infile = gzip.open(fastq, 'r') qiter = itertools.izip(*[iter(infile)]*4) ## sample read with probability p kept = 0 while 1: try: if np.random.binomial(1, p): outfile.write("".join(qiter.next())) kept += 1 else: _ = qiter.next() except StopIteration: break print '{} sampled at p={:.2f} kept {} reads'.format(handle, p, kept) infile.close() outfile.close() %%bash ## assemble the data set in pyrad rm params.txt pyrad -n >> log.txt 2>&1 sed -i '/## 1. /c\Timb_rad_varcov ## 1. working dir ' params.txt sed -i '/## 2. /c\ ## 2. data loc ' params.txt sed -i '/## 3. /c\ ## 3. Bcode ' params.txt sed -i '/## 6. /c\TGCAG ## 6. cutters ' params.txt sed -i '/## 7. /c\20 ## 7. Nproc ' params.txt sed -i '/## 10. /c\.82 ## 10. clust thresh' params.txt sed -i '/## 11. /c\rad ## 11. datatype ' params.txt sed -i '/## 12. /c\2 ## 12. minCov ' params.txt sed -i '/## 13. /c\10 ## 13. maxSH' params.txt sed -i '/## 14. /c\Timb ## 14. outname' params.txt sed -i '/## 18./c\Timb_rad_varcov/fastq/*.gz ## sorted data ' params.txt sed -i '/## 24./c\99 ## 24. maxH' params.txt sed -i '/## 30./c\n,p,s ## 30. out format' params.txt pyrad -p params.txt -s 234567 >> log.txt 2>&1 ## imbalanced tree with only phylo missind data. 
Timbvarcov = dataset("Timbvarcov") Timbvarcov.files.loci4 = "/home/deren/Documents/RADsims/Timb_rad_varcov/outfiles/Timb.loci" Timbvarcov.files.tree = "/home/deren/Documents/RADsims/Timb.tre" Timbvarcov.files.s2 = "/home/deren/Documents/RADsims/Timb_rad_varcov/stats/s2.rawedit.txt"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Imbalanced tree
## imbalanced tree with only phylo missing data. Timbdrop = dataset("Timbdrop") Timbdrop.files.loci4 = "/home/deren/Documents/RADsims/Timb_rad_drop/outfiles/Timb.loci" Timbdrop.files.tree = "/home/deren/Documents/RADsims/Timb.tre" Timbdrop.files.s2 = "/home/deren/Documents/RADsims/Timb_rad_drop/stats/s2.rawedit.txt" ## list of dsets dsets = [Tbaldrop, Timbdrop, Tbalvarcov, Timbvarcov]
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Get array of shared loci for each data set
## submit parallel [getarray] jobs asyncs = {} for dset in dsets: asyncs[dset.name] = lbview.apply(getarray, *[dset.files.loci4, dset.files.tree]) ## collect results ipyclient.wait() for dset in dsets: dset.lxs4, dset.tree = asyncs[dset.name].get() print dset.name, "\n", dset.lxs4, "\n"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Build array of model stats for each data set This takes a few minutes depending on how many CPUs you're running in parallel. One of the arguments to 'build_df4_parallel' is 'lbview', our load_balanced_view of the parallel processors.
## submit parallel [buildarray] jobs for dset in dsets: dset.df4 = build_df4_parallel(dset.tree, dset.lxs4, dset.files.s2, lbview) ## peek at one of the data sets print dsets[3].df4.head()
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Mean standardize the arrays
for dset in dsets: for var in ["nloci", "inputreads", "pdist"]: dset.df4[var] = (dset.df4[var] - dset.df4[var].mean()) / dset.df4[var].std() ## peek again print dsets[3].df4.head()
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
To parallelize the next step we need to send our functions to the remote namespace A much cleaner way to do this would have been to collect all the functions into a Python module and then just import that. Since I'm writing everything out in this notebook to be more didactic, though, we need to perform this step instead.
ipyclient[:].push( dict( calculate_covariance=calculate_covariance, check_covariance=check_covariance, get_path=get_path, rModelFit=rModelFit, rModelFit2=rModelFit2, estimate_lambda=estimate_lambda, get_lik_lambda=get_lik_lambda ) )
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Plot sim data set with a random 1000 quartets sampled
## pass objects into R rdf0 = dsets[0].df4.loc[np.random.choice(range(630000), 1000), :] rdf1 = dsets[1].df4.loc[np.random.choice(range(630000), 1000), :] rdf2 = dsets[2].df4.loc[np.random.choice(range(630000), 1000), :] rdf3 = dsets[3].df4.loc[np.random.choice(range(630000), 1000), :] baltre = dsets[0].tree.write() imbtre = dsets[1].tree.write() %R -i rdf0,rdf1,rdf2,rdf3,baltre,imbtre %%R -w 400 -h 400 ## make tree and plot data #pdf("simulation_model_fits.pdf") tre <- read.tree(text=baltre) plot(tre, 'u', no.margin=TRUE) plot(rdf0[,c(5,6,7)], main="Balanced tree - phylo missing") plot(rdf2[,c(5,6,7)], main="Balanced tree - low-cov missing") tre <- read.tree(text=imbtre) plot(tre, 'u', no.margin=TRUE) plot(rdf1[,c(5,6,7)], main="Imbalanced tree - phylo missing") plot(rdf3[,c(5,6,7)], main="Imbalanced tree - low-cov missing") #dev.off() dset = dsets[0] print dset.name fitmodels(dset.tree, dset.df4, nsamples=200) dset = dsets[1] print dset.name fitmodels(dset.tree, dset.df4, nsamples=200) dset = dsets[2] print dset.name fitmodels(dset.tree, dset.df4, nsamples=200) dset = dsets[3] print dset.name fitmodels(dset.tree, dset.df4, nsamples=200) def AICc(LL, k, n): return (-2*LL) + 2*k + ((2*k * (k+1))/float(n-k-1))
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Run 100 replicate subsample models for each data set
## store results in this array ntests = 100 nsamples = 200 ## for each test for dset in dsets: ## create output storage arrays dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64) dset.LL = np.zeros((2, ntests), dtype=np.float64) dset.lam = np.zeros(ntests, dtype=np.float64) dset.asyncs = {} ## send jobs to get results in parallel for tidx in xrange(ntests): dset.asyncs[tidx] = lbview.apply(fitmodels, *[dset.tree, dset.df4, nsamples]) ## check progress on running jobs ipyclient.wait_interactive() ## enter results into results array when finished for dset in dsets: ## create empty results arrays dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64) dset.LL = np.zeros((ntests, 2), dtype=np.float64) dset.lam = np.zeros(ntests, dtype=np.float64) for tidx in range(ntests): if dset.asyncs[tidx].ready(): res = dset.asyncs[tidx].get() dset.tab[tidx] = res[0] dset.LL[tidx] = res[1], res[2] dset.lam[tidx] = res[3] else: print "job: [{}, {}] is still running".format(dset.name, tidx)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Simulated data set results In all of the data sets the phylo-corrected model was a better fit to the data by 30-90 AIC points. When data were missing due to low sequencing coverage, shared data were best predicted by inputreads; when data were missing due to mutation-disruption, they were best explained by phylogenetic distances.
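The "AIC points" above can be read through the AICc helper defined earlier in the notebook; here is a small illustration of a delta-AICc comparison, where the log-likelihood, parameter-count, and sample-size values are made up purely for demonstration:

```python
def AICc(LL, k, n):
    # small-sample corrected AIC: -2*logLik + 2k + 2k(k+1)/(n-k-1)
    return (-2 * LL) + 2 * k + ((2 * k * (k + 1)) / float(n - k - 1))

# hypothetical fits: raw (no covariance) model vs phylo-corrected model
LL_raw, LL_phy = -520.0, -495.0
k, n = 4, 200  # same number of parameters and observations for both models

delta = AICc(LL_raw, k, n) - AICc(LL_phy, k, n)
# with k and n equal the correction terms cancel, so delta = -2*(LL_raw - LL_phy);
# a positive delta of this size favors the phylo-corrected model
```

A difference of 30-90 in this quantity, as reported above, is far beyond the conventional ~2-point threshold for distinguishing models.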
def results_table(dset): tdat = dset.tab.mean(axis=0) df = pd.DataFrame( index=["fit"], data=[ pd.Series([np.mean(dset.LL[:, 0] - dset.LL[:, 1]), dset.lam.mean(), tdat[1, 0], tdat[1, 3], tdat[2, 0], tdat[2, 3]], index=["deltaAIC", "lambda", "raw_coeff", "raw_P", "phy_coeff", "phy_P" ]), ]) return df for dset in dsets: print dset.name, "---"*23 print results_table(dset) print "---"*27, "\n"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Confidence intervals The fit for this data set yields a negative AIC both with and without a covariance matrix. This shows that the amount of input data (raw) is a better predictor of shared data between samples than is their phylogenetic distance. See the plot below.
## get a stats module import scipy.stats as st def get_CI(a): mean = np.mean(a) interval = st.t.interval(0.95, len(a)-1, loc=np.mean(a), scale=st.sem(a)) return mean, interval[0], interval[1] for dset in dsets: print dset.name print "LL ", get_CI(dset.LL[:,0]-dset.LL[:,1]) print "lambda", get_CI(dset.lam) print "raw_coeff", get_CI(dset.tab[:, 1, 0]) print "raw_P", get_CI(dset.tab[:, 1, 3]) print "phy_coeff", get_CI(dset.tab[:, 2, 0]) print "phy_P", get_CI(dset.tab[:, 2, 3]) print ""
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
How to deal with large matrices (absurd run times) OK, so in our test example it takes about 10 minutes to compute a matrix with only 4000 elements, meaning we can expect that a matrix of several hundred thousand elements will pretty much never finish. One workaround for this is to take a sub-sampling approach. The full matrix for 13 taxa contains ~700 induced quartets, while the full data set for 65 taxa contains ~700K. For the latter we will subsample 100 matrices composed of 1000 random quartets each. Then we will compute the covariance matrix of the sampled quartets and fit a regression model. Finally, the results over all 100 subsampled replicates will be reported as a 95% confidence interval. Get all 10 empirical data sets
## data set 1 (Viburnum) data1 = dataset("data1") data1.files.loci4 = "/home/deren/Documents/RADmissing/empirical_1/fullrun/outfiles/empirical_1_full_m4.loci" data1.files.tree = "/home/deren/Documents/RADmissing/empirical_1/fullrun/RAxML_bipartitions.empirical_1_full_m4" data1.files.s2 = "/home/deren/Documents/RADmissing/empirical_1/fullrun/stats/s2.rawedit.txt" ## data set 2 (Phrynosomatidae) data2 = dataset("data2") data2.files.loci4 = "/home/deren/Documents/RADmissing/empirical_2/outfiles/empirical_2_m4.loci" data2.files.tree = "/home/deren/Documents/RADmissing/empirical_2/RAxML_bipartitions.empirical_2" data2.files.s2 = "/home/deren/Documents/RADmissing/empirical_2/stats/s2.rawedit.txt" ## data set 3 (Quercus) data3 = dataset("data3") data3.files.loci4 = "/home/deren/Documents/RADmissing/empirical_3/outfiles/empirical_3_m4.loci" data3.files.tree = "/home/deren/Documents/RADmissing/empirical_3/RAxML_bipartitions.empirical_3" data3.files.s2 = "/home/deren/Documents/RADmissing/empirical_3/stats/s2.rawedit.txt" ## data set 4 (Orestias) data4 = dataset("data4") data4.files.loci4 = "/home/deren/Documents/RADmissing/empirical_4/outfiles/empirical_4_m4.loci" data4.files.tree = "/home/deren/Documents/RADmissing/empirical_4/RAxML_bipartitions.empirical_4" data4.files.s2 = "/home/deren/Documents/RADmissing/empirical_4/stats/s2.rawedit.txt" ## data set 5 (Heliconius) data5 = dataset("data5") data5.files.loci4 = "/home/deren/Documents/RADmissing/empirical_5/outfiles/empirical_5_m4.loci" data5.files.tree = "/home/deren/Documents/RADmissing/empirical_5/RAxML_bipartitions.empirical_5" data5.files.s2 = "/home/deren/Documents/RADmissing/empirical_5/stats/s2.rawedit.txt" ## data set 6 (Finches) data6 = dataset("data6") data6.files.loci4 = "/home/deren/Documents/RADmissing/empirical_6/outfiles/empirical_6_m4.loci" data6.files.tree = "/home/deren/Documents/RADmissing/empirical_6/RAxML_bipartitions.empirical_6" data6.files.s2 = 
"/home/deren/Documents/RADmissing/empirical_6/stats/s2.rawedit.txt" ## data set 7 (Danio) data7 = dataset("data7") data7.files.loci4 = "/home/deren/Documents/RADmissing/empirical_7/outfiles/empirical_7_m4.loci" data7.files.tree = "/home/deren/Documents/RADmissing/empirical_7/RAxML_bipartitions.empirical_7" data7.files.s2 = "/home/deren/Documents/RADmissing/empirical_7/stats/s2.rawedit.txt" ## data set 8 (Barnacles) data8 = dataset("data8") data8.files.loci4 = "/home/deren/Documents/RADmissing/empirical_8/outfiles/empirical_8_m4.loci" data8.files.tree = "/home/deren/Documents/RADmissing/empirical_8/RAxML_bipartitions.empirical_8" data8.files.s2 = "/home/deren/Documents/RADmissing/empirical_8/stats/s2.rawedit.txt" ## data set 9 (Ohomopterus) data9 = dataset("data9") data9.files.loci4 = "/home/deren/Documents/RADmissing/empirical_9/outfiles/empirical_9_m4.loci" data9.files.tree = "/home/deren/Documents/RADmissing/empirical_9/RAxML_bipartitions.empirical_9" data9.files.s2 = "/home/deren/Documents/RADmissing/empirical_9/stats/s2.rawedit.txt" ## data set 10 (Pedicularis) data10 = dataset("data10") data10.files.loci4 = "/home/deren/Documents/RADmissing/empirical_10/outfiles/empirical_10_m4.loci" data10.files.tree = "/home/deren/Documents/RADmissing/empirical_10/RAxML_bipartitions.empirical_10_m4" data10.files.s2 = "/home/deren/Documents/RADmissing/empirical_10/stats/s2.rawedit.txt" ## put all in a list datasets = [data1, data2, data3, data4, data5, data6, data7, data8, data9, data10]
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Create large data frames for each data set
## submit parallel [getarray] jobs asyncs = {} for dset in datasets: asyncs[dset.name] = lbview.apply(getarray, *[dset.files.loci4, dset.files.tree]) ## collect results ipyclient.wait() for dset in datasets: dset.lxs4, dset.tree = asyncs[dset.name].get() print dset.name, "\n", dset.lxs4, "\n" ## submit parallel [buildarray] jobs for dset in datasets: dset.df4 = build_df4_parallel(dset.tree, dset.lxs4, dset.files.s2, lbview) ## peek at one of the data sets print datasets[0].df4.head() for dset in datasets: for var in ["nloci", "inputreads", "pdist"]: dset.df4[var] = (dset.df4[var] - dset.df4[var].mean()) / dset.df4[var].std() ## peek again print datasets[0].df4.head() ## pass objects into R rdf0 = datasets[0].df4.loc[np.random.choice(range(630000), 1000), :] rdftre = datasets[0].tree.write() %R -i rdf0,rdftre %%R -w 400 -h 400 ## make tree and plot data #pdf("simulation_model_fits.pdf") tre <- read.tree(text=rdftre) plot(tre, 'u', no.margin=TRUE, show.tip.label=FALSE) plot(rdf0[,c(5,6,7)], main="Viburnum tree -- empirical") ## store results in this array ntests = 100 nsamples = 200 ## for each test for dset in datasets: ## create output storage arrays dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64) dset.LL = np.zeros((2, ntests), dtype=np.float64) dset.lam = np.zeros(ntests, dtype=np.float64) dset.asyncs = {} ## send jobs to get results in parallel for tidx in xrange(ntests): dset.asyncs[tidx] = lbview.apply(fitmodels, *[dset.tree, dset.df4, nsamples]) ipyclient.wait_interactive()
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Collect the results
## enter results into results array when finished for dset in datasets: ## create empty results arrays dset.tab = np.zeros((ntests, 3, 4), dtype=np.float64) dset.LL = np.zeros((ntests, 2), dtype=np.float64) dset.lam = np.zeros(ntests, dtype=np.float64) for tidx in range(ntests): if dset.asyncs[tidx].ready(): res = dset.asyncs[tidx].get() dset.tab[tidx] = res[0] dset.LL[tidx] = res[1], res[2] dset.lam[tidx] = res[3] else: print "job: [{}, {}] is still running".format(dset.name, tidx)
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Print results means
for dset in datasets: print dset.name, "---"*23 print results_table(dset) print "---"*27, "\n"
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
So, for example, why is this one a poor fit for pdist? There are three clouds of points corresponding to comparisons within and between the major clades. Some pairs with little phylo distance between them share tons of data, while some pairs with lots of phylo distance between them share very few data. It comes down to whether those data points include the same few samples or not. When we control for their non-independence, those clouds of points represent much less information, and in essence the pattern disappears.
## pass objects into R rdf0 = datasets[5].df4.loc[np.random.choice(range(10000), 1000), :] rdftre = datasets[5].tree.write() %R -i rdf0,rdftre %%R -w 400 -h 400 ## make tree and plot data #pdf("simulation_model_fits.pdf") tre <- read.tree(text=rdftre) plot(tre, 'u', no.margin=TRUE, show.tip.label=FALSE) plot(rdf0[,c(5,6,7)], main="Finch tree -- empirical")
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Confidence intervals
for dset in datasets: print dset.name print "LL ", get_CI(dset.LL[:,0]-dset.LL[:,1]) print "lambda", get_CI(dset.lam) print "raw_coeff", get_CI(dset.tab[:, 1, 0]) print "raw_P", get_CI(dset.tab[:, 1, 3]) print "phy_coeff", get_CI(dset.tab[:, 2, 0]) print "phy_P", get_CI(dset.tab[:, 2, 3]) print ""
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Make all plots for supp fig 4
## pass objects into R dset = datasets[0] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Vib.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Viburnum") dev.off() dset = datasets[1] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Phryn.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Phrynosomatidae") dev.off() dset = datasets[2] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Quer.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Quercus") dev.off() dset = datasets[3] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Orest.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Orestias") dev.off() dset = datasets[4] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Helic.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Heliconius") dev.off() dset = datasets[5] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Finch.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Finches") dev.off() dset = datasets[6] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Danio.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Danio") dev.off() dset = datasets[7] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Barnacles.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Barnacles") dev.off() dset = datasets[8] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R 
-w 400 -h 400 ## make tree and plot data pdf("empscatter_Ohomo.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Ohomopterus") dev.off() dset = datasets[9] rdf = dset.df4.loc[np.random.choice(dset.df4.shape[0], 1000), :] %R -i rdf %%R -w 400 -h 400 ## make tree and plot data pdf("empscatter_Pedi.pdf", height=5, width=5) plot(rdf[,c(5,6,7)], main="Pedicularis") dev.off()
emp_and_sims_nb_pgls.ipynb
dereneaton/RADmissing
mit
Problem Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
batch_size = 128 # Parameters learning_rate = 0.001 training_epochs = 15 display_step = 1 # Network Parameters n_hidden_1 = 1024 # 1st layer number of features def multilayer_perceptron(x, weights, biases): # Hidden layer with RELU activation layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1']) layer_1 = tf.nn.relu(layer_1) # output layer with softmax logits = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']) prediction = tf.nn.softmax(logits) return prediction, logits graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. # tf Graph input x = tf.placeholder("float", [None, image_size * image_size]) y = tf.placeholder("float", [None, num_labels]) #tf_train_dataset = tf.placeholder(tf.float32,shape=(batch_size, image_size * image_size)) #tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) weights = { 'h1': tf.Variable(tf.random_normal([image_size * image_size, n_hidden_1])), 'h2': tf.Variable(tf.random_normal([n_hidden_1, num_labels])) } biases = { 'b1': tf.Variable(tf.random_normal([n_hidden_1])), 'b2': tf.Variable(tf.random_normal([num_labels])) } #define model train_prediction, train_logits = multilayer_perceptron(x, weights, biases) loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(train_logits, y)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # validation, and test data. valid_prediction,valid_logits = multilayer_perceptron(tf_valid_dataset, weights, biases) test_prediction, test_logits = multilayer_perceptron(tf_test_dataset, weights, biases) num_steps = 3001 with tf.Session(graph=graph) as session: tf.initialize_all_variables().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. 
# Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. feed_dict = {x : batch_data, y : batch_labels} _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
images/2_fullyconnected.ipynb
miltonsarria/dsp-python
mit
B Write a function, subarray, which takes two arguments: a NumPy array of data a NumPy array of indices The function should return a NumPy array that corresponds to the elements of the input array of data selected by the indices array. For example, subarray([1, 2, 3], [2]) should return a NumPy array of [3].
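One way this could be written (a sketch of a possible answer, not an official solution) uses NumPy's integer "fancy" indexing, which selects elements by an array of positions:

```python
import numpy as np

def subarray(data, indices):
    # an integer index array picks out exactly those positions from the data
    return np.asarray(data)[np.asarray(indices)]

# e.g. subarray([1, 2, 3], [2]) gives array([3])
```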
import numpy as np np.random.seed(5381) x1 = np.random.random(43) i1 = np.random.randint(0, 43, 10) a1 = np.array([ 0.24317871, 0.16900041, 0.20687451, 0.38726974, 0.49798077, 0.32797843, 0.18801287, 0.29021025, 0.65418547, 0.78651195]) np.testing.assert_allclose(a1, subarray(x1, i1), rtol = 1e-5) x2 = np.random.random(74) i2 = np.random.randint(0, 74, 5) a2 = np.array([ 0.96372034, 0.84256813, 0.08188566, 0.71852542, 0.92384611]) np.testing.assert_allclose(a2, subarray(x2, i2), rtol = 1e-5)
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit
C Write a function, length, which computes the length of a 1-dimensional NumPy array. Recall that the length $l$ of a vector $\vec{x}$ is defined as the square root of the sum of all the elements in the vector squared: $$ l = \sqrt{x_1^2 + x_2^2 + ... + x_n^2} $$ Here's the rub: you should do this without any loops. Use pure vectorized programming to compute the length of an input NumPy array. Return the length (a floating point value) from the function. You can use the np.sum() function, but no others.
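A sketch of one possible answer, honoring the constraint that only `np.sum()` may be used as a helper (the square root is taken with `** 0.5` arithmetic rather than a function call):

```python
import numpy as np

def length(x):
    # square elementwise, reduce with np.sum, then take the square root
    return np.sum(x ** 2) ** 0.5
```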
import numpy as np np.random.seed(84853) x1 = np.random.random(848) a1 = 17.118570444957424 np.testing.assert_allclose(a1, length(x1)) import numpy as np np.random.seed(596862) x1 = np.random.random(43958) a1 = 120.98201554071815 np.testing.assert_allclose(a1, length(x1))
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit
D Write a function less_than which takes two parameters: a NumPy array a floating-point number, the threshold You should use a boolean mask to return only the values in the NumPy array that are less than the specified threshold value (the second parameter). No loops are allowed. For example, less_than([1, 2, 3], 2.5) should return a NumPy array of [1, 2].
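One possible sketch: the comparison `arr < threshold` builds a boolean mask, and indexing the array with that mask keeps only the `True` positions:

```python
import numpy as np

def less_than(arr, threshold):
    # boolean mask selects elements strictly below the threshold
    arr = np.asarray(arr)
    return arr[arr < threshold]

# e.g. less_than([1, 2, 3], 2.5) gives array([1, 2])
```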
import numpy as np np.random.seed(85928) x = np.random.random((10, 20, 30)) t = 0.001 y = np.array([ 0.0005339 , 0.00085714, 0.00091265, 0.00037283]) np.testing.assert_allclose(y, less_than(x, t))
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit
E Write a function greater_than which takes two parameters: a NumPy array a floating-point number, the threshold You should use a boolean mask to return only the values in the NumPy array that are greater than the specified threshold value (the second parameter). No loops are allowed. For example, greater_than([1, 2, 3], 2.5) should return a NumPy array of [3].
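A sketch mirroring Part D, with the comparison flipped:

```python
import numpy as np

def greater_than(arr, threshold):
    # boolean mask selects elements strictly above the threshold
    arr = np.asarray(arr)
    return arr[arr > threshold]

# e.g. greater_than([1, 2, 3], 2.5) gives array([3])
```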
import numpy as np np.random.seed(592582) x = np.random.random((10, 20, 30)) t = 0.999 y = np.array([ 0.99910167, 0.99982779, 0.99982253, 0.9991043 ]) np.testing.assert_allclose(y, greater_than(x, t))
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit
F Write a function in_between which takes three parameters: a NumPy array a lower threshold, a floating point value an upper threshold, a floating point value You should use a boolean mask to return only the values in the NumPy array that are in between the two specified threshold values, lower and upper. No loops are allowed. For example, in_between([1, 2, 3], 1, 3) should return a NumPy array of [2]. Hint: you can use your functions from Parts D and E to help!
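One possible sketch combines two boolean masks elementwise with `&` (equivalently, the functions from Parts D and E could be chained):

```python
import numpy as np

def in_between(arr, lower, upper):
    # combine two masks; strict inequalities on both sides of the range
    arr = np.asarray(arr)
    return arr[(arr > lower) & (arr < upper)]

# e.g. in_between([1, 2, 3], 1, 3) gives array([2])
```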
import numpy as np np.random.seed(7472) x = np.random.random((10, 20, 30)) lo = 0.499 hi = 0.501 y = np.array([ 0.50019884, 0.50039172, 0.500711 , 0.49983418, 0.49942259, 0.4994417 , 0.49979261, 0.50029046, 0.5008376 , 0.49985266, 0.50015914, 0.50068227, 0.50060399, 0.49968918, 0.50091042, 0.50063015, 0.50050032]) np.testing.assert_allclose(y, in_between(x, lo, hi)) import numpy as np np.random.seed(14985) x = np.random.random((30, 40, 50)) lo = 0.49999 hi = 0.50001 y = np.array([ 0.50000714, 0.49999045]) np.testing.assert_allclose(y, in_between(x, lo, hi))
assignments/A6/A6_Q2.ipynb
eds-uga/csci1360-fa16
mit