For more control over the output function, including the __doc__, __name__, and __signature__ attributes used by the help function, let's revisit the Block class to wrap the functionality seen so far. All together now: as we've seen, all the information we need is available to manipulate and evaluate systems of equations in a more informative way.
b2 = Block('''c = cos(theta)
s = sin(theta)
x' = x*c - y*s
y' = x*s + y*c
''', '2-D Rotate', 'x y theta', "x' y'")
print('\n', vars(b2))
files/Process.ipynb
jimaples/jimaples.github.io
mit
When it comes to function arguments, Python 3.5 has seen a lot of development beyond Python 2.7. Since the Block code was originally developed in Python 2.7, let's replace those _Dummy arguments from sympy.lambdify.
f = b2.lambdify()
help(f["x'"])

# All lambdify calls are made with the full set of inputs
p = []
# Update the signatures for each compiled expression
for k in f.keys():
    sig = inspect.signature(f[k])
    if len(p) == 0:
        for i, s in zip(sig.parameters.values(), map(str, b2.ins)):
            p.append(i.replace(name=s))
    f[k].__signature__ = sig.replace(parameters=p)
    help(f[k])

import types

# backup functions before overwriting them
Block.__lambdify = Block.lambdify
b2.__lambdify = b2.lambdify
# Rename the old function
b2._lambdify = b2.lambdify
Block._lambdify = Block.lambdify
#test._lambdify = types.MethodType(test.lambdify, test)
#Block._lambdify = types.MethodType(Block.lambdify, Block)

def lambdify(self, *args, **kwargs):
    # Call the base function with any arguments
    self._lambdify(*args, **kwargs)
    # Update the signatures for each compiled expression
    for k in self.lambdas.keys():
        sig = inspect.signature(self.lambdas[k])
        p = []
        for i, s in zip(sig.parameters.values(), map(str, self.ins)):
            p.append(i.replace(name=s))
        self.lambdas[k].__signature__ = sig.replace(parameters=p)
    return self.lambdas

# update the old __doc__ string
lambdify.__doc__ = Block._lambdify.__doc__ + ' and set the arguments to the variable names'

# use the new function in the class and existing instance
b2.lambdify = types.MethodType(lambdify, b2)
Block.lambdify = types.MethodType(lambdify, Block)

f = b2.lambdify()
help(f["x'"])
help(inspect.Signature)
And since it's coming up, let's go ahead and improve the Block.eval function used to call the various lambdify functions. Extra credit to anyone who figures out why I couldn't do the same sort of __signature__ update without getting a NameError exception.
help(b2._eval)
help(b2.eval)

# Backup the old functions
b2._eval_save = b2.eval
Block._wrap_eval_save = Block._wrap_eval

# instead of wrapping _eval, just update its help info
def wrap_eval(self):
    '''create _eval() wrapper function with useful docstring'''
    def tuple_repr(i):
        if type(i) == tuple:
            o = i
            s = repr(i)
        else:
            o = (i,)
            s = '(' + repr(i) + ')'
        return o, s

    ins, s_ins = tuple_repr(self.ins)
    outs, s_outs = tuple_repr(self.outs)

    f = self._eval
    f.__func__.__name__ = str(self.__class__.__name__) + '(' + self.name + ').eval'
    s = 'Given inputs ' + s_ins + ', calculate outputs ' + s_outs + ' as follows:'
    for e in self.eqn:
        s += '\n ' + str(e.lhs) + ' = ' + str(e.rhs)
    s += '\n\nBlock.outputMode = ' + repr(Block.outputMode) + '\n'
    f.__func__.__doc__ = s
    return f

b2.eval = wrap_eval(b2)
# use the new function in the class
Block._wrap_eval = types.MethodType(wrap_eval, Block)

help(b2.eval)
b2.eval(1.0, 2.0, 3.0)
The outputs were specified when the Block instance was created, so lambdify all of those.
f = b2.lambdify()
f, b2.lambdas
help(f["x'"])
The functions were made with sympy.lambdify, so they get all of its benefits, like support for NumPy arrays. The Block class also has an eval function that calls all of the lambdified results.
out = f["x'"](1, 0, np.linspace(0, np.pi, 8+1))
print(out, type(out))
help(b2.eval)
b2.eval(1, 0, np.linspace(0, np.pi, 8+1))
Potential Updates:
- Set of 2 or more blocks with interconnects
- Solve for selected outputs
- Solve for selected inputs
- Draw interconnect diagram
- Create interface tables

Unnecessary Pictures

Because figures are fun. Using the 2-D rotation, we can rotate a constellation of points to best match expectations.

Links: Top Intro Text LaTeX Solver Evaluating Designs Help
from matplotlib import pyplot as plt
from matplotlib import cm
from matplotlib import colors

# for IPython plotting
%pylab inline

# random (x,y) points
xy1 = np.random.normal(0, 0.5, (8, 2))
x, y = np.hsplit(xy1, 2)

# rotate points on a log-scale
a = [0] + np.logspace(-2, 1, 20, base=5)
b2.verbose = False
xy2 = b2.eval(x, y, a)
xy2["2-D Rotate.x'"].shape

def example(xy1, xy2):
    '''Move plot code into a function for cleaner presentation'''
    # split original points into separate x/y arrays
    x, y = np.hsplit(xy1, 2)

    # Fetch a matplotlib color map to use for line colors
    c = cm.get_cmap('gist_heat')
    cNorm = matplotlib.colors.Normalize(vmin=0, vmax=20)
    scalarMap = cm.ScalarMappable(norm=cNorm, cmap=c)

    # calculate "unit" circles for each point
    a = np.reshape(np.linspace(0, 2*np.pi, 60), (60, 1))
    unit = np.cos(a) + 1j*np.sin(a)
    circ = np.abs(x + 1j*y) * unit.T

    # set up the figure
    plt.figure(figsize=(12, 9))
    x = xy2["2-D Rotate.x'"].T
    y = xy2["2-D Rotate.y'"].T

    # "unit" circles for background
    for i in range(circ.shape[0]):
        s = '|iq{:d}|={:0.3f}'.format(i, np.abs(circ[i][0]))
        plt.plot(circ[i].real, circ[i].imag, ls=':', color='k', label=s)
        #plt.text(circ[i][0].real, circ[i][0].imag, str(i)+' ')
    # plot each rotation point
    for i in range(x.shape[0]):
        plt.plot(x[i], y[i], color=scalarMap.to_rgba(i), marker='o', ls='none')
    # label the first rotation points
    for i in range(x.shape[1]):
        plt.text(x[0][i]+0.04, y[0][i]+0.04, str(i), color='blue')
    plt.axis('equal')
    plt.grid()
    plt.legend()

example(xy1, xy2)

# Fetch a matplotlib color map to use for line colors
c = cm.get_cmap('gist_heat')
cNorm = matplotlib.colors.Normalize(vmin=0, vmax=20)
scalarMap = cm.ScalarMappable(norm=cNorm, cmap=c)

# calculate "unit" circles for each point
a = np.reshape(np.linspace(0, 2*np.pi, 60), (60, 1))
unit = np.cos(a) + 1j*np.sin(a)
circ = np.abs(x + 1j*y) * unit.T

# set up x/y
x = xy2["2-D Rotate.x'"].T
y = xy2["2-D Rotate.y'"].T
circ.shape

plt.figure(figsize=(12, 6))
# "unit" circles for background
for i in range(circ.shape[0]):
    s = '|iq{:d}|={:0.3f}'.format(i, np.abs(circ[i][0]))
    plt.plot(circ[i].real, circ[i].imag, ls=':', color='k', label=s)
# plot each rotation point
for i in range(x.shape[0]):
    plt.plot(x[i], y[i], color=scalarMap.to_rgba(i), marker='o', ls='none')
# label the first rotation points
for i in range(x.shape[1]):
    plt.text(x[0][i]+0.04, y[0][i]+0.04, str(i), color='blue')
plt.axis('equal')
plt.grid()
i = plt.legend()
More Information SymPy Help Links: Top Intro Text LaTeX Solver Evaluating Designs Help sympy.*.subs(*args, **kwargs) sympy.lambdify(args, expr, modules=None)
print(type(b.eqn[0]), '\n')
help(b.eqn[0].subs)
help(sympy.lambdify)
Upgrading Code to Python 3 I originally made this notebook using Python 2.7. Fortunately, there's a Python library for that too: lib2to3, although the canned 2to3 application worked just fine for me. Although this conversion doesn't usually change functionality, there may be some slight changes, as seen when running help on the sympy.lambdify output functions.
%%file process.py
import sympy
import numpy as np
from inspect import getargspec

def parseExpr(expr=''):
    '''Helper function to iterate through a list of equations'''
    err = 'Malformed expression! Does not match "y = f(x)"\n {0:s}'
    for s in expr.strip().split('\n'):
        # Parse anything that looks like an equation and isn't commented out
        if ('=' in s) and (s[0] != '#'):
            # convert expression to sympy
            y, f = map(str.strip, s.split('=', 1))
            y = sympy.Symbol(y)
            f = sympy.sympify(f)
            assert type(y) == sympy.symbol.Symbol, err.format(s)
            yield (y, f)

class Block(object):
    '''Block(expr, inputs='', outputs='', functions={})

    Generic processing block that performs specified calculations
    on given inputs in such a way to ease documentation
    '''
    verbose = True       # Enable verbose output
    outputMode = 'dict'  # eval() return type ('dict' or 'tuple')

    def __init__(self, expr, name='', inputs='', outputs='', functions={}):
        '''Create a Block instance'''
        if Block.verbose:
            s = 'Creating Block(expr={:s}, name={:s}, inputs={:s}, outputs={:s}, functions='
            s = s.format(*map(repr, [expr, name, inputs, outputs])) + '{'
            if functions:
                s_functions = []
                for k, v in functions.iteritems():
                    s2 = k + ':' + v.__name__
                    args, varargs, keywords, defaults = getargspec(v)
                    if varargs != None:
                        args += ['*args']
                    if keywords != None:
                        args += ['*kwargs']
                    s2 += '(' + ','.join(args) + ')'
                    s_functions.append(s2)
                s += ','.join(s_functions)
            print s + '})'

        self.name = name
        self.user = functions

        # save in list form for ordered outputs
        eqn = tuple(parseExpr(expr))
        self.eqn = tuple([ sympy.Eq(k, v) for k, v in eqn ])
        # placeholder for compiled expressions
        self.lambdas = {}

        # Extract inputs and functions used
        expr_inputs = set()
        expr_functs = set()
        for k, v in eqn:
            for arg in sympy.preorder_traversal(v):
                if arg.is_Symbol:
                    expr_inputs.add(arg)
                elif arg.is_Function:
                    expr_functs.add(arg.func)
        # save SymPy style .args and .func attributes
        self.args = tuple(expr_inputs)
        self.func = tuple(expr_functs)

        if inputs:
            self.ins = sympy.symbols(inputs)
        else:
            # extract inputs from expr
            self.ins = tuple(self.args)

        outs = tuple([ i[0] for i in eqn ])
        if outputs:
            self.outs = sympy.symbols(outputs)
            print 'outs=' + repr(outs)
            self.hidden = tuple([ i for i in outs if i not in self.outs ])
        else:
            # extract outputs from expr
            self.outs = outs
            self.hidden = tuple()
        if Block.verbose:
            print ' Extracting outputs:', self.outs

        # create _eval() wrapper function with useful docstring
        self.eval = self._wrap_eval()

    def _wrap_eval(self):
        '''create _eval() wrapper function with useful docstring'''
        if type(self.ins) == tuple:
            ins = self.ins
            s = repr(self.ins)
        else:
            ins = (self.ins,)
            s = '(' + repr(self.ins) + ')'
        # not sure how to do a decorator with non-generic dynamic args
        func = sympy.lambdify(self.ins, '_eval' + s, {'_eval': self._eval})
        func.__name__ = 'Block.eval'
        s = repr(self.outs)
        if type(self.outs) != tuple:
            s = '(' + s + ')'
        func.__doc__ = 'Calculate outputs ' + s + ' as follows:\n '
        func.__doc__ += '\n '.join(map(str, self.eqn))
        func.__doc__ += '\n\nBlock.outputMode = ' + repr(Block.outputMode) + '\n'
        return func

    def __str__(self):
        '''Support str calls on Block instances'''
        s = 'Block: ' + str(self.name) + '\n'
        s += '\n'.join( str(e).replace('==', '=') for e in self.eqn )
        return s

    def __repr__(self):
        '''return representation that sympy will automatically show as LaTeX'''
        s = 'Block'
        if self.name:
            s += '(' + self.name + ')'
        return repr({s: [ sympy.Eq(sympy.Symbol(k), v) for k, v in self.eqn.items() ]})
    # method doesn't work currently
    __repr__ = __str__

    def pretty(self):
        '''Show a SymPy pretty print version of Block equations'''
        print '\nBlock:', self.name
        print '\n'.join( sympy.pretty(e) for e in self.eqn )

    @property
    def latex(self):
        '''generate a latex version of Block equations'''
        # \verb;*; leaves contents unformatted
        s = r'\underline{\verb;Block: ' + str(self.name) + ';} \\\\ \n'
        s += ' \\\\ \n'.join( sympy.latex(e) for e in self.eqn )
        return s

    def lambdify(self, unknowns=None):
        '''generate an executable function using NumPy'''
        lookup = 'numpy'
        # by default, unknowns are everything but outputs
        if unknowns == None:
            unknowns = self.outs + self.hidden
        if self.verbose:
            print 'Compiling Block ' + repr(self.name) + ' expressions:'

        # check for missing functions
        defined = set(dir(np) + self.user.keys())
        missing = [ f for f in self.func if str(f) not in defined ]
        if missing:
            s = 'Unable to find functions ' + str(missing) + ' in ' + str(lookup)
            s += ' or user-defined functions ' + str(self.user.keys())
            raise LookupError, s

        # solve equations in terms of unknowns
        eqn = sympy.solve(self.eqn, unknowns, dict=True)
        if self.verbose:
            print 'Solving equations for unknowns:', unknowns
            print self.eqn
            print ' =', eqn, '\n'

        s = " {0:s} = sympy.lambdify({1:s}, {2:s}, '{3:s}')"  # verbose string
        for k, v in eqn[0].iteritems():
            if k in self.hidden:
                # skip intermediate values
                continue
            k = str(k)  # convert output variable from sympy.Symbol
            if self.verbose:
                print s.format(*map(str, (k, self.ins, v, (lookup, self.user.keys()))))
            f = sympy.lambdify(self.ins, v, (lookup, self.user))
            # Update the function name and doc string from "<lambda> lambda x, y, theta"
            f.__name__ = "Block"
            if self.name:
                f.__name__ += '(' + self.name + ')'
            f.__name__ += '.lambdas[' + k + ']'
            f.__doc__ = k + ' = ' + str(v)
            self.lambdas[k] = f
        return self.lambdas

    def _eval(self, *args, **kwargs):
        '''evaluate outputs for given inputs'''
        # make sure in/outputs are iterable
        if hasattr(self.ins, '__iter__'):
            ins = self.ins
        else:
            ins = [self.ins]
        # make sure all inputs are given
        assert len(args) == len(ins)
        # default kwargs values
        if not kwargs.has_key('output'):
            output = Block.outputMode
        # compile expressions if needed
        if not self.lambdas:
            self.lambdify()
        if self.verbose:
            s = '\nBlock.eval('
            s += ', '.join( map(lambda k, v: str(k) + '=' + str(v), ins, args) )
            s += ', output=' + repr(output)
            print s + ')\n'
        if output == 'dict':
            if self.name:
                prefix = self.name + '.'
            else:
                prefix = ''
            return { prefix + k: v(*args) for k, v in self.lambdas.iteritems() }
        else:
            # output tuple in order of .outs
            return tuple( self.lambdas[k](*args) for k in self.outs )
Let's test the conversion of the rewritten Python 2.7 code
!2to3 -w process.py
Success!
from process import *

for s in (parseExpr, Block):
    help(s)
Interesting Links / References Python and Jupyter Notebooks SymPy and LaTeX SciPy and matplotlib Python and Jupyter Notebooks Markdown syntax Notebook reveal-based slideshow tutorial A brief tour of the IPython notebook: Same presentation, just later on 2to3 - Automated Python 2 to 3 code translation But do I really want to use Python 3? Yes, it's been around since 2008 LaTeX documentation Python Exceptions SymPy and LaTeX Scientific Python Lectures: Symbolic algebra Live SymPy console Wolfram Alpha-style search engine powered by SymPy SymPy Tutorial SymPy Modules SymPy solvers module: Algebraic/Differential/Recurrence/Diophantine equations, Utilities, Systems of Polynomial Equations Equations, Inequalities SymPy Simplification Functions: Function/Trigonometric/Powers/Exponentials/logarithms Simplification, Special Functions SymPy Basic Operations: Substitution, string inputs, evalf, lambdify SymPy Basics of Expressions SymPy Expression Trees and Manipulations Embed LaTeX math equations into Microsoft Word Type math formulas in LaTeX way in Microsoft Word? pyvideo.org videos for SymPy SciPy and matplotlib SciPy Signal Processing module: Convolution, Filters, Linear Systems, Function Generators, Windowing, Wavelets, Peak finding, Spectral Analysis SciPy Special Function module: Elliptic, Bessel, Gamma, Error, Legendre, Hypergeometric, Mathieu, Wave, and other functions SciPy Cookbook: BPSK: A simple BPSK example with AWGN (no coding) Matplotlib Gallery: Example plots Any Questions? 
This presentation courtesy of Python:

jupyter nbconvert --to slides Process.ipynb --post serve
[NbConvertApp] Converting notebook Process.ipynb to slides
[NbConvertApp] Writing 341651 bytes to Process.slides.html
[NbConvertApp] Redirecting reveal.js requests to https://cdn.jsdelivr.net/reveal.js/2.6.2
Serving your slides at http://127.0.0.1:8000/Process.slides.html
Use Control-C to stop this server
WARNING:tornado.access:404 GET /custom.css (127.0.0.1) 4.00ms
WARNING:tornado.access:404 GET /custom.css (127.0.0.1) 3.00ms
%%file slides.bat
jupyter nbconvert --to slides Process.ipynb --post serve
First, we'll load the dataset from scikit-learn. The Iris Dataset contains 3 classes, one for each iris species (Iris setosa, Iris virginica, and Iris versicolor). It has 50 samples per class and 150 samples in total, making it a very balanced dataset. Each sample is characterized by four features (or dimensions): sepal length, sepal width, petal length, and petal width. Load the iris dataset
from sklearn.datasets import load_iris

data = load_iris()

# Store the features as X and the labels as y
X = data.data
y = data.target
docs/examples/usecases/train_neural_network.ipynb
ljvmiranda921/pyswarms
mit
Constructing a custom objective function Recall that neural networks can simply be seen as a mapping function from one space to another. For now, we'll build a simple neural network with the following characteristics: * Input layer size: 4 * Hidden layer size: 20 (activation: $\tanh(x)$) * Output layer size: 3 (activation: $softmax(x)$) Things we'll do: 1. Create a forward_prop method that will do forward propagation for one particle. 2. Create an overhead objective function f() that will compute forward_prop() for the whole swarm. What we'll be doing then is to create a swarm with a number of dimensions equal to the number of weights and biases. We will unroll these parameters into an n-dimensional array, and have each particle take on different values. Thus, each particle represents a candidate neural network with its own weights and biases. When feeding back to the network, we will reconstruct the learned weights and biases. When rolling the parameters back into weights and biases, it is useful to recall the weight and bias matrix shapes: * Shape of input-to-hidden weight matrix: (4, 20) * Shape of input-to-hidden bias array: (20, ) * Shape of hidden-to-output weight matrix: (20, 3) * Shape of hidden-to-output bias array: (3, ) By unrolling them together, we have $(4 * 20) + (20 * 3) + 20 + 3 = 163$ parameters, or 163 dimensions for each particle in the swarm. The negative log-likelihood will be used to compute the error between the ground-truth values and the predictions. Also, because PSO doesn't rely on gradients, we won't be performing backpropagation (this may be a good thing or a bad thing under some circumstances). Now, let's write the forward propagation procedure as our objective function. Let $X$ be the input, $z_l$ the pre-activation at layer $l$, and $a_l$ the activation for layer $l$: Neural network architecture
n_inputs = 4
n_hidden = 20
n_classes = 3
num_samples = 150

def logits_function(p):
    """Roll back the weights and biases and calculate the logits

    Inputs
    ------
    p: np.ndarray
        The dimensions should include an unrolled version of the weights and biases.

    Returns
    -------
    numpy.ndarray of logits for layer 2
    """
    # Roll back the weights and biases
    W1 = p[0:80].reshape((n_inputs, n_hidden))
    b1 = p[80:100].reshape((n_hidden,))
    W2 = p[100:160].reshape((n_hidden, n_classes))
    b2 = p[160:163].reshape((n_classes,))

    # Perform forward propagation
    z1 = X.dot(W1) + b1       # Pre-activation in Layer 1
    a1 = np.tanh(z1)          # Activation in Layer 1
    logits = a1.dot(W2) + b2  # Pre-activation in Layer 2
    return logits             # Logits for Layer 2

# Forward propagation
def forward_prop(params):
    """Forward propagation as objective function

    This computes the forward propagation of the neural network,
    as well as the loss.

    Inputs
    ------
    params: np.ndarray
        The dimensions should include an unrolled version of the weights and biases.

    Returns
    -------
    float
        The computed negative log-likelihood loss given the parameters
    """
    logits = logits_function(params)

    # Compute the softmax of the logits
    exp_scores = np.exp(logits)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Compute the negative log likelihood
    correct_logprobs = -np.log(probs[range(num_samples), y])
    loss = np.sum(correct_logprobs) / num_samples
    return loss
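As a quick sanity check on the hard-coded slice boundaries above (a sketch using the same layer sizes, not part of the original notebook), the offsets line up with the 163-dimension count derived earlier:

```python
n_inputs, n_hidden, n_classes = 4, 20, 3

# Slice boundaries for the unrolled parameter vector
w1_end = n_inputs * n_hidden             # end of input-to-hidden weights
b1_end = w1_end + n_hidden               # end of hidden biases
w2_end = b1_end + n_hidden * n_classes   # end of hidden-to-output weights
total = w2_end + n_classes               # total dimensions per particle

print(w1_end, b1_end, w2_end, total)  # matches the 80/100/160/163 slices
```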
Now that we have a method to do forward propagation for one particle (or for one set of dimensions), we can then create a higher-level method to apply forward_prop() to the whole swarm:
def f(x):
    """Higher-level method to do forward_prop in the whole swarm.

    Inputs
    ------
    x: numpy.ndarray of shape (n_particles, dimensions)
        The swarm that will perform the search

    Returns
    -------
    numpy.ndarray of shape (n_particles, )
        The computed loss for each particle
    """
    n_particles = x.shape[0]
    j = [forward_prop(x[i]) for i in range(n_particles)]
    return np.array(j)
Performing PSO on the custom function Now that everything has been set up, we just call our global-best PSO and run the optimizer as usual. For now, we'll just set the PSO parameters arbitrarily.
%%time
# Initialize swarm
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}

# Call instance of PSO
dimensions = (n_inputs * n_hidden) + (n_hidden * n_classes) + n_hidden + n_classes
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options)

# Perform optimization
cost, pos = optimizer.optimize(f, iters=1000)
Checking the accuracy We can then check the accuracy by performing forward propagation once again to create a set of predictions. Then it's only a simple matter of matching which one's correct or not. For the logits, we take the argmax. Recall that the softmax function returns probabilities where the whole vector sums to 1. We just take the one with the highest probability then treat it as the network's prediction. Moreover, we let the best position vector found by the swarm be the weight and bias parameters of the network.
def predict(pos):
    """Use the trained weights to perform class predictions.

    Inputs
    ------
    pos: numpy.ndarray
        Position matrix found by the swarm. Will be rolled into weights and biases.
    """
    logits = logits_function(pos)
    y_pred = np.argmax(logits, axis=1)
    return y_pred
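The reason predict can skip the softmax entirely is that softmax is monotonic, so the largest logit always maps to the largest probability. A minimal stdlib sketch (toy logits, not from the trained network) illustrating that:

```python
import math

logits = [2.0, 1.0, 0.1]  # toy logits for one sample

# Softmax: exponentiate and normalize so the vector sums to 1
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# argmax over probabilities equals argmax over raw logits
pred_from_probs = max(range(len(probs)), key=probs.__getitem__)
pred_from_logits = max(range(len(logits)), key=logits.__getitem__)
print(pred_from_probs, pred_from_logits)
```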
And from this we can compute the accuracy: we perform predictions, compare them for equality with the ground-truth values y, and take the mean.
(predict(pos) == y).mean()
In order to account for initial out-of-straightness and partial yielding, notional lateral loads equal to 0.005 times the factored gravity loads contributed by each level are added to each level (CSA S16-09 8.4.1). At node H that will be $45 \times (10+10.5+10) \times 0.005 = 6.9\ kN$ and at node G it is $55 \times (10+10.5+10) \times 0.005 = 8.4\ kN$. These notional loads will be added to the forces already there.
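The notional-load arithmetic above can be checked directly (a sketch; the 45 and 55 multipliers and the three per-level contributions are taken from the text):

```python
# Notional lateral load = 0.005 x sum of factored gravity loads (CSA S16-09 8.4.1)
levels = [10.0, 10.5, 10.0]  # per-level contributions from the text

N_H = 45 * sum(levels) * 0.005  # node H, ~6.9 kN
N_G = 55 * sum(levels) * 0.005  # node G, ~8.4 kN
print(N_H, N_G)
```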
frame = f2d.Frame2D()
frame.read_data('KG82')  # read the CSV files in directory 'KG82.d'
%matplotlib inline
frame.plot()
frame.doall()
frame.saveStructuralData(frame.dsname)
Devel/Old/frame2d-v03/example-KG82.ipynb
nholtz/structural-analysis
cc0-1.0
The above are the results of a first-order analysis and should be compared with those shown in the following figure from Kulak & Grondin: Compare book values (end bending moments)
import pandas as pd

BM = [('AB', 44.2, -57.5),  # values given on figure, above
      ('BC', -232., -236.),
      ('DE', 181., 227.),
      ('EF', 287., 330.),
      ('BE', 290., -515.),
      ('CF', 236., -330.)]
BOOK = pd.DataFrame({m: {'MZJ': a, 'MZK': b} for m, a, b in BM}).T
BOOK

R = frame.get_mefs()  # get our member end forces for the same members
#R[['MZJ']] - BOOK[['MZJ']]
R = R.ix[BOOK.index]
R
% Difference in End Moments
m = R[['MZJ','MZK']]*1E-6
(100*(m - BOOK[['MZJ','MZK']])/m).round(2)
Max. difference is 5.5%, which I think is a little large.
frame.get_reactions()[['FY']]*1E-3
The reactions agree very closely.

$P-\Delta$ Analysis
frame.doall(pdelta=True,showinput=False)
The above are the results of a second-order ($P-\Delta$) analysis and should be compared with the following figure from Kulak & Grondin:
import pandas as pd

BM = [('AB', 64.0, -39.2),  # values given on figure, above
      ('BC', -236., -237.),
      ('DE', 207., 244.),
      ('EF', 301., 347.),
      ('BE', 276., -544.),
      ('CF', 237., -347.)]
BOOK = pd.DataFrame({m: {'MZJ': a, 'MZK': b} for m, a, b in BM}).T
BOOK

R = frame.get_mefs()  # get our member end forces for the same members
#R[['MZJ']] - BOOK[['MZJ']]
R = R.ix[BOOK.index]
R
Request the open events from the Meetup.com API.
r = requests.get("https://api.meetup.com/2/open_events",
                 params={'topic': TOPIC, 'key': API_KEY})
r.raise_for_status()
df = pd.DataFrame(r.json()['results'])
notebooks/document.ipynb
ibm-et/defrag2015
mit
Convert the times since epoch in ms to datetime objects, accounting for timezone offset. Hereafter, the times will be local to the meetup venue.
df['localtime'] = pd.to_datetime(df.time+df.utc_offset, unit='ms')
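The same arithmetic can be sketched with the standard library (hypothetical sample values in the shape the Meetup API returns, both in milliseconds): add the UTC offset, then interpret the result as a naive "local" timestamp.

```python
from datetime import datetime, timezone

time_ms = 1_446_400_800_000        # hypothetical event time, ms since epoch (UTC)
utc_offset_ms = -6 * 3600 * 1000   # hypothetical UTC-6 venue offset, in ms

# Shift by the offset, then strip the timezone: a naive venue-local datetime,
# mirroring pd.to_datetime(df.time + df.utc_offset, unit='ms')
local = datetime.fromtimestamp((time_ms + utc_offset_ms) / 1000,
                               tz=timezone.utc).replace(tzinfo=None)
print(local.isoformat())
```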
Create a human readable description of the location down to the city level, if possible.
def text_location(venue):
    '''
    Return city, state, country, omitting any piece that isn't available.
    '''
    loc = []
    if pd.isnull(venue):
        return ''
    if 'city' in venue:
        loc.append(venue['city'])
    if 'state' in venue:
        loc.append(venue['state'])
    if 'country' in venue:
        loc.append(venue['country'].upper())
    return ', '.join(loc)

df['location'] = df.venue.apply(text_location)
Turn the event name into a link to its page on meetup.com.
df['link_name'] = df.apply(lambda row: '<a href="{row[event_url]}" target="_blank">{row[name]}</a>'.format(row=row), axis=1)
Use the HTML output feature instead of static markup so that the topic name appears.
HTML('<h2>Table of Upcoming <em>{}</em> Meetups</h2>'.format(TOPIC))
HTML(df[['link_name', 'localtime', 'location', 'yes_rsvp_count']].to_html(escape=False))
HTML('<h2>Map of Upcoming <em>{}</em> Meetups</h2>'.format(TOPIC))

def map_marker(row):
    '''
    Returns a dictionary with the lat/long location of an event venue
    as well as a popup containing a link to its meetup.com page.
    Filters events with no valid lat/long location.
    '''
    if pd.isnull(row['venue']):
        return None
    lat = row['venue'].get('lat', 0.)
    lon = row['venue'].get('lon', 0.)
    if lat == 0 or lon == 0:
        return None
    return dict(
        location=[lat, lon],
        popup=row['link_name']
    )

m = folium.Map(location=[45, -40], zoom_start=2)
for i, row in df.iterrows():
    marker = map_marker(row)
    if marker:
        m.simple_marker(**marker)
m._build_map()
Idea: We might want to show the venues of RSVPs in realtime on a map along with the locations of our meetups.
HTML('<iframe srcdoc="{srcdoc}" style="width: 100%; height: 510px; border: none"></iframe>'.format(srcdoc=m.HTML.replace('"', '&quot;')))
Filling the Swear Jar

A tale of three languages

Alec Reiter (@justanr)

Brainfuck

- Urban Mueller, 1993
- Turing ~~Complete~~ Tarpit
- 8 commands
- Tape, Tape Pointer, Instruction Pointer

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

Why the...

- Rust! (but we'll get to that)
- Different
- Oddly fun

The 8 Commands

| Command | Meaning                            |
|---------|------------------------------------|
| +       | Incr Cell                          |
| -       | Decr Cell                          |
| >       | Move Right                         |
| <       | Move Left                          |
| [       | Conditional Jump (if cell is 0)    |
| ]       | Conditional Jump (if cell isn't 0) |
| .       | Output Cell                        |
| ,       | Read into Cell                     |

Common Constructs

- [-] set current cell to 0
- [->+<] add current cell to another
- [->++<] multiplication
- [<] find last zero cell

Ambiguities

- "Infinite tape" -- reference impl uses 30,000 cells
- How big are cells? -- u8 or u32? or signed?
- So, implementations...

Turns out I have no idea what I'm doing

Python to the rescue

First Attempt
def run(prog: str, stdin: str="") -> StringIO:
    stdout = StringIO()
    memory = [0] * 30_000
    memptr = 0
    instrptr = 0
    progsize = len(prog)
    # stores the location of the last [ s we encountered
    brackets = []

    while instrptr < progsize:
        op = prog[instrptr]
        instrptr += 1
        if op == '+':
            memory[memptr] += 1
        elif op == '-':
            memory[memptr] -= 1
        # and so on
        else:
            # not a BF command
            pass

    stdout.seek(0)
    return stdout
fillingtheswearjar.ipynb
justanr/notebooks
mit
Pros

- Very simple
- Jumping back is easy

Cons

- Very naive
- Jumping forward isn't easy
- Incorrect programs not detected

Parsing
class BFToken(Enum):
    Incr = '+'
    Decr = '-'
    MoveL = '<'
    MoveR = '>'
    StdIn = ','
    StdOut = '.'
    JumpF = '['
    JumpB = ']'

partners = {
    BFToken.Incr: BFToken.Decr,
    BFToken.Decr: BFToken.Incr,
    BFToken.MoveL: BFToken.MoveR,
    BFToken.MoveR: BFToken.MoveL
}

def _parse(prog: str) -> Iterator[BFToken]:
    for char in prog:
        try:
            yield BFToken(char)
        except ValueError:
            pass

def parse(prog: str) -> List[BFToken]:
    return list(_parse(prog))

parse('++a+--')
Optimizing

- Jump table
- Combine like tokens
def collapse(prog: List[BFToken]) -> List[BFToken]:
    program = []
    for token in prog:
        ...  # uh wait a second
Missing Something
class IRToken(NamedTuple):
    token: BFToken
    amount: int

def collapse(prog: List[BFToken]) -> List[IRToken]:
    program: List[IRToken] = []
    for token in prog:
        if len(program) == 0 or token not in partners:
            program.append(IRToken(token, 1))
            continue
        previous = program.pop()
        if previous.token == token:
            new_token = previous._replace(amount=previous.amount+1)
            if new_token.amount != 0:
                program.append(new_token)
        elif previous.token == partners[token]:
            new_token = previous._replace(amount=previous.amount-1)
            if new_token.amount != 0:
                program.append(new_token)
        else:
            program.append(previous)
            program.append(IRToken(token, 1))
    return program

def build_jump_table(prog: List[IRToken]):
    brackets = []
    for idx, token in enumerate(prog, 0):
        if token.token == BFToken.JumpF:
            brackets.append(idx)
        elif token.token == BFToken.JumpB:
            try:
                partner = brackets.pop()
            except IndexError:
                raise BFError(f"Unmatched bracket at: {idx}") from None
            else:
                prog[idx] = prog[idx]._replace(amount=partner)
                prog[partner] = prog[partner]._replace(amount=idx)
    if brackets:
        raise BFError(f"Unmatched brackets at: {', '.join([str(x) for x in brackets])}")

tokens = collapse(parse('++[->++++++++<]'))
build_jump_table(tokens)
tokens

def run(prog: List[IRToken], stdin: str="") -> StringIO:
    stdout = StringIO()
    stdiniter = iter(stdin)
    getc = lambda: ord(next(stdiniter, '\0'))
    putc = lambda: stdout.write(chr(memory[memptr]))
    memory = [0] * 30_000
    memptr = 0
    instrptr = 0
    proglength = len(prog)

    while instrptr < proglength:
        op = prog[instrptr]
        if op.token == BFToken.StdOut:
            putc()
        elif op.token == BFToken.StdIn:
            memory[memptr] = getc()
        elif op.token == BFToken.Incr:
            memory[memptr] += op.amount
        elif op.token == BFToken.Decr:
            memory[memptr] -= op.amount
        elif op.token == BFToken.MoveL:
            memptr = (memptr - op.amount) % 30_000
        elif op.token == BFToken.MoveR:
            memptr = (memptr + op.amount) % 30_000
        elif op.token == BFToken.JumpF:
            if memory[memptr] == 0:
                instrptr = op.amount
        elif op.token == BFToken.JumpB:
            if memory[memptr] != 0:
                instrptr = op.amount
        instrptr += 1

    stdout.seek(0)
    return stdout

def bf(source: str, stdin: str="") -> StringIO:
    prog = collapse(parse(source))
    build_jump_table(prog)
    return run(prog, stdin)

%%time
print(bf("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.").read())

triangle = """ > + + + + [ < + + + + + + + + > - ] > + + + + + + + + [ > + + + + < - ] > > + + > > > + > > > + < < < < < < < < < < [ - [ - > + < ] > [ - < + > > > . < < ] > > > [ [ - > + + + + + + + + [ > + + + + < - ] > . < < [ - > + < ] + > [ - > + + + + + + + + + + < < + > ] > . [ - ] > ] ] + < < < [ - [ - > + < ] + > [ - < + > > > - [ - > + < ] + + > [ - < - > ] < < < ] < < < < ] + + + + + + + + + + . + + + . [ - ] < ] + + + + + * * * * * M a d e * B y : * N Y Y R I K K I * 2 0 0 2 * * * * * """

%%time
result = bf(triangle)
print(result.read())

ZtoA = """>++[<+++++++++++++>-]<[[>+>+<<-]>[<+>-]++++++++
[>++++++++<-]>.[-]<<>++++++++++[>++++++++++[>++
++++++++[>++++++++++[>++++++++++[>++++++++++[>+
+++++++++[-]<-]<-]<-]<-]<-]<-]<-]++++++++++."""

%%time
print(bf(ZtoA).read())
fillingtheswearjar.ipynb
justanr/notebooks
mit
Where are we spending time?

[ I-1 ] 26_000_000
[ M1 I10 [ I-1 ] M-1 I-1 ] -> 2_600_000
[ M1 I10 [ M1 I10 [ I-1 ] M-1 I-1 ] M-1 I-1 ] -> 260_000
[ M1 I10 [ M1 I10 [ M1 I10 [ I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] -> 26_000
[ M1 I10 [ M1 I10 [ M1 I10 [ M1 I10 [ I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] -> 2_600
[ M1 I10 [ M1 I10 [ M1 I10 [ M1 I10 [ M1 I10 [ I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] M-1 I-1 ] -> 260

Idea: Transform `[-]` into `memory[memptr] = 0`
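The transform can be sketched in isolation. This toy version works on raw characters rather than the interpreter's token types, and the `ZERO` marker is purely illustrative:

```python
# Toy clear-loop peephole: whenever the last three tokens form
# "[-]", replace them with a single ZERO marker standing in for
# "set the current cell to 0". Names here are illustrative, not
# part of the interpreter above.
def peephole_clear(tokens):
    out = []
    for t in tokens:
        out.append(t)
        if out[-3:] == ['[', '-', ']']:
            out[-3:] = ['ZERO']
    return out

print(peephole_clear(list('+++[-]>')))  # ['+', '+', '+', 'ZERO', '>']
```

The same scan-and-rewrite shape carries over directly to a token stream of `BFToken` values.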
def handle_clear(tokens: List[BFToken]) -> List[BFToken]:
    program: List[BFToken] = []
    clear = [BFToken.JumpF, BFToken.Decr, BFToken.JumpB]
    for token in tokens:
        program.append(token)
        if len(program) < 3:
            continue
        last_three = program[-3:]
        if last_three == clear:
            program[-3:] = [BFToken.ZeroOut]
    return program
fillingtheswearjar.ipynb
justanr/notebooks
mit
38min 34s

Python isn't known for being fast. Cython, numba, etc. can help but...

Rust 🎺🎺🎺 insert hype here

But seriously

- Opt-in mutability
- Algebraic Data Types
- Functional + Imperative
- High level but fast

Representation

```rust
enum BrainFuckToken {
    Move(isize),
    JumpF(usize),
    JumpB(usize),
    Incr(i32),
    StdIn,
    StdOut,
    ZeroOut,
}
```

Parsing

```rust
impl BrainFuckToken {
    pub fn from_char(c: char) -> Option<BrainFuckToken> {
        match c {
            '+' => Some(BrainFuckToken::Incr(1)),
            '-' => Some(BrainFuckToken::Incr(-1)),
            '>' => Some(BrainFuckToken::Move(1)),
            '<' => Some(BrainFuckToken::Move(-1)),
            '.' => Some(BrainFuckToken::StdOut),
            ',' => Some(BrainFuckToken::StdIn),
            '[' => Some(BrainFuckToken::JumpF(0)),
            ']' => Some(BrainFuckToken::JumpB(0)),
            _ => None,
        }
    }
}
```

Jumps

```rust
fn build_jumps(tokens: &mut Vec<BrainFuckToken>) {
    let mut brackets = Vec::new();
    for idx in 0..tokens.len() {
        match tokens[idx] {
            BrainFuckToken::JumpF(_) => brackets.push(idx),
            BrainFuckToken::JumpB(_) => {
                let partner = brackets
                    .pop()
                    .unwrap_or_else(|| panic!("unmatched bracket at {}", idx));
                mem::replace(&mut tokens[idx], BrainFuckToken::JumpB(partner));
                mem::replace(&mut tokens[partner], BrainFuckToken::JumpF(idx));
            }
            _ => {}
        }
    }
    if brackets.len() != 0 {
        panic!("Unmatched brackets at: {:?}", brackets);
    }
}
```

Run loop

```rust
while let Some(instr) = self.ops.get(self.loc) {
    match *instr {
        BrainFuckToken::JumpF(x) => {
            if self.tape.get() == 0 {
                self.loc = x;
            } else {
                self.tracer.trace((self.loc, x));
            }
        }
        BrainFuckToken::JumpB(x) => {
            if self.tape.get() != 0 {
                self.loc = x;
            }
        }
        BrainFuckToken::Move(x) => self.tape.move_(x),
        BrainFuckToken::Incr(x) => self.tape.incr(x),
        BrainFuckToken::StdIn => self.tape.putc(input_iter.next().unwrap_or('\0')),
        BrainFuckToken::StdOut => out.push(self.tape.getc()),
        BrainFuckToken::ZeroOut => self.tape.put(0),
    }
    self.loc += 1;
}
```

But how fast?
%%bash
time ./bf triangle.bf > /dev/null

%%bash
time ./bf ZtoA.bf > /dev/null

%%bash
time ./bf mandel.bf > /dev/null
fillingtheswearjar.ipynb
justanr/notebooks
mit
We can clear the output by either using IPython.display.clear_output within the context manager, or we can call the widget's clear_output method directly.
out.clear_output()
docs/source/examples/Output Widget.ipynb
jupyter-widgets/ipywidgets
bsd-3-clause
Interacting with output widgets from background threads

Jupyter's display mechanism can be counter-intuitive when displaying output produced by background threads. A background thread's output is printed to whatever cell the main thread is currently writing to. To see this directly, create a thread that repeatedly prints to standard out:

```python
import itertools
import threading
import time

def run():
    for i in itertools.count(0):
        time.sleep(1)
        print('output from background {}'.format(i))

t = threading.Thread(target=run)
t.start()
```

This always prints in the currently active cell, not the cell that started the background thread. This can lead to surprising behavior in output widgets. During the time in which output is captured by the output widget, any output generated in the notebook, regardless of thread, will go into the output widget.

The best way to avoid surprises is to never use an output widget's context manager in a context where multiple threads generate output. Instead, we can pass an output widget to the function executing in a thread, and use the append_display_data(), append_stdout(), or append_stderr() methods to append displayable output to the output widget.
import threading
from IPython.display import display, HTML
import ipywidgets as widgets
import time

def thread_func(something, out):
    for i in range(1, 5):
        time.sleep(0.3)
        out.append_stdout('{} {} {}\n'.format(i, '**'*i, something))
    out.append_display_data(HTML("<em>All done!</em>"))

display('Display in main thread')
out = widgets.Output()
# Now the key: the container is displayed (while empty) in the main thread
display(out)

thread = threading.Thread(
    target=thread_func,
    args=("some text", out))
thread.start()
thread.join()
docs/source/examples/Output Widget.ipynb
jupyter-widgets/ipywidgets
bsd-3-clause
Vertex client library: Local text binary classification model for online prediction

<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
</a>
</td>
</table>
<br/><br/><br/>

Overview

This tutorial demonstrates how to use the Vertex client library for Python to deploy a locally trained custom text binary classification model for online prediction.

Dataset

The dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts whether a review is positive or negative in sentiment.

Objective

In this notebook, you create a custom model locally in the notebook, then learn to deploy the locally trained model to Vertex, and then do a prediction on the deployed model. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.

The steps performed include:

- Create a model locally.
- Train the model locally.
- View the model evaluation.
- Upload the model as a Vertex Model resource.
- Deploy the Model resource to a serving Endpoint resource.
- Make a prediction.
- Undeploy the Model resource.

Costs

This tutorial uses billable components of Google Cloud (GCP):

- Vertex AI
- Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.
Installation

Install the latest version of the Vertex client library.
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Tutorial

Now you are ready to start training a custom IMDB Movie Reviews model locally, and then deploy the model to the cloud.

Set up clients

The Vertex client library works as a client/server model. On your side (the Python script) you create a client that sends requests to and receives responses from the Vertex server.

You use several different clients in this tutorial, one per step in the workflow, so set them all up upfront.

- Model Service for Model resources.
- Endpoint Service for deployment.
- Prediction Service for serving.
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client

def create_endpoint_client():
    client = aip.EndpointServiceClient(client_options=client_options)
    return client

def create_prediction_client():
    client = aip.PredictionServiceClient(client_options=client_options)
    return client

clients = {}
clients["model"] = create_model_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()

for client in clients.items():
    print(client)
notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Train a model locally

In this tutorial, you train an IMDB Movie Reviews model locally.

Set location to store trained model

You set the variable MODEL_DIR to the location in your Cloud Storage bucket where the model will be saved in TensorFlow SavedModel format. You also create a local folder for the training script.
MODEL_DIR = BUCKET_NAME + "/imdb"
model_path_to_deploy = MODEL_DIR

! rm -rf custom
! mkdir custom
! mkdir custom/trainer
notebooks/community/gapic/custom/showcase_local_text_binary_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
In general, all weights are trainable weights. The only built-in layer that has non-trainable weights is the BatchNormalization layer. It uses non-trainable weights to keep track of the mean and variance of its inputs during training. To learn how to use non-trainable weights in your own custom layers, see the guide to writing new layers from scratch.

Example: the BatchNormalization layer has 2 trainable weights and 2 non-trainable weights
layer = keras.layers.BatchNormalization()
layer.build((None, 4))  # Create the weights

print("weights:", len(layer.weights))
print("trainable_weights:", len(layer.trainable_weights))
print("non_trainable_weights:", len(layer.non_trainable_weights))
tensorflow_learning/tf2/notebooks/.ipynb_checkpoints/transfer_learning-中文-checkpoint-checkpoint.ipynb
jeffzhengye/pylearn
unlicense
Find a genome and download the annotations

You need to find your genome in PATRIC and download the annotations.

Once you have identified the genome you would like to build the model for, choose Feature Table from the menu bar:

<img src="img/patric_ft.png">

Next, choose Download and save as a text file (.txt).

<img src="img/patric_dl.png">

That will save a file called FeatureTable.txt to your Downloads location. That file has the following columns:

| Genome | Genome ID | Accession | PATRIC ID | RefSeq Locus Tag | Alt Locus Tag | Feature ID |
| Annotation | Feature Type | Start | End | Length | Strand | FIGfam ID |
| PATRIC genus-specific families (PLfams) | PATRIC cross-genus families (PGfams) | Protein ID | AA Length | Gene Symbol | Product | GO |

The key columns are PATRIC ID (Column 3) and Product (Column 19) [Column numbers are 0 based!]

Now that we know that, we need to convert these feature names into functional roles. The key here is to split on adjoiners, such as ' / ', ' # ', and ' @ '.
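As a rough illustration of that splitting (the real parsing lives in PyFBA.parse.roles_of_function, whose exact rules may differ; the helper name and separator handling here are assumptions):

```python
import re

# Hypothetical sketch: split a PATRIC Product string into functional
# roles on the adjoiners ' / ' and ' @ ', and drop anything after a
# trailing ' # ' comment. PyFBA's real roles_of_function may handle
# more cases than this.
def split_roles(product):
    product = re.split(r'\s+#\s+', product)[0]   # strip trailing comment
    parts = re.split(r'\s+[/@]\s+', product)     # split multi-role joins
    return {p.strip() for p in parts if p.strip()}

print(sorted(split_roles("Alcohol dehydrogenase / Acetaldehyde dehydrogenase # in operon")))
```

Each resulting role can then be looked up independently when mapping annotations to reactions.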
assigned_functions = {}
with open(os.path.join('workspace/Citrobacter_sedlakii_genome_features.txt'), 'r') as f:
    for l in f:
        p = l.strip().split("\t")
        assigned_functions[p[3]] = PyFBA.parse.roles_of_function(p[19])

roles = set([i[0] for i in [list(j) for j in assigned_functions.values()]])
print("There are {} unique roles in this genome".format(len(roles)))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Next, we convert those roles to reactions. This gives us a dict mapping each role to its reactions, but we only need the unique reactions, so we combine the values into a single set.
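The idea in miniature (the role and reaction names here are illustrative, not the real PyFBA data):

```python
# Collapse a {role: set-of-reactions} mapping into one set of unique
# reaction IDs by unioning the values.
role_reactions = {"roleA": {"rxn1", "rxn2"}, "roleB": {"rxn2", "rxn3"}}

unique_reactions = set()
for rxns in role_reactions.values():
    unique_reactions.update(rxns)

print(sorted(unique_reactions))  # ['rxn1', 'rxn2', 'rxn3']
```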
roles_to_reactions = PyFBA.filters.roles_to_reactions(roles, organism_type="Gram_Negative", verbose=False)
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
If you toggle verbose=True, you will see that there are a lot of roles that we skip, even though we have an EC number for them: for whatever reason, the annotation is not quite right. We can check for those too, because our model seed parsed data has EC numbers with reactions.
# ecr2r = PyFBA.filters.roles_to_ec_reactions(roles, organism_type="Gram_Negative", verbose=False)
ecr2r = set()
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
We combine roles_to_reactions and ecr2r and figure out what the unique set of reactions is for our genome.
roles_to_reactions.update(ecr2r)
reactions_to_run = set()
for role in roles_to_reactions:
    reactions_to_run.update(roles_to_reactions[role])
print("There are {} unique reactions associated with this genome".format(len(reactions_to_run)))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Read all the reactions and compounds in our database

We read all the reactions, compounds, and enzymes in the ModelSEEDDatabase into three data structures. Note, the first time you call this it is a bit slow as it has to parse the files, but if we've parsed them once, we don't need to do it again!

We modify the reactions specifically for Gram negative models (there are also options for Gram positive models, Mycobacterial models, general microbial models, and plant models).
compounds, reactions, enzymes = \
    PyFBA.parse.model_seed.compounds_reactions_enzymes('gramnegative')
print(f"There are {len(compounds):,} compounds, {len(reactions):,} reactions, and {len(enzymes):,} enzymes in total")

for r in reactions:
    for c in reactions[r].all_compounds():
        if c.uptake_secretion:
            print(f"US: {c}")
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Update reactions to run, making sure that all reactions are in the list!

There are some reactions that come from functional roles that do not appear in the reactions list. We're working on tracking these down, but for now we just check that all reaction IDs in reactions_to_run are in reactions, too.
tempset = set()
for r in reactions_to_run:
    if r in reactions:
        tempset.add(r)
    else:
        sys.stderr.write("Reaction ID {} is not in our reactions list. Skipped\n".format(r))
reactions_to_run = tempset
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Test whether these reactions grow on ArgonneLB media

We can test whether this set of reactions grows on ArgonneLB media. The media is the same one we used above, and you can download the ArgonneLB.txt text file and put it in the same directory as this iPython notebook to run it. (Note: we don't need to convert the media components, because the media and compounds come from the same source.)
media = PyFBA.parse.read_media_file("/home/redwards/test_media/ArgonneLB.txt")
print("Our media has {} components".format(len(media)))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Define a biomass equation

The biomass equation is the part that says whether the model will grow! This is a metabolism.reaction.Reaction object.
biomass_equation = PyFBA.metabolism.biomass_equation()
biomass_equation.equation

with open('rbad.txt', 'w') as out:
    for r in reactions_to_run:
        out.write(f"{r}\n")
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Run the FBA

With the reactions, compounds, reactions_to_run, media, and biomass model, we can test whether the model grows on this media.
print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))

print(f"There are {len(reactions_to_run)} reactions to run")

upsr = 0
for r in reactions_to_run:
    if r.startswith('upsr'):
        upsr += 1
print(f"There are {upsr} uptake secretion reactions in reactions_to_run")

upsr = 0
for r in reactions:
    if r.startswith('upsr'):
        upsr += 1
print(f"There are {upsr} uptake secretion reactions in reactions")
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Will gap filling work?

These are the reactions from the C. sedlakii SBML file, and so if we add these, we should get growth!
sbml_addnl = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn03061', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn02160', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 
'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05958', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 'rxn10094', 'rxn02286', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 'rxn04045', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn05202', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 'rxn00364', 'rxn03514', 'rxn04048', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn00171', 'rxn01000', 'rxn00675', 'rxn00175', 
'rxn00986', 'rxn03932', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn03136', 'rxn00800', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn09180', 'rxn00670', 'rxn00173', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn01403', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn00212', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00006', 'rxn00834', 'rxn05293', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn05937', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00791', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'} r2r_plussbml = 
copy.copy(reactions_to_run)
print(f"Before adding sbml reactions there were {len(r2r_plussbml)}")
r2r_plussbml.update(sbml_addnl)
print(f"After adding sbml reactions there were {len(r2r_plussbml)}")

print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml, media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))

print(f"Before adding upsr reactions there were {len(r2r_plussbml)} reactions")
for r in reactions:
    if r.startswith('upsr'):
        r2r_plussbml.update({r})
print(f"After adding upsr reactions there were {len(r2r_plussbml)} reactions")

print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml, media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))

# seems like we need EX_cpd00034
upsr = 0
for r in reactions_to_run:
    if r.startswith('EX'):
        upsr += 1
print(f"There are {upsr} EX reactions in reactions_to_run")

upsr = 0
for r in reactions:
    if r.startswith('EX'):
        upsr += 1
print(f"There are {upsr} EX reactions in reactions")

biomass_equation = PyFBA.metabolism.biomass_equation('standard')
biomass_equation.equation

print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, r2r_plussbml, media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))

uptake_secretion_reactions

all_compounds = compounds
# Filter for compounds that are boundary compounds
filtered_compounds = set()
for c in all_compounds:
    if not compounds[c].uptake_secretion:
        filtered_compounds.add(c)
print(f"There are {len(all_compounds)} total compounds and {len(filtered_compounds)} filtered compounds")

without_ex = set()
with open('rwex.txt', 'r') as fin:
    for l in fin:
        l = l.strip()
        without_ex.add(l)
without_ex

print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, without_ex, media, biomass_equation, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))

len(without_ex)
len(reactions_to_run)
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
It is the biomass model that is the problem

Let's take the biomass model from the SBML and see if this works.
sbml_equation = '(0.00778132482043096) cpd00063: Ca2 (location: c) + (0.352889948968272) cpd00156: L_Valine (location: e) + (0.00778132482043096) cpd00030: Mn2 (location: e) + (0.00778132482043096) cpd00205: K (location: c) + (0.428732289454499) cpd00035: L_Alanine (location: e) + (0.128039715997337) cpd00060: L_Methionine (location: e) + (0.15480760087483) cpd00066: L_Phenylalanine (location: c) + (0.00778132482043096) cpd00017: S_Adenosyl_L_methionine (location: c) + (0.00778132482043096) cpd00010: CoA (location: c) + (0.0609084652443221) cpd15665: Peptidoglycan_polymer_n_subunits (location: c) + (0.0841036156544863) cpd00052: CTP (location: c) + (0.00778132482043096) cpd10516: fe3 (location: e) + (0.01468498342018) cpd00357: TTP (location: c) + (0.00778132482043096) cpd00099: Cl_ (location: e) + (0.01468498342018) cpd00356: dCTP (location: c) + (0.00778132482043096) cpd10515: Fe2 (location: e) + (0.00778132482043096) cpd00254: Mg (location: c) + (0.242249358141304) cpd00322: L_Isoleucine (location: e) + (0.00778132482043096) cpd00058: Cu2 (location: e) + (0.00778132482043096) cpd00149: Co2 (location: c) + (0.201205267995816) cpd00041: L_Aspartate (location: e) + (1) cpd17043: RNA_transcription (location: c) + (0.219496655995436) cpd00023: L_Glutamate (location: e) + (0.219496655995436) cpd00053: L_Glutamine (location: e) + (0.376088782528765) cpd00107: L_Leucine (location: e) + (0.00778132482043096) cpd00220: Riboflavin (location: e) + (0.179790960093822) cpd00054: L_Serine (location: e) + (0.0472899299502361) cpd00065: L_Tryptophan (location: e) + (0.0609084652443221) cpd02229: Bactoprenyl_diphosphate (location: c) + (0.00778132482043096) cpd11493: ACP (location: c) + (1) cpd17041: Protein_biosynthesis (location: c) + (0.184698405654696) cpd00129: L_Proline (location: e) + (0.135406821203723) cpd00038: GTP (location: c) + (0.01468498342018) cpd00241: dGTP (location: c) + (1) cpd17042: DNA_replication (location: c) + (0.211466290532188) cpd00161: L_Threonine 
(location: e) + (40.1101757365074) cpd00002: ATP (location: c) + (0.00778132482043096) cpd00016: Pyridoxal_phosphate (location: c) + (0.00778132482043096) cpd00048: Sulfate (location: e) + (0.00778132482043096) cpd00003: NAD (location: c) + (0.01468498342018) cpd00115: dATP (location: c) + (0.115101904973216) cpd00069: L_Tyrosine (location: e) + (0.00778132482043096) cpd00015: FAD (location: c) + (0.201205267995816) cpd00132: L_Asparagine (location: e) + (0.00778132482043096) cpd00006: NADP (location: c) + (35.5386858537513) cpd00001: H2O (location: e) + (0.0762884719008526) cpd00084: L_Cysteine (location: c) + (0.0794113918032267) cpd00119: L_Histidine (location: e) + (0.285970236774541) cpd00039: L_Lysine (location: e) + (0.0908319049068452) cpd00062: UTP (location: c) + (0.00778132482043096) cpd00034: Zn2 (location: e) + (0.247156803702178) cpd00051: L_Arginine (location: e) + (0.510820469745475) cpd00033: Glycine (location: e) > (40) cpd00008: ADP (location: c) + (39.9922186751796) cpd00009: Phosphate (location: e) + (0.00778132482043096) cpd12370: apo_ACP (location: c) + (1) cpd11416: Biomass (location: c) + (40) cpd00067: H (location: e) + (0.0609084652443221) cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c) + (0.405833094852252) cpd00012: PPi (location: e)' sbml_left_compounds = {'cpd00066: L_Phenylalanine (location: c)' : 0.15480760087483, 'cpd00016: Pyridoxal_phosphate (location: c)' : 0.00778132482043096, 'cpd00132: L_Asparagine (location: e)' : 0.201205267995816, 'cpd00156: L_Valine (location: e)' : 0.352889948968272, 'cpd00099: Cl_ (location: e)' : 0.00778132482043096, 'cpd00038: GTP (location: c)' : 0.135406821203723, 'cpd00003: NAD (location: c)' : 0.00778132482043096, 'cpd17041: Protein_biosynthesis (location: c)' : 1.0, 'cpd00033: Glycine (location: e)' : 0.510820469745475, 'cpd00322: L_Isoleucine (location: e)' : 0.242249358141304, 'cpd00254: Mg (location: c)' : 0.00778132482043096, 'cpd17043: RNA_transcription (location: c)' : 1.0, 
'cpd00048: Sulfate (location: e)' : 0.00778132482043096, 'cpd10515: Fe2 (location: e)' : 0.00778132482043096, 'cpd02229: Bactoprenyl_diphosphate (location: c)' : 0.0609084652443221, 'cpd11493: ACP (location: c)' : 0.00778132482043096, 'cpd00161: L_Threonine (location: e)' : 0.211466290532188, 'cpd00006: NADP (location: c)' : 0.00778132482043096, 'cpd00060: L_Methionine (location: e)' : 0.128039715997337, 'cpd00119: L_Histidine (location: e)' : 0.0794113918032267, 'cpd00052: CTP (location: c)' : 0.0841036156544863, 'cpd00051: L_Arginine (location: e)' : 0.247156803702178, 'cpd15665: Peptidoglycan_polymer_n_subunits (location: c)' : 0.0609084652443221, 'cpd00017: S_Adenosyl_L_methionine (location: c)' : 0.00778132482043096, 'cpd00030: Mn2 (location: e)' : 0.00778132482043096, 'cpd10516: fe3 (location: e)' : 0.00778132482043096, 'cpd00065: L_Tryptophan (location: e)' : 0.0472899299502361, 'cpd00084: L_Cysteine (location: c)' : 0.0762884719008526, 'cpd00023: L_Glutamate (location: e)' : 0.219496655995436, 'cpd17042: DNA_replication (location: c)' : 1.0, 'cpd00356: dCTP (location: c)' : 0.01468498342018, 'cpd00035: L_Alanine (location: e)' : 0.428732289454499, 'cpd00069: L_Tyrosine (location: e)' : 0.115101904973216, 'cpd00220: Riboflavin (location: e)' : 0.00778132482043096, 'cpd00129: L_Proline (location: e)' : 0.184698405654696, 'cpd00357: TTP (location: c)' : 0.01468498342018, 'cpd00205: K (location: c)' : 0.00778132482043096, 'cpd00149: Co2 (location: c)' : 0.00778132482043096, 'cpd00063: Ca2 (location: c)' : 0.00778132482043096, 'cpd00054: L_Serine (location: e)' : 0.179790960093822, 'cpd00001: H2O (location: e)' : 35.5386858537513, 'cpd00010: CoA (location: c)' : 0.00778132482043096, 'cpd00015: FAD (location: c)' : 0.00778132482043096, 'cpd00062: UTP (location: c)' : 0.0908319049068452, 'cpd00107: L_Leucine (location: e)' : 0.376088782528765, 'cpd00241: dGTP (location: c)' : 0.01468498342018, 'cpd00053: L_Glutamine (location: e)' : 0.219496655995436, 'cpd00039: 
L_Lysine (location: e)' : 0.285970236774541,
    'cpd00034: Zn2 (location: e)' : 0.00778132482043096,
    'cpd00058: Cu2 (location: e)' : 0.00778132482043096,
    'cpd00002: ATP (location: c)' : 40.1101757365074,
    'cpd00041: L_Aspartate (location: e)' : 0.201205267995816,
    'cpd00115: dATP (location: c)' : 0.01468498342018}

sbml_right_compounds = {'cpd00067: H (location: e)' : 40.0,
    'cpd00012: PPi (location: e)' : 0.405833094852252,
    'cpd00008: ADP (location: c)' : 40.0,
    'cpd11416: Biomass (location: c)' : 1.0,
    'cpd12370: apo_ACP (location: c)' : 0.00778132482043096,
    'cpd00009: Phosphate (location: e)' : 39.9922186751796,
    'cpd15666: Peptidoglycan_polymer_n_1_subunits (location: c)' : 0.0609084652443221}

sbml_biomass = PyFBA.metabolism.Reaction('sbml_biomass', 'sbml_biomass')
sbml_biomass.equation = sbml_equation

parsecomp = re.compile('^(cpd\\d+): (.*?) \(location: (.)\)')

for c in sbml_left_compounds:
    m = parsecomp.match(c)
    if not m:
        sys.stderr.write(f"Can't parse {c}\n")
    if m.group(1) in compounds:
        if False and compounds[m.group(1)] != m.group(2):
            sys.stderr.write(f"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\n")
        newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))
        sbml_biomass.add_left_compounds({newcomp})
        sbml_biomass.set_left_compound_abundance(newcomp, sbml_left_compounds[c])
    else:
        print(f"{m.group(1)} not found")

for c in sbml_right_compounds:
    m = parsecomp.match(c)
    if not m:
        sys.stderr.write(f"Can't parse {c}\n")
    if m.group(1) in compounds:
        if True and compounds[m.group(1)] != m.group(2):
            sys.stderr.write(f"We had |{compounds[m.group(1)]}| for {m.group(1)} in the SBML, but now have |{m.group(2)}|\n")
        newcomp = PyFBA.metabolism.CompoundWithLocation.from_compound(compounds[m.group(1)], m.group(3))
        sbml_biomass.add_right_compounds({newcomp})
        sbml_biomass.set_right_compound_abundance(newcomp, sbml_right_compounds[c])
    else:
        print(f"{m.group(1)} not found")

print(f"Before running FBA there are {len(reactions)} reactions")
status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, sbml_biomass, verbose=True)
print(f"After running FBA there are {len(reactions)} reactions")
print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Add the missing reactions
all_reactions = {'rxn00868', 'rxn01923', 'rxn02268', 'rxn10215', 'rxn10219', 'rxn08089', 'rxn10212', 'rxn08083', 'rxn10214', 'rxn10211', 'rxn10218', 'rxn08086', 'rxn10217', 'rxn08087', 'rxn08088', 'rxn08085', 'rxn10216', 'rxn08084', 'rxn10213', 'rxn05572', 'rxn05565', 'rxn00541', 'rxn10155', 'rxn10157', 'rxn05536', 'rxn05544', 'rxn12848', 'rxn12851', 'rxn05539', 'rxn05541', 'rxn05537', 'rxn05543', 'rxn12849', 'rxn05533', 'rxn05540', 'rxn05534', 'rxn05547', 'rxn05546', 'rxn05542', 'rxn05535', 'rxn12850', 'rxn05545', 'rxn05538', 'rxn05168', 'rxn05179', 'rxn05161', 'rxn09313', 'rxn08354', 'rxn08356', 'rxn09315', 'rxn05549', 'rxn05160', 'rxn05644', 'rxn05330', 'rxn05335', 'rxn05334', 'rxn05329', 'rxn05333', 'rxn05332', 'rxn05331', 'rxn05415', 'rxn05381', 'rxn05386', 'rxn05427', 'rxn05431', 'rxn05373', 'rxn05377', 'rxn05398', 'rxn05419', 'rxn05402', 'rxn05369', 'rxn05361', 'rxn05394', 'rxn05406', 'rxn05365', 'rxn05390', 'rxn05423', 'rxn05462', 'rxn05411', 'rxn03492', 'rxn04050', 'rxn08258', 'rxn04713', 'rxn00990', 'rxn00875', 'rxn08471', 'rxn05737', 'rxn08467', 'rxn10067', 'rxn08468', 'rxn08469', 'rxn08470', 'rxn01302', 'rxn01301', 'rxn05422', 'rxn05372', 'rxn05341', 'rxn05376', 'rxn05342', 'rxn05337', 'rxn05385', 'rxn05397', 'rxn05340', 'rxn05461', 'rxn05368', 'rxn05418', 'rxn05393', 'rxn05336', 'rxn05426', 'rxn05364', 'rxn05430', 'rxn05410', 'rxn05339', 'rxn05401', 'rxn05338', 'rxn05360', 'rxn05414', 'rxn05405', 'rxn05389', 'rxn05380', 'rxn03164', 'rxn05229', 'rxn07586', 'rxn05054', 'rxn04384', 'rxn00503', 'rxn00183', 'rxn05187', 'rxn05515', 'rxn02056', 'rxn09134', 'rxn09125', 'rxn09157', 'rxn09128', 'rxn09142', 'rxn09161', 'rxn09147', 'rxn09164', 'rxn09152', 'rxn09124', 'rxn09131', 'rxn09133', 'rxn09138', 'rxn09143', 'rxn09153', 'rxn09160', 'rxn09158', 'rxn09148', 'rxn09144', 'rxn09150', 'rxn09130', 'rxn09149', 'rxn09163', 'rxn09159', 'rxn09132', 'rxn09127', 'rxn09140', 'rxn09145', 'rxn09137', 'rxn09154', 'rxn09151', 'rxn09146', 'rxn09123', 'rxn09139', 'rxn09126', 
'rxn09141', 'rxn09135', 'rxn09136', 'rxn09155', 'rxn09162', 'rxn09129', 'rxn09156', 'rxn02949', 'rxn03241', 'rxn03245', 'rxn02911', 'rxn02167', 'rxn03250', 'rxn02934', 'rxn03240', 'rxn03247', 'rxn05316', 'rxn09687', 'rxn05198', 'rxn09688', 'rxn05199', 'rxn05200', 'rxn09685', 'rxn05318', 'rxn05205', 'rxn05621', 'rxn05656', 'rxn05585', 'rxn05172', 'rxn05594', 'rxn05552', 'rxn05599', 'rxn05512', 'rxn05620', 'rxn01277', 'rxn05518', 'rxn05145', 'rxn05460', 'rxn05396', 'rxn05363', 'rxn05359', 'rxn05367', 'rxn05417', 'rxn05421', 'rxn05392', 'rxn05413', 'rxn05349', 'rxn05388', 'rxn05429', 'rxn05371', 'rxn05400', 'rxn05425', 'rxn05409', 'rxn05404', 'rxn05375', 'rxn05379', 'rxn05384', 'rxn04139', 'rxn00640', 'rxn05507', 'rxn05506', 'rxn01893', 'rxn00671', 'rxn00501', 'rxn10340', 'rxn10334', 'rxn10337', 'rxn10338', 'rxn10341', 'rxn10335', 'rxn10342', 'rxn10339', 'rxn10336', 'rxn00160', 'rxn01285', 'rxn04143', 'rxn01847', 'rxn01103', 'rxn00227', 'rxn05175', 'rxn05163', 'rxn05683', 'rxn05484', 'rxn02933', 'rxn04750', 'rxn03244', 'rxn01451', 'rxn03239', 'rxn03246', 'rxn03242', 'rxn03249', 'rxn06777', 'rxn05500', 'rxn01637', 'rxn01122', 'rxn04602', 'rxn02416', 'rxn04601', 'rxn04928', 'rxn05596', 'rxn02762', 'rxn02521', 'rxn02522', 'rxn03483', 'rxn02775', 'rxn04046', 'rxn07589', 'rxn03491', 'rxn10117', 'rxn10119', 'rxn08333', 'rxn04673', 'rxn10308', 'rxn10311', 'rxn10315', 'rxn10309', 'rxn10307', 'rxn10312', 'rxn10310', 'rxn10314', 'rxn08040', 'rxn10313', 'rxn12147', 'rxn03931', 'rxn03916', 'rxn04674', 'rxn03397', 'rxn10094', 'rxn02286', 'rxn02474', 'rxn00555', 'rxn08709', 'rxn04052', 'rxn03512', 'rxn12224', 'rxn09188', 'rxn02359', 'rxn02008', 'rxn08179', 'rxn08178', 'rxn03643', 'rxn09177', 'rxn12512', 'rxn07587', 'rxn02507', 'rxn08291', 'rxn06865', 'rxn00303', 'rxn00222', 'rxn09978', 'rxn09979', 'rxn07588', 'rxn04413', 'rxn03537', 'rxn03536', 'rxn03919', 'rxn03435', 'rxn02187', 'rxn02186', 'rxn03436', 'rxn03068', 'rxn05317', 'rxn01219', 'rxn00364', 'rxn03514', 'rxn04048', 
'rxn00544', 'rxn02792', 'rxn00350', 'rxn02791', 'rxn05221', 'rxn00675', 'rxn00175', 'rxn00986', 'rxn01507', 'rxn02400', 'rxn01670', 'rxn00363', 'rxn00708', 'rxn01218', 'rxn01521', 'rxn01445', 'rxn00913', 'rxn01145', 'rxn00132', 'rxn01961', 'rxn00831', 'rxn08712', 'rxn04113', 'rxn04996', 'rxn08756', 'rxn08352', 'rxn06023', 'rxn02449', 'rxn05165', 'rxn05181', 'rxn08194', 'rxn01093', 'rxn09180', 'rxn03644', 'rxn08619', 'rxn09289', 'rxn00776', 'rxn01360', 'rxn08335', 'rxn08336', 'rxn12500', 'rxn02287', 'rxn02774', 'rxn09167', 'rxn08708', 'rxn05156', 'rxn05151', 'rxn01629', 'rxn12146', 'rxn01123', 'rxn05147', 'rxn05173', 'rxn08707', 'rxn00927', 'rxn01299', 'rxn01226', 'rxn01545', 'rxn02476', 'rxn02011', 'rxn05201', 'rxn01895', 'rxn04604', 'rxn00830', 'rxn00179', 'rxn03991', 'rxn03990', 'rxn03975', 'rxn03974', 'rxn00818', 'rxn03838', 'rxn00817', 'rxn02596', 'rxn05555', 'rxn00056', 'rxn06979', 'rxn11544', 'rxn03918', 'rxn05559', 'rxn08345', 'rxn00509', 'rxn00205', 'rxn00006', 'rxn02473', 'rxn00834', 'rxn05293', 'rxn00105', 'rxn00634', 'rxn08618', 'rxn06848', 'rxn09997', 'rxn05938', 'rxn04783', 'rxn05206', 'rxn00102', 'rxn01644', 'rxn02938', 'rxn00792', 'rxn08711', 'rxn03513', 'rxn04047', 'rxn01265', 'rxn01404', 'rxn03394', 'rxn00777', 'rxn01106', 'rxn07492', 'rxn03538', 'rxn01480', 'rxn00119', 'rxn01517', 'rxn01966', 'rxn01132', 'rxn05162', 'rxn02277', 'rxn08257', 'rxn05197', 'rxn01352', 'rxn03540', 'rxn00789', 'rxn00508', 'rxn04386', 'rxn10481', 'rxn05528', 'rxn06077', 'rxn01671', 'rxn02929', 'rxn03917', 'rxn03135', 'rxn00469', 'rxn00756', 'rxn03087', 'rxn01329', 'rxn01917', 'rxn01879', 'rxn01538', 'rxn02285', 'rxn08710', 'rxn07438', 'rxn02321', 'rxn00787', 'rxn01289', 'rxn00851', 'rxn05297', 'rxn00062', 'rxn04132', 'rxn04133', 'rxn05319', 'rxn05467', 'rxn05468', 'rxn02374', 'rxn03012', 'rxn05064', 'rxn02666', 'rxn04457', 'rxn04456', 'rxn01664', 'rxn02916', 'rxn05667', 'rxn10571', 'rxn05195', 'rxn05645', 'rxn05144', 'rxn02988', 'rxn01256', 'rxn12604', 'rxn05039', 
'rxn10904', 'rxn05499', 'rxn01152', 'rxn05691', 'rxn12893', 'rxn11116', 'rxn00880', 'rxn05593', 'rxn05469', 'rxn00186', 'rxn05694', 'rxn05491', 'rxn05682', 'rxn01748', 'rxn00327', 'rxn01746', 'rxn09656'} print(f"Before updating there are {len(reactions_to_run)} reactions") r2ra = copy.copy(reactions_to_run) r2ra.update(all_reactions) print(f"After updating there are {len(r2ra)} reactions") print(f"Before running FBA there are {len(reactions)} reactions") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, sbml_biomass, verbose=True) print(f"After running FBA there are {len(reactions)} reactions") print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth)) new_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions, reactions_to_run, media, verbose=False) print(f"There are {len(new_reactions)} new reactions to add") transrct = set() for r in new_reactions: if reactions[r].is_transport: transrct.add(r) print(f"There are {len(transrct)} new transport reactions") reactions_to_run.update(transrct) print(f"Before running FBA there are {len(reactions)} reactions") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation) print(f"After running FBA there are {len(reactions)} reactions") print("Initial run has a biomass flux value of {} --> Growth: {}".format(value, growth)) print(f"There are {len(reactions_to_run)} reactions to run")
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Media import reactions We need to make sure that the cell can import everything that is in the media... otherwise it won't be able to grow. Be sure to only do this step if you are certain that the cell can grow on the media you are testing.
update_type = 'media' new_reactions = PyFBA.gapfill.suggest_from_media(compounds, reactions, reactions_to_run, media, verbose=True) added_reactions.append((update_type, new_reactions)) print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") reactions_to_run.update(new_reactions) print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") for r in reactions: if reactions[r].is_transport: print(r) for r in reactions: for c in reactions[r].left_compounds: if c.location == 'e': if not reactions[r].is_transport: print(f"Check {r}") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation) print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Essential reactions There are ~100 reactions that are in every model we have tested, and we construe these to be essential for all models, so we typically add these next!
update_type = 'essential' new_reactions = PyFBA.gapfill.suggest_essential_reactions() added_reactions.append((update_type, new_reactions)) print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") reactions_to_run.update(new_reactions) print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation) print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Subsystems The reactions connect us to subsystems (see Overbeek et al. 2014), and this test ensures that all the subsystems are complete. We add the reactions required to complete each subsystem.
update_type = 'subsystems' new_reactions = \ PyFBA.gapfill.suggest_reactions_from_subsystems(reactions, reactions_to_run, threshold=0.5) added_reactions.append((update_type, new_reactions)) print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") reactions_to_run.update(new_reactions) print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation) print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth)) pre_orphan=copy.copy(reactions_to_run) pre_o_added=copy.copy(added_reactions) print("Pre orphan has {} reactions".format(len(pre_orphan)))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Orphan compounds Orphan compounds are those compounds which are only associated with one reaction. They are either only produced or only consumed, and we need to add reaction(s) that complete the network for those compounds. You can change the maximum number of reactions that a compound is in to be considered an orphan (try increasing it to 2 or 3).
update_type = 'orphan compounds' new_reactions = PyFBA.gapfill.suggest_by_compound(compounds, reactions, reactions_to_run, max_reactions=1) added_reactions.append((update_type, new_reactions)) print(f"Before adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") reactions_to_run.update(new_reactions) print(f"After adding {update_type} reactions, we had {len(reactions_to_run)} reactions.") status, value, growth = PyFBA.fba.run_fba(compounds, reactions, reactions_to_run, media, biomass_equation) print("Run has a biomass flux value of {} --> Growth: {}".format(value, growth))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
Trimming the model Now that the model has been shown to grow on ArgonneLB media after several gap-fill iterations, we should trim the model down to only those reactions necessary to observe growth.
reqd_additional = set() # Begin loop through all gap-filled reactions while added_reactions: ori = copy.copy(original_reactions_to_run) ori.update(reqd_additional) # Test next set of gap-filled reactions # Each set is based on a method described above how, new = added_reactions.pop() sys.stderr.write("Testing reactions from {}\n".format(how)) # Get all the other gap-filled reactions we need to add for tple in added_reactions: ori.update(tple[1]) # Use minimization function to determine the minimal # set of gap-filled reactions from the current method new_essential = PyFBA.gapfill.minimize_additional_reactions(ori, new, compounds, reactions, media, biomass_equation) sys.stderr.write("Saved {} reactions from {}\n".format(len(new_essential), how)) for r in new_essential: sys.stderr.write(r + "\n") # Record the method used to determine # how the reaction was gap-filled for new_r in new_essential: reactions[new_r].is_gapfilled = True reactions[new_r].gapfill_method = how reqd_additional.update(new_essential) # Combine old and new reactions all_reactions = original_reactions_to_run.union(reqd_additional) status, value, growth = PyFBA.fba.run_fba(compounds, reactions, all_reactions, media, biomass_equation) print("The biomass reaction has a flux of {} --> Growth: {}".format(value, growth))
iPythonNotebooks/PATRIC to FBA.ipynb
linsalrob/PyFBA
mit
3. Reading a CSV file and doing common Pandas operations
regiones_file='data/chile_regiones.csv' provincias_file='data/chile_provincias.csv' comunas_file='data/chile_comunas.csv' regiones=pd.read_csv(regiones_file, header=0, sep=',') provincias=pd.read_csv(provincias_file, header=0, sep=',') comunas=pd.read_csv(comunas_file, header=0, sep=',') print('regiones table: ', regiones.columns.values.tolist()) print('provincias table: ', provincias.columns.values.tolist()) print('comunas table: ', comunas.columns.values.tolist()) regiones.head() provincias.head() comunas.head() regiones_provincias=pd.merge(regiones, provincias, how='outer') regiones_provincias.head() provincias_comunas=pd.merge(provincias, comunas, how='outer') provincias_comunas.head() regiones_provincias_comunas=pd.merge(regiones_provincias, comunas, how='outer') regiones_provincias_comunas.index.name='ID' regiones_provincias_comunas.head() #regiones_provincias_comunas.to_csv('chile_regiones_provincia_comuna.csv', index=False)
clase_1/02 - Lectura de datos con Pandas.ipynb
rpmunoz/topicos_ingenieria_1
gpl-3.0
4. Loading the full dataset
data_file = 'data/chile_demographic.csv'
data = pd.read_csv(data_file, header=0, sep=',')
data

data.sort_values('Poblacion')
data.sort_values('Poblacion', ascending=False)

# select the columns with a list; indexing a groupby with a bare tuple is deprecated
data.groupby(['Region'])[['Poblacion', 'Superficie']].sum()
data.groupby(['Region'])[['Poblacion', 'Superficie']].sum().sort_values('Poblacion', ascending=False)
data.sort_values(['RegionID']).groupby(['RegionID', 'Region'])[['Poblacion', 'Superficie']].sum()
clase_1/02 - Lectura de datos con Pandas.ipynb
rpmunoz/topicos_ingenieria_1
gpl-3.0
pandas will let us read the data, scikit-learn is the machine learning library, and matplotlib will let us visualize our model and data. Read the Data
import pandas as pd

# read the brain/body weight data from a fixed-width text file
dataframe = pd.read_fwf('brain_body.txt')
x_values = dataframe[['Brain']]
y_values = dataframe[['Body']]
01.neural_network/01.first_neural_net-linear_regression_1/01.linear_regression_1.ipynb
hadibakalim/deepLearning
mit
Train model on the Data
from sklearn import linear_model

body_reg = linear_model.LinearRegression()
body_reg.fit(x_values, y_values)
01.neural_network/01.first_neural_net-linear_regression_1/01.linear_regression_1.ipynb
hadibakalim/deepLearning
mit
Visualize results
import matplotlib.pyplot as plt

plt.scatter(x_values, y_values)
plt.plot(x_values, body_reg.predict(x_values))
plt.show()
01.neural_network/01.first_neural_net-linear_regression_1/01.linear_regression_1.ipynb
hadibakalim/deepLearning
mit
&larr; Back to Index Autocorrelation The autocorrelation of a signal describes the similarity of a signal against a time-shifted version of itself. For a signal $x$, the autocorrelation $r$ is: $$ r(k) = \sum_n x(n) x(n-k) $$ In this equation, $k$ is often called the lag parameter. $r(k)$ is maximized at $k = 0$ and is symmetric about $k = 0$. The autocorrelation is useful for finding repeated patterns in a signal. For example, at short lags, the autocorrelation can tell us something about the signal's fundamental frequency. For longer lags, the autocorrelation may tell us something about the tempo of a musical signal. Let's load a file:
x, sr = librosa.load('audio/c_strum.wav') ipd.Audio(x, rate=sr) plt.figure(figsize=(14, 5)) librosa.display.waveplot(x, sr)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
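Before reaching for library routines, the definition above can be checked with a tiny direct implementation (a sketch for illustration only; `autocorr` is a hypothetical helper, not part of librosa):

```python
import numpy as np

# Direct implementation of r(k) = sum_n x(n) x(n-k) for non-negative lags.
# Illustrative only; numpy.correlate and librosa.autocorrelate (used in the
# following cells) are the practical choices.
def autocorr(x, max_lag):
    x = np.asarray(x, dtype=float)
    return np.array([np.dot(x[k:], x[:len(x) - k]) for k in range(max_lag)])

x = np.array([1.0, 2.0, 3.0, 4.0])
r = autocorr(x, 4)
print(r)  # r(0)=30, r(1)=20, r(2)=11, r(3)=4 -- the maximum is at zero lag
```

This agrees with `numpy.correlate(x, x, mode='full')[len(x)-1:]`, the form used in the next cell.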
numpy.correlate There are two ways we can compute the autocorrelation in Python. The first method is numpy.correlate:
# Because the autocorrelation produces a symmetric signal, we only care about the "right half". r = numpy.correlate(x, x, mode='full')[len(x)-1:] print(x.shape, r.shape)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Plot the autocorrelation:
plt.figure(figsize=(14, 5)) plt.plot(r[:10000]) plt.xlabel('Lag (samples)') plt.xlim(0, 10000)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
librosa.autocorrelate The second method is librosa.autocorrelate:
r = librosa.autocorrelate(x, max_size=10000) print(r.shape) plt.figure(figsize=(14, 5)) plt.plot(r) plt.xlabel('Lag (samples)') plt.xlim(0, 10000)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
librosa.autocorrelate conveniently only keeps one half of the autocorrelation function, since the autocorrelation is symmetric. Also, the max_size parameter prevents unnecessary calculations. Pitch Estimation The autocorrelation is used to find repeated patterns within a signal. For musical signals, a repeated pattern can correspond to a pitch period. We can therefore use the autocorrelation function to estimate the pitch in a musical signal.
x, sr = librosa.load('audio/oboe_c6.wav') ipd.Audio(x, rate=sr)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Compute and plot the autocorrelation:
r = librosa.autocorrelate(x, max_size=5000) plt.figure(figsize=(14, 5)) plt.plot(r[:200])
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
The autocorrelation always has a maximum at zero, i.e. zero lag. We want to identify the maximum outside of the peak centered at zero. Therefore, we might choose only to search within a range of reasonable pitches:
midi_hi = 120.0 midi_lo = 12.0 f_hi = librosa.midi_to_hz(midi_hi) f_lo = librosa.midi_to_hz(midi_lo) t_lo = sr/f_hi t_hi = sr/f_lo print(f_lo, f_hi) print(t_lo, t_hi)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Set invalid pitch candidates to zero:
r[:int(t_lo)] = 0 r[int(t_hi):] = 0 plt.figure(figsize=(14, 5)) plt.plot(r[:1400])
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Find the location of the maximum:
t_max = r.argmax() print(t_max)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Finally, estimate the pitch in Hertz:
float(sr)/t_max
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
Indeed, that is very close to the true frequency of C6:
librosa.midi_to_hz(84)
autocorrelation.ipynb
stevetjoa/stanford-mir
mit
cost <img style="float: left;" src="../img/linear_cost.png">
theta = np.ones(X.shape[1]) lr.cost(theta, X, y)
ex5-bias vs variance/2- regularization of linear regression.ipynb
icrtiou/coursera-ML
mit
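The image above shows the unregularized cost. As a sketch of what `lr.cost` presumably computes (assuming the usual convention $J(\theta) = \frac{1}{2m}\sum_i (X\theta - y)_i^2$; the helper below is illustrative and not the notebook's `lr` module):

```python
import numpy as np

# Unregularized linear-regression cost, J(theta) = (1/2m) * sum((X @ theta - y)^2).
# Illustrative stand-in for lr.cost; the notebook's own module is not shown here.
def cost(theta, X, y):
    m = X.shape[0]
    residual = X @ theta - y
    return residual @ residual / (2 * m)

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # first column is the intercept
y = np.array([1.0, 2.0, 3.0])
print(cost(np.zeros(2), X, y))           # (1 + 4 + 9) / 6 = 2.333...
print(cost(np.array([0.0, 1.0]), X, y))  # a perfect fit gives 0.0
```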
regularized cost <img style="float: left;" src="../img/linear_reg_cost.png">
lr.regularized_cost(theta, X, y)
ex5-bias vs variance/2- regularization of linear regression.ipynb
icrtiou/coursera-ML
mit
gradient <img style="float: left;" src="../img/linear_gradient.png">
lr.gradient(theta, X, y)
ex5-bias vs variance/2- regularization of linear regression.ipynb
icrtiou/coursera-ML
mit
regularized gradient <img style="float: left;" src="../img/linear_reg_gradient.png">
lr.regularized_gradient(theta, X, y)
ex5-bias vs variance/2- regularization of linear regression.ipynb
icrtiou/coursera-ML
mit
fit the data regularization term $\lambda=0$
theta = np.ones(X.shape[1])  # one initial parameter per feature (X includes the intercept column)
final_theta = lr.linear_regression_np(X, y, l=0).get('x')

b = final_theta[0]  # intercept
m = final_theta[1]  # slope

plt.scatter(X[:, 1], y, label="Training data")
plt.plot(X[:, 1], X[:, 1] * m + b, label="Prediction")
plt.legend(loc=2)
ex5-bias vs variance/2- regularization of linear regression.ipynb
icrtiou/coursera-ML
mit
The global collection of tide gauge records at the PSMSL is used to access the data. The other way to access the data is to ask the data service desk at Rijkswaterstaat. There are two types of datasets, the "Revised Local Reference" and "Metric". For the Netherlands the difference is that the "Revised Local Reference" undoes the NAP correction of 2014, to get a consistent dataset.
urls = { 'metric_monthly': 'http://www.psmsl.org/data/obtaining/met.monthly.data/met_monthly.zip', 'rlr_monthly': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_monthly.zip', 'rlr_annual': 'http://www.psmsl.org/data/obtaining/rlr.annual.data/rlr_annual.zip' } dataset_name = 'rlr_annual' # these compute the rlr back to NAP (ignoring the undoing of the NAP correction) main_stations = { 20: { 'name': 'Vlissingen', 'rlr2nap': lambda x: x - (6976-46) }, 22: { 'name': 'Hoek van Holland', 'rlr2nap': lambda x:x - (6994 - 121) }, 23: { 'name': 'Den Helder', 'rlr2nap': lambda x: x - (6988-42) }, 24: { 'name': 'Delfzijl', 'rlr2nap': lambda x: x - (6978-155) }, 25: { 'name': 'Harlingen', 'rlr2nap': lambda x: x - (7036-122) }, 32: { 'name': 'IJmuiden', 'rlr2nap': lambda x: x - (7033-83) } } # the main stations are defined by their ids main_stations_idx = list(main_stations.keys()) main_stations_idx # download the zipfile resp = requests.get(urls[dataset_name]) # we can read the zipfile stream = io.BytesIO(resp.content) zf = zipfile.ZipFile(stream) # this list contains a table of # station ID, latitude, longitude, station name, coastline code, station code, and quality flag csvtext = zf.read('{}/filelist.txt'.format(dataset_name)) stations = pandas.read_csv( io.BytesIO(csvtext), sep=';', names=('id', 'lat', 'lon', 'name', 'coastline_code', 'station_code', 'quality'), converters={ 'name': str.strip, 'quality': str.strip } ) stations = stations.set_index('id') # the dutch stations in the PSMSL database, make a copy # or use stations.coastline_code == 150 for all dutch stations selected_stations = stations.ix[main_stations_idx].copy() # set the main stations, this should be a list of 6 stations selected_stations # show all the stations on a map # compute the bounds of the plot sw = (50, -5) ne = (55, 10) # transform to web mercator sw_wm = pyproj.transform(WGS84, WEBMERCATOR, sw[1], sw[0]) ne_wm = pyproj.transform(WGS84, WEBMERCATOR, ne[1], ne[0]) # create a plot fig = 
bokeh.plotting.figure(tools='pan, wheel_zoom', plot_width=600, plot_height=200, x_range=(sw_wm[0], ne_wm[0]), y_range=(sw_wm[1], ne_wm[1])) fig.axis.visible = False # add some background tiles fig.add_tile(bokeh.tile_providers.STAMEN_TERRAIN) # add the stations x, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(stations.lon), np.array(stations.lat)) fig.circle(x, y) x, y = pyproj.transform(WGS84, WEBMERCATOR, np.array(selected_stations.lon), np.array(selected_stations.lat)) _ = fig.circle(x, y, color='red') # show the plot bokeh.io.show(fig)
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
Now that we have defined which tide gauges we are monitoring we can start downloading the relevant data.
# each station has a number of files that you can look at. # here we define a template for each filename # stations that we are using for our computation # define the name formats for the relevant files names = { 'datum': '{dataset}/RLR_info/{id}.txt', 'diagram': '{dataset}/RLR_info/{id}.png', 'url': 'http://www.psmsl.org/data/obtaining/rlr.diagrams/{id}.php', 'data': '{dataset}/data/{id}.rlrdata', 'doc': '{dataset}/docu/{id}.txt', 'contact': '{dataset}/docu/{id}_auth.txt' } def get_url(station, dataset): """return the url of the station information (diagram and datum)""" info = dict( dataset=dataset, id=station.name ) url = names['url'].format(**info) return url # fill in the dataset parameter using the global dataset_name f = functools.partial(get_url, dataset=dataset_name) # compute the url for each station selected_stations['url'] = selected_stations.apply(f, axis=1) selected_stations def missing2nan(value, missing=-99999): """convert the value to nan if the float of value equals the missing value""" value = float(value) if value == missing: return np.nan return value def get_data(station, dataset): """get data for the station (pandas record) from the dataset (url)""" info = dict( dataset=dataset, id=station.name ) bytes = zf.read(names['data'].format(**info)) df = pandas.read_csv( io.BytesIO(bytes), sep=';', names=('year', 'height', 'interpolated', 'flags'), converters={ "height": lambda x: main_stations[station.name]['rlr2nap'](missing2nan(x)), "interpolated": str.strip, } ) df['station'] = station.name return df # get data for all stations f = functools.partial(get_data, dataset=dataset_name) # look up the data for each station selected_stations['data'] = [f(station) for _, station in selected_stations.iterrows()] # we now have data for each station selected_stations[['name', 'data']]
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
Now that we have all data downloaded we can compute the mean.
# compute the mean grouped = pandas.concat(selected_stations['data'].tolist())[['year', 'height']].groupby('year') mean_df = grouped.mean().reset_index() # filter out non-trusted part (before NAP) mean_df = mean_df[mean_df['year'] >= 1890].copy() # these are the mean waterlevels mean_df.tail() # show all the stations, including the mean title = 'Sea-surface height for Dutch tide gauges [{year_min} - {year_max}]'.format( year_min=mean_df.year.min(), year_max=mean_df.year.max() ) fig = bokeh.plotting.figure(title=title, x_range=(1860, 2020), plot_width=900, plot_height=400) colors = bokeh.palettes.Accent6 for color, (id_, station) in zip(colors, selected_stations.iterrows()): data = station['data'] fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.5) fig.line(mean_df.year, mean_df.height, line_width=3, alpha=0.7, color='black', legend='Mean') fig.legend.location = "bottom_right" fig.yaxis.axis_label = 'waterlevel [mm] above NAP' fig.xaxis.axis_label = 'year' bokeh.io.show(fig)
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
Methods Now we can define the statistical model. The "current sea-level rise" is defined by the following formula. Please note that the selected epoch of 1970 is arbitrary. $ H(t) = a + b_{trend}(t-1970) + b_u\cos(2\pi\frac{t - 1970}{18.613}) + b_v\sin(2\pi\frac{t - 1970}{18.613}) $ The terms are referred to as Constant ($a$), Trend ($b_{trend}$), Nodal U ($b_u$) and Nodal V ($b_v$). Alternative models are used to detect if sea-level rise is increasing. These models include the broken linear model, defined by a possible change in trend starting at 1993. This timespan is the start of the "satellite era" (start of TOPEX/Poseidon measurements); it is also often referred to as the start of acceleration because the satellite measurements tend to show a higher rate of sea level than the "tide-gauge era" (1900-2000). If this model fits better than the linear model, one could say that there is an "increase in sea-level rise". $ H(t) = a + b_{trend}(t-1970) + b_{broken}(t > 1993)*(t-1993) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613}) $ Another way to look at increased sea-level rise is to look at sea-level acceleration. To detect sea-level acceleration one can use a quadratic model. $ H(t) = a + b_{trend}(t-1970) + b_{quadratic}(t - 1970)*(t-1970) + b_{u}\cos(2\pi\frac{t - 1970}{18.613}) + b_{v}\sin(2\pi\frac{t - 1970}{18.613}) $
# define the statistical model y = mean_df['height'] X = np.c_[ mean_df['year']-1970, np.cos(2*np.pi*(mean_df['year']-1970)/18.613), np.sin(2*np.pi*(mean_df['year']-1970)/18.613) ] X = sm.add_constant(X) model = sm.OLS(y, X) fit = model.fit() fit.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Nodal U', 'Nodal V']) # things to check: # Durbin Watson should be >1 for no worries, >2 for no autocorrelation # JB should be non-significant for normal residuals # abs(x2.t) + abs(x3.t) should be > 3, otherwise adding nodal is not useful fig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400) for color, (id_, station) in zip(colors, selected_stations.iterrows()): data = station['data'] fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8) fig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5) fig.line(mean_df.year, fit.predict(), line_width=3, legend='Current') fig.legend.location = "bottom_right" fig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.' fig.xaxis.axis_label = 'year' bokeh.io.show(fig)
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
Is there a sea-level acceleration? The following section computes two common models to detect sea-level acceleration. The broken linear model expects that sea level has been rising faster since 1993. The quadratic model assumes that the sea-level is accelerating continuously. Both models are compared to the linear model. The extra terms are tested for significance and the AIC is computed to see which model is "better".
# define the statistical model y = mean_df['height'] X = np.c_[ mean_df['year']-1970, (mean_df['year'] > 1993) * (mean_df['year'] - 1993), np.cos(2*np.pi*(mean_df['year']-1970)/18.613), np.sin(2*np.pi*(mean_df['year']-1970)/18.613) ] X = sm.add_constant(X) model_broken_linear = sm.OLS(y, X) fit_broken_linear = model_broken_linear.fit() # define the statistical model y = mean_df['height'] X = np.c_[ mean_df['year']-1970, (mean_df['year'] - 1970) * (mean_df['year'] - 1970), np.cos(2*np.pi*(mean_df['year']-1970)/18.613), np.sin(2*np.pi*(mean_df['year']-1970)/18.613) ] X = sm.add_constant(X) model_quadratic = sm.OLS(y, X) fit_quadratic = model_quadratic.fit() fit_broken_linear.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend(year > 1990)', 'Nodal U', 'Nodal V']) fit_quadratic.summary(yname='Sea-surface height', xname=['Constant', 'Trend', 'Trend**2', 'Nodal U', 'Nodal V']) fig = bokeh.plotting.figure(x_range=(1860, 2020), plot_width=900, plot_height=400) for color, (id_, station) in zip(colors, selected_stations.iterrows()): data = station['data'] fig.circle(data.year, data.height, color=color, legend=station['name'], alpha=0.8) fig.circle(mean_df.year, mean_df.height, line_width=3, legend='Mean', color='black', alpha=0.5) fig.line(mean_df.year, fit.predict(), line_width=3, legend='Current') fig.line(mean_df.year, fit_broken_linear.predict(), line_width=3, color='#33bb33', legend='Broken') fig.line(mean_df.year, fit_quadratic.predict(), line_width=3, color='#3333bb', legend='Quadratic') fig.legend.location = "top_left" fig.yaxis.axis_label = 'waterlevel [mm] above N.A.P.' fig.xaxis.axis_label = 'year' bokeh.io.show(fig)
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
Conclusions Below are some statements that depend on the output calculated above.
msg = '''The current average waterlevel above NAP,
based on the 6 main tide gauges for the year {year} is {height:.1f} cm.
The current sea-level rise is {rate:.0f} cm/century'''
print(msg.format(year=mean_df['year'].iloc[-1], height=fit.predict()[-1]/10.0, rate=fit.params.x1*100.0/10))

if (fit.aic < fit_broken_linear.aic):
    print('The linear model is a higher quality model (smaller AIC) than the broken linear model.')
else:
    print('The broken linear model is a higher quality model (smaller AIC) than the linear model.')
if (fit_broken_linear.pvalues['x2'] < 0.05):
    print('The trend break is bigger than we would have expected under the assumption that there was no trend break.')
else:
    print('Under the assumption that there is no trend break, we would have expected a trend break as big as we have seen.')
if (fit.aic < fit_quadratic.aic):
    print('The linear model is a higher quality model (smaller AIC) than the quadratic model.')
else:
    print('The quadratic model is a higher quality model (smaller AIC) than the linear model.')
if (fit_quadratic.pvalues['x2'] < 0.05):
    print('The quadratic term is bigger than we would have expected under the assumption that there was no quadraticness.')
else:
    print('Under the assumption that there is no quadraticness, we would have expected a quadratic term as big as we have seen.')
sealevelmonitor.ipynb
openearth/notebooks
gpl-3.0
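As an aside, the AIC comparison used above can be reproduced without statsmodels. The sketch below is illustrative only: `ols_aic` is a hypothetical helper (not part of the notebook), and the data are synthetic, not the tide-gauge series.

```python
import numpy as np

def ols_aic(X, y):
    """Fit y = X b by least squares and return the Gaussian AIC."""
    b, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    n, k = X.shape
    rss = float(resid @ resid)
    # AIC for a Gaussian likelihood: n*log(RSS/n) + 2*(k + 1)
    # (the +1 accounts for the estimated noise variance)
    return n * np.log(rss / n) + 2 * (k + 1)

# Synthetic data with a purely linear trend plus noise
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = 1.5 * t + rng.normal(0.0, 5.0, size=t.size)

X_lin = np.c_[np.ones_like(t), t]          # linear model
X_quad = np.c_[np.ones_like(t), t, t**2]   # quadratic model

aic_lin = ols_aic(X_lin, y)
aic_quad = ols_aic(X_quad, y)
print('linear AIC:', aic_lin, 'quadratic AIC:', aic_quad)
```

The model with the smaller AIC is preferred; on data whose trend is truly linear, the quadratic model's extra parameter is usually (though not always) penalised enough to lose the comparison.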
Detect the corners on the real images and compute the A matrix
To detect the corners of each square in each image, we used the OpenCV function findChessboardCorners(). This function returns every corner found in an image, given the board dimensions in the real world (6, 8). So, for every image, we executed this function and associated each corner [x, y] found in the image with its equivalent in real-world coordinates [X, Y, Z]. Then we calculated all the rows of the A matrix, given the association between real-world coordinates and image-plane coordinates for every corner.
matA = list()
for item in range(NUMIMG):
    img = imagesList[item]
    _, boardCorners = cv2.findChessboardCorners(img, BOARDDIM, None)
    boardCorners = boardCorners.reshape((BOARDDIM[0] * BOARDDIM[1], 2))
    for k in range(BOARDDIM[0] * BOARDDIM[1]):
        x, y = boardCorners[k, :]
        X, Y, Z = realSquares[k, :]
        # Row of A: [xX, xY, xZ, x, -yX, -yY, -yZ, -y]; the last four
        # entries all carry the -y sign, per the derivation below.
        matA.append([x*X, x*Y, x*Z, x, -y*X, -y*Y, -y*Z, -y])
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Now we must compute the parameters of the rotation matrix R and translation vector T, given the results of the SVD (singular value decomposition) of matrix A (remember this matrix was generated, in the loop above, from the products of each square corner's real-world coordinates with its image-plane coordinates, plus a column with -y at the end of each row of A). First, let's obtain the v vector:
* v[1] = r[2,1]
* v[2] = r[2,2]
* v[3] = r[2,3]
* v[4] = Ty
* v[5] = a*r[1,1]
* v[6] = a*r[1,2]
* v[7] = a*r[1,3]
* v[8] = a*Tx

r[x,y] are the elements of the R matrix, Ty and Tx are the elements of the T vector, and 'a' is the ratio between the number of pixels on a horizontal image line and on a vertical line (the aspect ratio). To obtain the v vector of the equation Av = 0, we did the singular value decomposition using the function numpy.linalg.svd(), which returns U, D, V (note that numpy actually returns V transposed). The solution vector v is the row of the returned V that corresponds to the smallest value in the diagonal of D.
matA = np.array(matA, dtype=np.float32)
U, D, V = np.linalg.svd(matA, full_matrices=True)

# numpy returns V transposed, so the solution of Av = 0 is the ROW of V
# associated with the smallest singular value in D.
# In the given sample, D always contains a (near) zero in the 7th position;
# picking another row would yield a vector of (near) null values.
vecV = V[6,:]
v1, v2, v3, v4, v5, v6, v7, v8 = vecV
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
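To see why the right-singular vector belonging to the smallest singular value solves Av = 0, here is a small synthetic check (the names `v_true` and `v_est` are illustrative, not part of the calibration code):

```python
import numpy as np

# Build a synthetic homogeneous system A v = 0 with a known unit solution
rng = np.random.default_rng(1)
v_true = rng.normal(size=8)
v_true /= np.linalg.norm(v_true)

# Generate 200 random rows, then project each one onto the orthogonal
# complement of v_true so that A @ v_true == 0 (up to float precision).
rows = rng.normal(size=(200, 8))
A = rows - np.outer(rows @ v_true, v_true)

# numpy's svd returns V transposed; the ROW of Vt paired with the smallest
# singular value minimises ||A v|| over unit vectors v.
U, D, Vt = np.linalg.svd(A, full_matrices=False)
v_est = Vt[np.argmin(D), :]

# v_est equals v_true up to an arbitrary overall sign
err = min(np.linalg.norm(v_est - v_true), np.linalg.norm(v_est + v_true))
print('recovery error:', err)
```

Because the solution of a homogeneous system is only determined up to scale, the recovered vector may differ from the true one by an overall sign, which is exactly why the calibration procedure needs the sign test on gamma later on.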
Scale factor = sqrt(r[2,1]^2 + r[2,2]^2 + r[2,3]^2)
# Compute the scale factor given the vector v gamma = np.sqrt(v1**2 + v2**2 + v3**2)
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Aspect ratio = sqrt(v[5]^2 + v[6]^2 + v[7]^2) / Scale factor
# Compute the aspect ratio (alpha) alpha = np.sqrt(v5**2 + v6**2 + v7**2) / gamma
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Extraction of rotation matrix R and translation vector T given the elements of v vector:
# First row of R matrix
r11, r12, r13 = [v5 / alpha, v6 / alpha, v7 / alpha]
# Second row of R matrix
r21, r22, r23 = v1/gamma, v2/gamma, v3/gamma
# Third row of R matrix, computed as the cross product of rows 1 and 2
r31, r32, r33 = np.cross([r11, r12, r13], [r21, r22, r23])
# Obtain the elements of the translation vector
Tx, Ty = [v8/alpha, v4]
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
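A useful sanity check on the recovered R: the rows of a rotation matrix are orthonormal, its determinant is +1, and the third row is the cross product of the first two, which is exactly how r31..r33 were computed above. A minimal sketch with a known rotation about the z axis:

```python
import numpy as np

# A known rotation about the z axis by 30 degrees
theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

rows_orthonormal = np.allclose(R @ R.T, np.eye(3))      # R Rᵀ = I
proper_rotation = np.isclose(np.linalg.det(R), 1.0)     # det R = +1
third_is_cross = np.allclose(np.cross(R[0], R[1]), R[2])
print(rows_orthonormal, proper_rotation, third_is_cross)
```

The same three checks can be run on the estimated `matR`; with noisy corner detections they will hold only approximately, which gives a quick measure of calibration quality.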
Determine the sign of gamma, to detect a possible sign inversion of the first two rows of the R matrix. Then we compute the parameters Tz and fx by creating another matrix A and a vector B, and solving the resulting system in the least-squares sense with np.linalg.lstsq(matA, vecB).
# The sign test below reuses x, X, Y, Z left over from the last loop iteration above.
# If this product is bigger than 0, invert the sign of R[1,:], R[2,:], Tx and Ty
if x*(r11*X + r12*Y + r13*Z + Tx) > 0:
    r11 = -r11
    r12 = -r12
    r13 = -r13
    r21 = -r21
    r22 = -r22
    r23 = -r23
    Tx = -Tx
    Ty = -Ty

del matA
matA = list()
vecB = list()

# Generate the new matrix A and vector B
for item in range(NUMIMG):
    _, boardCorners = cv2.findChessboardCorners(imagesList[item], BOARDDIM, None)
    boardCorners = boardCorners.reshape((BOARDDIM[0] * BOARDDIM[1], 2))
    for k in range(48):
        x, y = boardCorners[k, :]
        X, Y, Z = realSquares[k]
        matA.append([x, (r11*X + r12*Y + r13*Z + Tx)])
        vecB.append([-x*(r31*X + r32*Y + r33*Z)])

matA = np.array(matA)
vecB = np.array(vecB)

# Solve the system Ax = B by least squares
vecSol, _, _, _ = np.linalg.lstsq(matA, vecB, rcond=None)

# Obtain Tz and fx
Tz, fx = vecSol

# Compute fy
fy = fx / alpha

# Matrix R and vector T representation in proper numpy objects
matR = np.array([[r11, r12, r13],
                 [r21, r22, r23],
                 [r31, r32, r33]])
vecT = np.array([[Tx],
                 [Ty],
                 [Tz]])
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Print the results
print("Matrix R\n{}".format(matR))
print("\nVector T\n{}".format(vecT))
print("fx = {}".format(fx))
print("fy = {}".format(fy))
print("alpha = {}".format(alpha))
print("gamma = {}".format(gamma))
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Results given by the Toolbox using Matlab

% Intrinsic and Extrinsic Camera Parameters
%
% This script file can be directly executed under Matlab to recover the camera intrinsic and extrinsic parameters.
% IMPORTANT: This file contains neither the structure of the calibration objects nor the image coordinates of the calibration points.
% All those complementary variables are saved in the complete matlab data file Calib_Results.mat.
% For more information regarding the calibration model visit http://www.vision.caltech.edu/bouguetj/calib_doc/

Intrinsic Camera Parameters

%-- Focal length:
fc = [ 1319.360839018184800 ; 1325.862886727936900 ];
%-- Principal point:
cc = [ 801.687461000126630 ; 398.685724393203370 ];
%-- Skew coefficient:
alpha_c = 0.000000000000000;
%-- Distortion coefficients:
kc = [ 0.094052171503048 ; -0.196804510414059 ; -0.009826260896323 ; 0.003042938250442 ; 0.000000000000000 ];
%-- Focal length uncertainty:
fc_error = [ 7.614776975212465 ; 7.882743402506637 ];
%-- Principal point uncertainty:
cc_error = [ 10.631216021613728 ; 10.799630586885627 ];
%-- Skew coefficient uncertainty:
alpha_c_error = 0.000000000000000;
%-- Distortion coefficients uncertainty:
kc_error = [ 0.022911520624474 ; 0.089089522533125 ; 0.002827995432640 ; 0.003450888593813 ; 0.000000000000000 ];
%-- Image size:
nx = 1600;
ny = 904;

%-- Various other variables (may be ignored if you do not use the Matlab Calibration Toolbox):
%-- Those variables are used to control which intrinsic parameters should be optimized
n_ima = 9;                        % Number of calibration images
est_fc = [ 1 ; 1 ];               % Estimation indicator of the two focal variables
est_aspect_ratio = 1;             % Estimation indicator of the aspect ratio fc(2)/fc(1)
center_optim = 1;                 % Estimation indicator of the principal point
est_alpha = 0;                    % Estimation indicator of the skew coefficient
est_dist = [ 1 ; 1 ; 1 ; 1 ; 0 ]; % Estimation indicator of the distortion coefficients
img1w = cv2.imread('extrin_param.png', cv2.IMREAD_COLOR) img_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB) plt.figure(1) plt.imshow(img_rgb) img1w = cv2.imread('extrin_param1.png', cv2.IMREAD_COLOR) img_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB) plt.figure(2) plt.imshow(img_rgb)
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Extrinsic Camera Parameters

%-- The rotation (omc_kk) and the translation (Tc_kk) vectors for every calibration image and their uncertainties
%-- Image #1:
omc_1 = [ 1.984622e+00 ; 1.845352e-01 ; 2.870369e-01 ];
Tc_1 = [ -1.574390e+02 ; 5.101294e+01 ; 3.756956e+02 ];
omc_error_1 = [ 8.027448e-03 ; 4.932236e-03 ; 8.087287e-03 ];
Tc_error_1 = [ 3.093133e+00 ; 3.203087e+00 ; 2.546667e+00 ];
%-- Image #2:
omc_2 = [ 2.508173e+00 ; -2.174940e-01 ; 2.164566e-01 ];
Tc_2 = [ -1.390132e+02 ; 6.165338e+01 ; 3.904267e+02 ];
omc_error_2 = [ 8.521395e-03 ; 3.432194e-03 ; 1.065160e-02 ];
Tc_error_2 = [ 3.169612e+00 ; 3.282157e+00 ; 2.471550e+00 ];
%-- Image #3:
omc_3 = [ 2.527412e+00 ; -1.619762e-01 ; 2.726591e-01 ];
Tc_3 = [ -1.320341e+02 ; 3.732533e+01 ; 3.677618e+02 ];
omc_error_3 = [ 8.562994e-03 ; 3.575236e-03 ; 1.069876e-02 ];
Tc_error_3 = [ 2.971579e+00 ; 3.088936e+00 ; 2.324621e+00 ];
%-- Image #4:
omc_4 = [ 2.403292e+00 ; 2.859610e-01 ; 3.590933e-01 ];
Tc_4 = [ -1.307800e+02 ; 6.399676e+01 ; 3.768658e+02 ];
omc_error_4 = [ 8.352029e-03 ; 3.534587e-03 ; 9.799631e-03 ];
Tc_error_4 = [ 3.104473e+00 ; 3.162434e+00 ; 2.366463e+00 ];
%-- Image #5:
omc_5 = [ 1.950385e+00 ; 8.878343e-02 ; -3.789958e-02 ];
Tc_5 = [ -8.674013e+01 ; 7.116935e+01 ; 3.170839e+02 ];
omc_error_5 = [ 7.936524e-03 ; 4.988629e-03 ; 8.027915e-03 ];
Tc_error_5 = [ 2.598114e+00 ; 2.646435e+00 ; 2.018596e+00 ];
%-- Image #6:
omc_6 = [ 2.396903e+00 ; -5.169785e-01 ; 4.419722e-02 ];
Tc_6 = [ -8.462341e+01 ; 5.978167e+01 ; 3.749603e+02 ];
omc_error_6 = [ 8.379910e-03 ; 3.952135e-03 ; 1.001740e-02 ];
Tc_error_6 = [ 3.031554e+00 ; 3.096359e+00 ; 2.264016e+00 ];
%-- Image #7:
omc_7 = [ 2.472130e+00 ; -1.340944e+00 ; -3.767778e-01 ];
Tc_7 = [ -7.211444e+01 ; 1.104529e+02 ; 3.959351e+02 ];
omc_error_7 = [ 8.504043e-03 ; 3.186557e-03 ; 1.220480e-02 ];
Tc_error_7 = [ 3.256782e+00 ; 3.260764e+00 ; 2.366755e+00 ];
%-- Image #8:
omc_8 = [ -2.244006e+00 ; 2.066472e+00 ; -2.738894e-01 ];
Tc_8 = [ 5.965238e+01 ; 8.676875e+01 ; 3.136598e+02 ];
omc_error_8 = [ 6.038871e-03 ; 5.565660e-03 ; 1.031145e-02 ];
Tc_error_8 = [ 2.556427e+00 ; 2.563756e+00 ; 1.969765e+00 ];
%-- Image #9:
omc_9 = [ -2.292830e+00 ; 1.862245e+00 ; -4.464243e-02 ];
Tc_9 = [ 2.162723e+01 ; 7.229592e+01 ; 3.928030e+02 ];
omc_error_9 = [ 6.143239e-03 ; 5.905689e-03 ; 1.211607e-02 ];
Tc_error_9 = [ 3.168044e+00 ; 3.163959e+00 ; 2.144994e+00 ];
# Display the detected-corner images for all nine calibration views
for i in range(1, 10):
    img1w = cv2.imread('corner_{}.png'.format(i), cv2.IMREAD_COLOR)
    img_rgb = cv2.cvtColor(img1w, cv2.COLOR_BGR2RGB)
    plt.figure(i)
    plt.imshow(img_rgb)
quiz1/Quiz1-Calibration.ipynb
eugeniopacceli/ComputerVision
mit
Description
Figure P1-14 shows a simple single-phase ac power system with three loads. The voltage source is $\vec{V} = 240\,V\angle 0^\circ$, and the impedances of the three loads are:
$$\vec{Z}_1 = 10\,\Omega\angle 30^\circ \quad \vec{Z}_2 = 10\,\Omega\angle 45^\circ \quad \vec{Z}_3 = 10\,\Omega\angle -90^\circ $$
<img src="figs/FigC_P1-14.jpg" width="80%">
V = 240 # [V] Z1 = 10.0 * exp(1j* 30/180*pi) Z2 = 10.0 * exp(1j* 45/180*pi) Z3 = 10.0 * exp(1j*-90/180*pi)
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
Answer the following questions about this power system. (a) Assume that the switch shown in the figure is initially open, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source. (b) How much real, reactive, and apparent power is being consumed by each load with the switch open? (c) Assume that the switch shown in the figure is now closed, and calculate the current I , the power factor, and the real, reactive, and apparent power being supplied by the source. (d) How much real, reactive, and apparent power is being consumed by each load with the switch closed? (e) What happened to the current flowing from the source when the switch closed? Why? SOLUTION (a) With the switch open, only loads 1 and 2 are connected to the source. The current $\vec{I}_1$ in Load 1 and the current $\vec{I}_2$ in Load 2 are:
I1 = V/Z1
I2 = V/Z2
# arctan(imag/real) is valid here because both currents have positive real
# parts; np.angle(I) would be the quadrant-safe alternative.
I1_angle = arctan(I1.imag/I1.real)
I2_angle = arctan(I2.imag/I2.real)
print('''I1 = {:.1f} A ∠{:.1f}°
I2 = {:.1f} A ∠{:.1f}°'''.format(
        abs(I1), I1_angle/pi*180, 
        abs(I2), I2_angle/pi*180))
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
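A quadrant-safe way to get a phasor's magnitude and angle is numpy's `angle`, which works in all four quadrants where `arctan(imag/real)` does not. A small sketch (the `polar` helper is hypothetical, not from the notebook), using the Load-1 numbers from above:

```python
import numpy as np

# Hypothetical helper: magnitude and angle (in degrees) of a phasor
def polar(z):
    return abs(z), np.degrees(np.angle(z))

V = 240.0                                  # source voltage, 240 V at 0 degrees
Z1 = 10.0 * np.exp(1j * np.deg2rad(30.0))  # Load 1: 10 ohm at 30 degrees
I1 = V / Z1

mag, ang = polar(I1)
# 240 V across 10 Ω∠30° gives 24 A∠-30°
print('I1 = {:.1f} A ∠{:.1f}°'.format(mag, ang))
```

Dividing the voltage phasor by the impedance phasor divides the magnitudes and subtracts the angles, so an inductive (positive-angle) impedance always yields a lagging (negative-angle) current when the voltage is the reference.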
Therefore the total current from the source is $\vec{I} = \vec{I}_1 + \vec{I}_2$:
I = I1 + I2 I_angle = arctan(I.imag/I.real) print('I = {:.1f} A ∠{:.1f}°'.format( abs(I), I_angle/pi*180)) print('==================')
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
The power factor supplied by the source is:
PF = cos(-I_angle) PF
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
lagging (because the current lags behind the voltage). Note that the angle $\theta$ used in the power factor and power calculations is the impedance angle, which is the negative of the current angle as long as the voltage is at $0^\circ$. The real, reactive, and apparent power supplied by the source are
$$S = VI^* \quad P = VI\cos\theta = \mathrm{real}(S) \quad Q = VI\sin\theta = \mathrm{imag}(S)$$
So = V*conj(I) # I use index "o" for open switch So
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
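The relations $S = VI^*$, $P = \mathrm{real}(S)$, $Q = \mathrm{imag}(S)$ can be checked numerically on a single illustrative load (numbers chosen to match Load 1 alone, not the combined current):

```python
import numpy as np

# Load 1 alone: V = 240 V∠0°, I1 = 24 A∠-30° (lagging)
V = 240.0
I = 24.0 * np.exp(-1j * np.deg2rad(30.0))

S = V * np.conj(I)        # complex power, S = V I*
P, Q = S.real, S.imag     # real (W) and reactive (var) power
PF = np.cos(np.angle(S))  # the angle of S equals the impedance angle

# |S| = 5760 VA, P = 5760*cos(30°) ≈ 4988.3 W, Q = 2880 var, PF ≈ 0.866 lagging
print('S = {:.1f} VA, P = {:.1f} W, Q = {:.1f} var, PF = {:.3f}'.format(abs(S), P, Q, PF))
```

Conjugating the current flips its angle, so the angle of $S$ comes out equal to the impedance angle $\theta$, which is why a lagging (inductive) load gives positive $Q$.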
Let's pretty-print that:
print(''' So = {:>7.1f} VA Po = {:>7.1f} W Qo = {:>7.1f} var ================'''.format(abs(So), So.real, So.imag))
Chapman/Ch1-Problem_1-19.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense