Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation: $$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$ Use Numpy ufuncs and no loops in your function.
def geo_brownian(t, W, X0, mu, sigma):
    """Return X(t) for geometric Brownian motion with drift mu, volatility sigma."""
    return X0 * np.exp((mu - sigma**2 / 2) * t + sigma * W)

assert True  # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
sthuggins/phys202-2015-work
mit
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
# YOUR CODE HERE
raise NotImplementedError()

assert True  # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
sthuggins/phys202-2015-work
mit
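A possible approach for this cell (a sketch only, not the graded solution: the variable names and the construction of the Wiener process from cumulative normal increments are my assumptions):

```python
import numpy as np

# Build a Wiener process W(t) from cumulative normal increments,
# then apply the GBM formula X(t) = X0 * exp((mu - sigma**2/2) * t + sigma * W(t)).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
dt = t[1] - t[0]
dW = rng.normal(0.0, np.sqrt(dt), size=len(t) - 1)
W = np.concatenate([[0.0], np.cumsum(dW)])  # W(0) = 0

X0, mu, sigma = 1.0, 0.5, 0.3
X = X0 * np.exp((mu - sigma**2 / 2) * t + sigma * W)
```

Plotting would then be `plt.plot(t, X)` with `plt.xlabel('t')` and `plt.ylabel('X(t)')`, as the exercise asks.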
route_quad
c = gf.Component("pads_route_quad")
pt = c << gf.components.pad_array(orientation=270, columns=3)
pb = c << gf.components.pad_array(orientation=90, columns=3)
pt.move((100, 200))
route = gf.routing.route_quad(pt.ports["e11"], pb.ports["e11"], layer=(49, 0))
c.add(route)
c
docs/notebooks/041_routing_electical.ipynb
gdsfactory/gdsfactory
mit
get_route_from_steps
c = gf.Component("pads_route_from_steps")
pt = c << gf.components.pad_array(orientation=270, columns=3)
pb = c << gf.components.pad_array(orientation=90, columns=3)
pt.move((100, 200))
route = gf.routing.get_route_from_steps(
    pb.ports["e11"],
    pt.ports["e11"],
    steps=[
        {"y": 200},
    ],
    cross_sec...
docs/notebooks/041_routing_electical.ipynb
gdsfactory/gdsfactory
mit
Bundle of routes (get_bundle_electrical)
import gdsfactory as gf

c = gf.Component("pads_bundle")
pt = c << gf.components.pad_array(orientation=270, columns=3)
pb = c << gf.components.pad_array(orientation=90, columns=3)
pt.move((100, 200))
routes = gf.routing.get_bundle_electrical(
    pb.ports, pt.ports, end_straight_length=60, separation=30
)
for route i...
docs/notebooks/041_routing_electical.ipynb
gdsfactory/gdsfactory
mit
Bundle from steps (get_bundle_from_steps_electrical)
c = gf.Component("pads_bundle_steps")
pt = c << gf.components.pad_array(
    gf.partial(gf.components.pad, size=(30, 30)),
    orientation=270,
    columns=3,
    spacing=(50, 0),
)
pb = c << gf.components.pad_array(orientation=90, columns=3)
pt.move((300, 500))
routes = gf.routing.get_bundle_from_steps_electrical(
    ...
docs/notebooks/041_routing_electical.ipynb
gdsfactory/gdsfactory
mit
But is it right, though? What's with the weird hump for early departures (departure_delay less than zero)? First, we should verify that we can apply Bayes' Law. Grouping by the departure delay is incorrect if the departure delay is a chaotic input variable. We have to do exploratory analysis to validate that: If a flight ...
%%bigquery df
SELECT
  departure_delay,
  COUNT(1) AS num_flights,
  APPROX_QUANTILES(arrival_delay, 10) AS arrival_delay_deciles
FROM `bigquery-samples.airline_ontime_data.flights`
GROUP BY departure_delay
HAVING num_flights > 100
ORDER BY departure_delay ASC

import pandas as pd
percentiles = df['arrival_dela...
blogs/bigquery_datascience/bigquery_datascience.ipynb
turbomanage/training-data-analyst
apache-2.0
Note the crazy non-linearity for the top half of the flights that leave more than 20 minutes early. Most likely, these are planes trying to beat some weather situation. About half of such flights succeed (the linear bottom) and the other half don't (the non-linear top). The average is what we saw as the weird hump in ...
%%bigquery
CREATE OR REPLACE MODEL ch09eu.bicycle_model_dnn
OPTIONS(input_label_cols=['duration'],
        model_type='dnn_regressor',
        hidden_units=[32, 4])
TRANSFORM(
  duration,
  start_station_name,
  CAST(EXTRACT(dayofweek from start_date) AS STRING) as dayofweek,
  CAST(EXTRACT(hour from start_date) A...
blogs/bigquery_datascience/bigquery_datascience.ipynb
turbomanage/training-data-analyst
apache-2.0
BigQuery and TensorFlow. Batch predictions of a TensorFlow model from BigQuery!
%%bigquery
CREATE OR REPLACE MODEL advdata.txtclass_tf
OPTIONS (model_type='tensorflow',
         model_path='gs://cloud-training-demos/txtclass/export/exporter/1549825580/*')

%%bigquery
SELECT
  input,
  (SELECT AS STRUCT(p, ['github', 'nytimes', 'techcrunch'][ORDINAL(s)]) prediction
   FROM (SELECT p, ROW_NUMBER() ...
blogs/bigquery_datascience/bigquery_datascience.ipynb
turbomanage/training-data-analyst
apache-2.0
Roadrunner methods. Querying an Antimony model from a model database: use urllib2 to download the Antimony model of the "Repressilator". Use the urllib2 methods urlopen() and read(). The URL for the Repressilator is: http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt Elowitz, M. B...
Repressilator = urllib2.urlopen('http://antimony.sourceforge.net/examples/biomodels/BIOMD0000000012.txt').read()
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Create an instance of roadrunner, loading the Repressilator as the model at the same time. Use loada() from tellurium.
rr = te.loada(Repressilator)
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
In the following section we want to try out some of the methods of tellurium's roadrunner. Display the model as Antimony or SBML. You can do this with getAntimony() or getSBML().
print(rr.getAntimony())
print(rr.getSBML())
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Solver methods. Caution: although resetToOrigin() resets the model to its original state, solver-specific settings are preserved. It is therefore best to always use te.loada() as a full reset! With getIntegrator() you can display the solver and its current settings.
rr = te.loada(Repressilator)
print(rr.getIntegrator())
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Change the solver from 'CVODE' to the Runge-Kutta solver 'rk45' and display the settings again. Use setIntegrator() and getIntegrator(). What do you notice?
rr = te.loada(Repressilator)
rr.setIntegrator('rk45')
print(rr.getIntegrator())
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Simulate the Repressilator from 0 s to 1000 s and plot the results for different values of steps (e.g. steps = 10 or 10000) in the simulate method. What does the steps argument do?
rr = te.loada(Repressilator)
rr.simulate(0, 1000, 1000)
rr.plot()
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Continue using 'CVODE' and change the solver parameter 'relative_tolerance' (e.g. to 1 or 10). Use steps = 10000 in simulate(). What do you notice? Hint: the method you need is roadrunner.getIntegrator().setValue().
rr = te.loada(Repressilator)
rr.getIntegrator().setValue('relative_tolerance', 0.0000001)
rr.getIntegrator().setValue('relative_tolerance', 1)
rr.simulate(0, 1000, 1000)
rr.plot()
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
An ODE model as a Python object. Above we saw that tellurium creates an instance of RoadRunner when a model is loaded. The underlying model itself is also accessible: via .model there are additional methods for manipulating the actual model:
rr = te.loada(Repressilator)
print(type(rr))
print(type(rr.model))
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Exercise 1 - Parameter scan: A) Look at the implementation of the 'Repressilator' model: which parameters are there? B) Create a parameter scan that varies the value of the parameter named 'n' in the Repressilator (for example n=1, n=2, n=3, ...). Simulate the model for each chosen 'n'. ...
import matplotlib.pyplot as plt
import numpy as np

fig_phase = plt.figure(figsize=(5, 5))
rr = te.loada(Repressilator)
for l, i in enumerate([1.0, 1.7, 3.0, 10.]):
    fig_phase.add_subplot(2, 2, l + 1)
    rr.n = i
    rr.reset()
    result = rr.simulate(0, 500, 500, selections=['time', 'X', 'PX'])
    plt.plot(result[...
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Exercise 2 - Initial-value scan: Create a "scan" that varies the initial value of the species Y. The model behaviour itself is less interesting here. Rather, take care to set the resets so that 'Y' actually starts the simulation at the value you set.
import matplotlib.pyplot as plt
import numpy as np

rr = te.loada(Repressilator)
print(rr.model.getFloatingSpeciesInitAmountIds())
print(rr.model.getFloatingSpeciesInitAmounts())
for l, i in enumerate([1, 5, 10, 20]):
    # a selection of variants (there are more possibilities...)
    # Variant 1 - wrong
    #r...
tellurium/loesungen/Roadrunner_Uebung_Loesung.ipynb
tbphu/fachkurs_bachelor
mit
Constants
DATA_SET_URL = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
DATA_SET = 'auto_mpg.data'
RESULTS_DIR = 'results'
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Clean out previous results
if os.path.isfile(DATA_SET):
    os.remove(DATA_SET)

shutil.rmtree(RESULTS_DIR, ignore_errors=True)
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Retrieve data from UCI Machine Learning Repository. Download required data.
r = requests.get(DATA_SET_URL)
if r.status_code == 200:
    with open(DATA_SET, 'w') as f:
        f.write(r.content.decode("utf-8"))
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Create Pandas DataFrame from downloaded data
raw_df = pd.read_csv(DATA_SET, header=None, na_values="?",
                     comment='\t', sep=" ", skipinitialspace=True)
raw_df.columns = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
                  'Acceleration', 'ModelYear', 'Origin']
raw_df.shape
raw...
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Create train/test split
train_df, test_df = train_test_split(raw_df, train_size=0.8, random_state=17)
print(train_df.shape)
print(test_df.shape)
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Setup Ludwig config
num_features = ['Cylinders', 'Displacement', 'Horsepower', 'Weight',
                'Acceleration', 'ModelYear']
cat_features = ['Origin']
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Create Ludwig input_features
input_features = []

# setup input features for numerical variables
for p in num_features:
    a_feature = {'name': p, 'type': 'numerical',
                 'preprocessing': {'missing_value_strategy': 'fill_with_mean',
                                   'normalization': 'zscore'}}
    input_features.append(a_feature)

# setup input features for categori...
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Create Ludwig output features
output_features = [
    {
        'name': 'MPG',
        'type': 'numerical',
        'num_fc_layers': 2,
        'fc_size': 64
    }
]

config = {
    'input_features': input_features,
    'output_features': output_features,
    'training': {
        'epochs': 100,
        'batch_size': 32
    }
}

config
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Perform K-fold Cross Validation analysis
%%time
with tempfile.TemporaryDirectory() as tmpdir:
    data_csv_fp = os.path.join(tmpdir, 'train.csv')
    train_df.to_csv(data_csv_fp, index=False)
    (
        kfold_cv_stats,
        kfold_split_indices
    ) = kfold_cross_validate(
        num_folds=5,
        config=config,
        dataset=data_csv_fp,
        ...
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Train model and assess model performance
model = LudwigModel(
    config=config,
    logging_level=logging.ERROR
)

%%time
training_stats = model.train(
    training_set=train_df,
    output_directory=RESULTS_DIR,
)

test_stats, mpg_hat_df, _ = model.evaluate(dataset=test_df,
                                          collect_predictions=True,
                                          collect_overall_stats=True)
test_stats

a = plt.axes(aspe...
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Compare K-fold Cross Validation metrics against hold-out test metrics. Hold-out Test Metrics:
test_stats['MPG']
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
K-fold Cross Validation Metrics
kfold_cv_stats['overall']['MPG']
examples/kfold_cv/regression_example.ipynb
uber/ludwig
apache-2.0
Exercise 1. We proceed by building the algorithm for testing the accuracy of the numerical derivative.
def f(x):
    return np.exp(np.sin(x))

def df(x):
    return f(x) * np.cos(x)

def absolute_err(f, df, h):
    g = (f(h) - f(0)) / h
    return np.abs(df(0) - g)

hs = 10. ** -np.arange(15)
epsilons = np.empty(15)
for i, h in enumerate(hs):
    epsilons[i] = absolute_err(f, df, h)
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
a)
plt.plot(hs, epsilons, 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel(r'$\epsilon(h)$')
plt.grid(linestyle='dotted')
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
We can see that down to $h = 10^{-7}$ the trend is that the absolute error diminishes, but after that it goes back up. This is because computing $f(h) - f(0)$ is an ill-conditioned operation: the two values are really close to each other. Exercise 2 a. We can easily see that when $\|x\...
x_1 = symbols('x_1') fun1 = 1 / (1 + 2*x_1) - (1 - x_1) / (1 + x_1) fun1
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
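The cancellation described above can be seen directly with plain floats (a small illustration, not part of the original assignment):

```python
# Subtracting two nearly equal floats discards leading significant digits,
# which is why the forward difference degrades for very small h.
h = 1e-12
diff = (1.0 + h) - 1.0          # the exact answer would be 1e-12
rel_err = abs(diff - h) / h     # relative error introduced by cancellation
print(diff, rel_err)
```

At `h = 1e-12` the spacing of floats near 1.0 (about `2.2e-16`) already makes the computed difference wrong in the fourth significant digit.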
We can modify the previous expression to be well conditioned around 0. This version is well conditioned.
fun2 = simplify(fun1) fun2
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
A comparison between the two ways of computing this value: we can clearly see that far from 0 the methods are nearly identical, but the closer you get to 0, the more the two methods diverge.
def f1(x):
    return 1 / (1 + 2*x) - (1 - x) / (1 + x)

def f2(x):
    return 2*x**2 / ((1 + 2*x) * (1 + x))

hs = 2. ** -np.arange(64)
plt.plot(hs, np.abs(f1(hs) - f2(hs)))
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel('differences')
plt.grid(linestyle='dotted')
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
b. As before we have the subtraction of two really close values, so it is going to be ill conditioned for $x$ really big. $ \sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}} = \left(\sqrt{x + \frac{1}{x}} - \sqrt{x - \frac{1}{x}}\right) \frac{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}}{\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}} = \frac{2}{x\left(\sqrt{x + \frac{1}{x}} + \sqrt{x - \frac{1}{x}}\right)} $
def f3(x):
    return np.sqrt(x + 1/x) - np.sqrt(x - 1/x)

def f4(x):
    return 2 / (np.sqrt(x + 1/x) + np.sqrt(x - 1/x)) / x

hs = 2. ** np.arange(64)  # float base avoids int64 overflow at 2**63
plt.plot(hs, np.abs(f3(hs) - f4(hs)), 'o')
plt.yscale('log')
plt.xscale('log')
plt.xlabel(r'h')
plt.ylabel('differences')
plt.grid(linestyle=...
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
Exercise 3. a. If we assume we possess a 6-faced die thrown three times, each throw has six possible outcomes. So we have to take all the combinations of 6 numbers repeated 3 times. It is intuitive that our $\Omega$ will be composed of $6^3 = 216$ samples, of the form: $(1, 1, 1), (1, 1, 2), (1, 1, 3), ... (6, 6, 5), (6, ...
import itertools

x = [1, 2, 3, 4, 5, 6]
omega = set(itertools.product(x, repeat=3))
print(r'Omega has', len(omega), 'elements and they are:')
print(omega)
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
Concerning the $\sigma$-algebra, we should note that there is not a unique $\sigma$-algebra for a given $\Omega$, but in this case a reasonable choice would be the powerset of $\Omega$. b. If the die is fair we have the discrete uniform distribution. And to compute the value of $\rho(\omega)$ we...
1/(6**3)
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
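The powerset mentioned above can be generated with itertools (a small illustration; for the full $\Omega$ of 216 outcomes the powerset has $2^{216}$ events, so we use a tiny sample space instead):

```python
from itertools import chain, combinations

def powerset(iterable):
    """All subsets of an iterable - a valid sigma-algebra for a finite sample space."""
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

events = list(powerset({1, 2, 3}))
print(len(events))  # 2**3 = 8 events, from the empty set up to {1, 2, 3}
```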
c. If we want to determine the set $A$ we can consider its complement $A^c = \{\text{not even one throw is a 6}\}$. This event is analogous to the sample space of a 5-faced die, so its size will be $5^3$. To compute the size of $A$ we can simply compute $6^3 - 5^3$, and for the event itself we just ...
print('Size of A^c:', 5**3)
print('Size of A: ', 6**3 - 5**3)

36 + 5 * 6 + 5 * 5  # alternative count, also 91

x = [1, 2, 3, 4, 5]
A_c = set(itertools.product(x, repeat=3))
print('A^c has ', len(A_c), 'elements.\nA^c =', A_c)
print('A has ', len(omega - A_c), 'elements.\nA =', omega - A_c)
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
P(A) will be $\frac{91}{216}$
91 / 216
UQ/assignment_2/Untitled.ipynb
LorenzoBi/courses
mit
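We can double-check the value $91/216$ by counting directly (a quick verification, not part of the original solution):

```python
from itertools import product
from fractions import Fraction

# Count the outcomes of three throws that contain at least one 6.
omega = list(product(range(1, 7), repeat=3))
favourable = [w for w in omega if 6 in w]
p = Fraction(len(favourable), len(omega))
print(p)  # 91/216
```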
In the previous chapter we modeled objects moving in one dimension, with and without drag. Now let's move on to two dimensions, and baseball! In this chapter we model the flight of a baseball including the effect of air resistance. In the next chapter we use this model to solve an optimization problem. Baseball To mode...
A = Vector(3, 4) A
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
You can access the components of a Vector by name using the dot operator, like this:
A.x, A.y
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
You can also access them by index using brackets, like this:
A[0], A[1]
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Vector objects support most mathematical operations, including addition and subtraction:
B = Vector(1, 2)
B

A + B

A - B
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
For the definition and graphical interpretation of these operations, see http://modsimpy.com/vecops. We can specify a Vector with coordinates x and y, as in the previous examples. Equivalently, we can specify a Vector with a magnitude and angle. Magnitude is the length of the vector: if the Vector represents a position...
mag = vector_mag(A)
theta = vector_angle(A)
mag, theta
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
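These are the usual Euclidean definitions; a plain-NumPy check (assuming `vector_mag` and `vector_angle` follow the standard conventions):

```python
import numpy as np

# Magnitude is the Euclidean norm; angle is measured from the positive x-axis.
x, y = 3.0, 4.0
mag = np.hypot(x, y)        # sqrt(x**2 + y**2)
theta = np.arctan2(y, x)    # angle in radians
print(mag, theta)  # 5.0, about 0.927 rad
```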
The magnitude is 5 because the length of A is the hypotenuse of a 3-4-5 triangle. The result from vector_angle is in radians, and most Python functions, like sin and cos, work with radians. But many people think more naturally in degrees. Fortunately, NumPy provides a function to convert radians to degrees:
from numpy import rad2deg

angle = rad2deg(theta)
angle
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
And a function to convert degrees to radians:
from numpy import deg2rad

theta = deg2rad(angle)
theta
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Following convention, I'll use angle for a value in degrees and theta for a value in radians. If you are given an angle and velocity, you can make a Vector using pol2cart, which converts from polar to Cartesian coordinates. For example, here's a new Vector with the same angle and magnitude of A:
x, y = pol2cart(theta, mag)
Vector(x, y)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
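pol2cart is assumed here to implement the standard polar-to-Cartesian conversion; in plain NumPy:

```python
import numpy as np

# x = r * cos(theta), y = r * sin(theta); recovers the components of A.
mag = 5.0
theta = np.arctan2(4.0, 3.0)
x, y = mag * np.cos(theta), mag * np.sin(theta)
print(x, y)  # approximately (3.0, 4.0)
```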
Another way to represent the direction of A is a unit vector, which is a vector with magnitude 1 that points in the same direction as A. You can compute a unit vector by dividing a vector by its magnitude:
A / vector_mag(A)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
We can do the same thing using the vector_hat function, so named because unit vectors are conventionally decorated with a hat, like this: $\hat{A}$.
vector_hat(A)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Now let's get back to the game. Simulating baseball flight Let's simulate the flight of a baseball that is batted from home plate at an angle of 45° and initial speed 40 m/s. We'll use the center of home plate as the origin, a horizontal x-axis (parallel to the ground), and vertical y-axis (perpendicular to the ground)...
params = Params(
    x = 0,            # m
    y = 1,            # m
    angle = 45,       # degree
    velocity = 40,    # m / s

    mass = 145e-3,    # kg
    diameter = 73e-3, # m
    C_d = 0.33,       # dimensionless
    rho = 1.2,        # kg/m**3
    g = 9.8,          # m/s**2
    t_end = 10,       # s
)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
I got the mass and diameter of the baseball from Wikipedia and the coefficient of drag is from The Physics of Baseball: The density of air, rho, is based on a temperature of 20 °C at sea level (see http://modsimpy.com/tempress). And we'll need the acceleration of gravity, g. The following function uses these quantitie...
from numpy import pi, deg2rad

def make_system(params):
    # convert angle to radians
    theta = deg2rad(params.angle)

    # compute x and y components of velocity
    vx, vy = pol2cart(theta, params.velocity)

    # make the initial state
    init = State(x=params.x, y=params.y, vx=vx, vy=vy)
    ...
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
make_system uses deg2rad to convert angle to radians and pol2cart to compute the $x$ and $y$ components of the initial velocity. init is a State object with four state variables: x and y are the components of position. vx and vy are the components of velocity. The System object also contains t_end, which is 10 se...
system = make_system(params)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
And here's the initial State:
system.init
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Next we need a function to compute drag force:
def drag_force(V, system):
    rho, C_d, area = system.rho, system.C_d, system.area

    mag = rho * vector_mag(V)**2 * C_d * area / 2
    direction = -vector_hat(V)
    f_drag = mag * direction
    return f_drag
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
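As a plain-number sanity check of the drag equation used above (values from the Params cell; the frontal area is assumed to be the circular cross-section $\pi d^2/4$):

```python
import numpy as np

# Drag magnitude: F = rho * v**2 * C_d * A / 2
rho = 1.2          # kg/m**3
C_d = 0.33
diameter = 73e-3   # m
area = np.pi * diameter**2 / 4

v = 40.0           # initial speed, m/s
f = rho * v**2 * C_d * area / 2
print(f)  # about 1.33 N
```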
This function takes V as a Vector and returns f_drag as a Vector. It uses vector_mag to compute the magnitude of V, and the drag equation to compute the magnitude of the drag force, mag. Then it uses vector_hat to compute direction, which is a unit vector in the opposite direction of V. Finally, it computes the...
vx, vy = system.init.vx, system.init.vy

V_test = Vector(vx, vy)
drag_force(V_test, system)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
The result is a Vector that represents the drag force on the baseball, in Newtons, under the initial conditions. Now we're ready for a slope function:
def slope_func(t, state, system):
    x, y, vx, vy = state
    mass, g = system.mass, system.g

    V = Vector(vx, vy)
    a_drag = drag_force(V, system) / mass
    a_grav = g * Vector(0, -1)

    A = a_grav + a_drag

    return V.x, V.y, A.x, A.y
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
As usual, the parameters of the slope function are a time, a State object, and a System object. In this example, we don't use t, but we can't leave it out because when run_solve_ivp calls the slope function, it always provides the same arguments, whether they are needed or not. slope_func unpacks the State object into...
slope_func(0, system.init, system)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Using vectors to represent forces and accelerations makes the code concise, readable, and less error-prone. In particular, when we add a_grav and a_drag, the directions are likely to be correct, because they are encoded in the Vector objects. We're almost ready to run the simulation. The last thing we need is an event...
def event_func(t, state, system):
    x, y, vx, vy = state
    return y
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
The event function takes the same parameters as the slope function, and returns the $y$ coordinate of position. When the $y$ coordinate passes through 0, the simulation stops. As we did with slope_func, we can test event_func with the initial conditions.
event_func(0, system.init, system)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Now we're ready to run the simulation:
results, details = run_solve_ivp(system, slope_func, events=event_func)
details.message
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
details contains information about the simulation, including a message that indicates that a "termination event" occurred; that is, the simulated ball reached the ground. results is a TimeFrame with one column for each of the state variables:
results.tail()
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
We can get the flight time like this:
flight_time = results.index[-1]
flight_time
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
And the final state like this:
final_state = results.iloc[-1]
final_state
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
The final value of y is close to 0, as it should be. The final value of x tells us how far the ball flew, in meters.
x_dist = final_state.x
x_dist
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
We can also get the final velocity, like this:
final_V = Vector(final_state.vx, final_state.vy)
final_V
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
The speed of the ball on impact is about 26 m/s, which is substantially slower than the initial velocity, 40 m/s.
vector_mag(final_V)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Trajectories To visualize the results, we can plot the $x$ and $y$ components of position like this:
results.x.plot(color='C4')
results.y.plot(color='C2', style='--')

decorate(xlabel='Time (s)',
         ylabel='Position (m)')
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
As expected, the $x$ component increases as the ball moves away from home plate. The $y$ position climbs initially and then descends, falling to 0 m near 5.0 s. Another way to view the results is to plot the $x$ component on the x-axis and the $y$ component on the y-axis, so the plotted line follows the trajectory of t...
def plot_trajectory(results):
    x = results.x
    y = results.y
    make_series(x, y).plot(label='trajectory')
    decorate(xlabel='x position (m)',
             ylabel='y position (m)')

plot_trajectory(results)
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
This way of visualizing the results is called a trajectory plot (see http://modsimpy.com/trajec). A trajectory plot can be easier to interpret than a time series plot, because it shows what the motion of the projectile would look like (at least from one point of view). Both plots can be useful, but don't get them mixed...
from matplotlib.pyplot import plot

xlim = results.x.min(), results.x.max()
ylim = results.y.min(), results.y.max()

def draw_func(t, state):
    plot(state.x, state.y, 'bo')
    decorate(xlabel='x position (m)',
             ylabel='y position (m)',
             xlim=xlim,
             ylim=ylim)

# animate(results, d...
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Exercises Exercise: Run the simulation with and without air resistance. How wrong would we be if we ignored drag?
# Hint
system2 = make_system(params.set(C_d=0))

# Solution
results2, details2 = run_solve_ivp(system2, slope_func, events=event_func)
details2.message

# Solution
plot_trajectory(results)
plot_trajectory(results2)

# Solution
x_dist2 = results2.iloc[-1].x
x_dist2

# Solution
x_d...
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: The baseball stadium in Denver, Colorado is 1,580 meters above sea level, where the density of air is about 1.0 kg / meter$^3$. How much farther would a ball hit with the same velocity and launch angle travel?
# Hint
system3 = make_system(params.set(rho=1.0))

# Solution
results3, details3 = run_solve_ivp(system3, slope_func, events=event_func)
x_dist3 = results3.iloc[-1].x
x_dist3

# Solution
x_dist3 - x_dist
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: The model so far is based on the assumption that coefficient of drag does not depend on velocity, but in reality it does. The following figure, from Adair, The Physics of Baseball, shows coefficient of drag as a function of velocity. <img src="https://github.com/AllenDowney/ModSimPy/raw/master/figs/baseball_...
import os

filename = 'baseball_drag.csv'
if not os.path.exists(filename):
    !wget https://raw.githubusercontent.com/AllenDowney/ModSimPy/master/data/baseball_drag.csv

from pandas import read_csv

baseball_drag = read_csv(filename)
mph = Quantity(baseball_drag['Velocity in mph'], units.mph)
mps = mph.to(units.mete...
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Modify the model to include the dependence of C_d on velocity, and see how much it affects the results.
# Solution

def drag_force2(V, system):
    """Compute drag force in the direction opposite `V`.

    V: velocity Vector
    system: System object with rho, area

    returns: Vector drag force
    """
    rho, area = system.rho, system.area
    C_d = drag_interp(vector_mag(V))
    mag = -rho * vector_ma...
python/soln/chap22.ipynb
AllenDowney/ModSim
gpl-2.0
Test on Images. Now you should build your pipeline to work on the images in the directory "test_images". Make sure your pipeline works well on these images before you try the videos.
import os

images = os.listdir("test_images/")

# Create a directory to save processed images
processed_directory_name = "processed_images"
if not os.path.exists(processed_directory_name):
    os.mkdir(processed_directory_name)
Term01-Computer-Vision-and-Deep-Learning/P1-LaneLines/P1.ipynb
Raag079/self-driving-car
mit
Run your solution on all test_images and make copies into the test_images directory.
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.

# kernel_size for gaussian blur
kernel_size = 5

# thresholds for canny edge
low_threshold = 60
high_threshold = 140

# constants for Hough transformation
rho = 1  # distance resolution in pixels of ...
Term01-Computer-Vision-and-Deep-Learning/P1-LaneLines/P1.ipynb
Raag079/self-driving-car
mit
Test on Videos You know what's cooler than drawing lanes over images? Drawing lanes over video! We can test our solution on two provided videos: solidWhiteRight.mp4 solidYellowLeft.mp4
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the ...
Term01-Computer-Vision-and-Deep-Learning/P1-LaneLines/P1.ipynb
Raag079/self-driving-car
mit
Reflections Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail? Please add your thoug...
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))
Term01-Computer-Vision-and-Deep-Learning/P1-LaneLines/P1.ipynb
Raag079/self-driving-car
mit
Question 1: What was the total cost to Kansas City area counties? To answer this question, we first must filter the table to only those rows which refer to a Kansas City area county.
kansas_city = table.where(lambda r: r['county'] in ('JACKSON', 'CLAY', 'CASS', 'PLATTE'))
print(len(table.rows))
print(len(kansas_city.rows))
example.py.ipynb
wireservice/agate
mit
We can then print the Sum of the costs of all those rows. (The cost column is named total_cost.)
print('$%d' % kansas_city.aggregate(agate.Sum('total_cost')))
example.py.ipynb
wireservice/agate
mit
Question 2: Which counties spent the most? This question is more complicated. First we group the data by county, which gives us a TableSet named counties. A TableSet is a group of tables with the same columns.
# Group by county
counties = table.group_by('county')

print(counties.keys())
example.py.ipynb
wireservice/agate
mit
We then use the aggregate function to sum the total_cost column for each table in the group. The resulting values are collapsed into a new table, totals, which has a row for each county and a column named total_cost_sum containing the new total.
# Aggregate totals for all counties
totals = counties.aggregate([
    ('total_cost_sum', agate.Sum('total_cost'))
])

print(totals.column_names)
example.py.ipynb
wireservice/agate
mit
Finally, we sort the counties by their total cost, limit the results to the top 20, and print the results as a text bar chart.
totals.order_by('total_cost_sum', reverse=True).limit(20).print_bars('county', 'total_cost_sum', width=100)
example.py.ipynb
wireservice/agate
mit
stand-alone mode http://spark.apache.org/docs/1.2.0/spark-standalone.html
!ls /spark/sbin
!ls ./sbin/start-master.sh
clustering.ipynb
rdhyee/ipython-spark
apache-2.0
The :class:SourceEstimate &lt;mne.SourceEstimate&gt; data structure Source estimates, commonly referred to as STC (Source Time Courses), are obtained from source localization methods. Source localization methods solve the so-called 'inverse problem'. MNE provides different methods for solving it: dSPM, sLORETA, LCMV, Mx...
import os

from mne import read_source_estimate
from mne.datasets import sample

print(__doc__)

# Paths to example data
sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')
fname_stc = os.path.join(sample_dir, 'sample_au...
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load and inspect example data This data set contains source estimation data from an audiovisual task. It has been mapped onto the inflated cortical surface representation obtained from FreeSurfer &lt;sphx_glr_auto_tutorials_plot_background_freesurfer.py&gt; using the dSPM method. It highlights a noticeable peak in the...
stc = read_source_estimate(fname_stc, subject='sample')

# Define plotting parameters
surfer_kwargs = dict(
    hemi='lh', subjects_dir=subjects_dir,
    clim=dict(kind='value', lims=[8, 12, 15]), views='lateral',
    initial_time=0.09, time_unit='s', size=(800, 800), smoothing_steps=5)

# Plot surface
brain = stc....
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
SourceEstimate (stc) A source estimate contains the time series of activations at spatial locations defined by the source space. In the context of FreeSurfer surfaces, which consist of 3D triangulations, we could call each data point on the inflated brain representation a vertex. If every vertex represents the s...
shape = stc.data.shape

print('The data has %s vertex locations with %s sample points each.' % shape)
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We see that stc carries 7498 time series of 25 samples each. Those time series belong to 7498 vertices, which in turn represent locations on the cortical surface. So where do those vertex values come from? FreeSurfer separates both hemispheres and creates surface representations for the left and right hemisphere. Indices...
shape_lh = stc.lh_data.shape

print('The left hemisphere has %s vertex locations with %s sample points each.' % shape_lh)
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Since we did not change the time representation, only the selected subset of vertices and hence only the row size of the matrix changed. We can check if the rows of stc.lh_data and stc.rh_data sum up to the value we had before.
is_equal = stc.lh_data.shape[0] + stc.rh_data.shape[0] == stc.data.shape[0]

print('The number of vertices in stc.lh_data and stc.rh_data do ' +
      ('not ' if not is_equal else '') +
      'sum up to the number of rows in stc.data')
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Indeed, and as the mindful reader already suspected, the same can be said about vertices. stc.lh_vertno thereby maps to the left and stc.rh_vertno to the right inflated surface representation of FreeSurfer. Relationship to SourceSpaces (src) As mentioned above, :class:src &lt;mne.SourceSpaces&gt; carries the mapping fro...
peak_vertex, peak_time = stc.get_peak(hemi='lh', vert_as_index=True, time_as_index=True)
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The first value is the index of the vertex within stc.lh_vertno, and the second the index of the time point within stc.lh_data. We can use this information to get the surface vertex corresponding to the peak and its value.
peak_vertex_surf = stc.lh_vertno[peak_vertex]
peak_value = stc.lh_data[peak_vertex, peak_time]
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
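The index bookkeeping above can be illustrated without MNE: given a (vertices x time points) matrix and a vertex-number lookup, finding the peak and mapping it back to a surface vertex is a few lines of plain Python. The data below are invented for illustration:

```python
# Invented (vertices x time points) activation matrix and vertex numbers.
lh_data = [
    [0.1, 0.4, 0.2],
    [0.3, 0.9, 0.5],  # the peak lives in this row
    [0.2, 0.1, 0.0],
]
lh_vertno = [12, 57, 301]  # hypothetical FreeSurfer vertex numbers

# Find the (row, column) of the maximum value, like
# stc.get_peak(vert_as_index=True, time_as_index=True).
peak_vertex, peak_time = max(
    ((i, j) for i, row in enumerate(lh_data) for j in range(len(row))),
    key=lambda ij: lh_data[ij[0]][ij[1]],
)

peak_vertex_surf = lh_vertno[peak_vertex]  # surface vertex number
peak_value = lh_data[peak_vertex][peak_time]
print(peak_vertex_surf, peak_value)  # 57 0.9
```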
Let's visualize this as well, using the same surfer_kwargs as in the beginning.
brain = stc.plot(**surfer_kwargs)

# We add the new peak coordinate (as vertex index) as an annotation dot
brain.add_foci(peak_vertex_surf, coords_as_verts=True, hemi='lh', color='blue')

# We add a title as well, stating the amplitude at this time and location
brain.add_text(0.1, 0.9, 'Peak coordinate', 'title', font_...
0.17/_downloads/0d18b25bb9787426fc4d02b53e22d813/plot_object_source_estimate.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
AI Explanations: Deploying an image model <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ml-on-gcp/blob/master/tutorials/explanations/ai-explanations-image.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run ...
PROJECT_ID = "[your-project-id]"
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Authenticate your GCP account If you are using AI Platform Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
import sys, os
import warnings

import googleapiclient

warnings.filterwarnings('ignore')
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

# If you are running this notebook in Colab, follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training job...
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. AI Platform runs the code from this package. In this tutorial, AI Platform als...
BUCKET_NAME = PROJECT_ID + "_flowers_model"
REGION = "us-central1"
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Import libraries Import the libraries we'll be using in this tutorial. This tutorial has been tested with TensorFlow versions 1.14 and 1.15.
import math, json, random

import numpy as np
import PIL
import tensorflow as tf
from matplotlib import pyplot as plt
from base64 import b64encode

print("Tensorflow version " + tf.__version__)

AUTO = tf.data.experimental.AUTOTUNE
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Downloading and preprocessing the flowers dataset In this section you'll download the flower images (in this dataset they are TFRecords), use the tf.data API to create a data input pipeline, and split the data into training and validation sets.
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 32
VALIDATION_SPLIT = 0.19
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']  # do not change, maps to the labels in the data (folder names)

# Split data files between training and validation fil...
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
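The training/validation split works on the TFRecord file names, not on individual images. A minimal sketch of the split arithmetic, assuming a hypothetical shard count (the real notebook lists files from GCS_PATTERN):

```python
VALIDATION_SPLIT = 0.19
filenames = ['flowers-%02d.tfrec' % i for i in range(16)]  # pretend 16 shards

# Take the first fraction of files for validation, the rest for training.
split = int(len(filenames) * VALIDATION_SPLIT)
validation_filenames = filenames[:split]
training_filenames = filenames[split:]

print(len(training_filenames), len(validation_filenames))  # 13 3
```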
The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model. If you're running this from Colab the cell is hidden. You can look at the code by right clicking on the cell --> "Form" --> "Show form" if you'd like to see it.
#@title display utilities [RUN ME]

def dataset_to_numpy_util(dataset, N):
    dataset = dataset.batch(N)
    if tf.executing_eagerly():
        # In eager mode, iterate in the Dataset directly.
        for images, labels in dataset:
            numpy_images = images.numpy()
            numpy_labels = labels.numpy()
            break
    el...
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
Read images and labels from TFRecords
def read_tfrecord(example):
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),   # tf.string means bytestring
        "class": tf.io.FixedLenFeature([], tf.int64),    # shape [] means scalar
        "one_hot_class": tf.io.VarLenFeature(tf.float32),
    }
    example = tf.parse_single_example(example, f...
notebooks/samples/explanations/ai-explanations-image.ipynb
GoogleCloudPlatform/ai-platform-samples
apache-2.0
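The one_hot_class feature stores each label as a one-hot vector over the five classes. How such a vector relates to the class index can be sketched in plain Python; the helper below is illustrative only and is not part of the notebook:

```python
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

def one_hot(class_index, num_classes):
    # Plain-Python stand-in for the one-hot encoding stored in the TFRecords.
    return [1.0 if i == class_index else 0.0 for i in range(num_classes)]

label = one_hot(CLASSES.index('roses'), len(CLASSES))
print(label)  # [0.0, 0.0, 1.0, 0.0, 0.0]
```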