Submit the pipeline for execution. Pipelines groups runs using experiments, so before you submit a pipeline you need to create a new experiment or pick an existing one. Once you have compiled a pipeline, you can use the pipelines SDK to submit it.
EXPERIMENT_NAME = 'MockupModel'
# Specify pipeline argument values
arguments = {}
# Get or create an experiment and submit a pipeline run
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
# Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(expe...
xgboost_synthetic/build-train-deploy.ipynb
kubeflow/examples
apache-2.0
Training loop
epoch = 3
for e in range(epoch):
    train_loss = 0.
    acc = mx.gluon.metric.Accuracy()
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx)
        label = label.as_in_context(ctx)
        with mx.autograd.record():
            output = net(data)
            l = loss(out...
example/adversary/adversary_generation.ipynb
leezu/mxnet
apache-2.0
Now we perturb the input:
data_perturbated = data + 0.15 * mx.nd.sign(data.grad)
output = net(data_perturbated)
acc = mx.gluon.metric.Accuracy()
acc.update(label, output)
print("Validation batch accuracy after perturbation {}".format(acc.get()[1]))
In the last chapter, we used a linear combination of independent variables to predict the mean of a dependent variable. We assumed the dependent variable was, in general, Gaussian-distributed, and we also explored what happened when we relaxed that condition and used a Student's t distribution. In this c...
z = np.linspace(-6, 6)
logística = 1 / (1 + np.exp(-z))
plt.plot(z, logística)
plt.xlabel('z')
plt.ylabel('logística(z)')
plt.title('Figure 4.1');
04_Generalizando_modelos_lineales.ipynb
aloctavodia/EBAD
gpl-3.0
The second step is to use a binomial distribution, rather than a Gaussian, as the likelihood. The model is then expressed as: $$\theta = \text{logistic}(\alpha + x\beta) \\ y = \text{Bern}(\theta) \tag{4.3}$$ This model can be motivated as follows. If our data are binary, $y \in \{0, 1\}$, c...
iris = pd.read_csv('datos/iris.csv')
iris.head()
Now we will plot the three species against sepal length using seaborn's stripplot function:
sns.stripplot(x="species", y="sepal_length", data=iris, jitter=True)
plt.title('Figure 4.2');
Note in Figure 4.2 that the y axis represents a continuous variable while the x axis is categorical. The dispersion (or jitter) of the points along the x axis has no meaning; it is just a trick to prevent all the points from collapsing onto a single line (you can try passing...
sns.pairplot(iris, hue='species')
plt.title('Figure 4.3');
Before continuing, take some time to study the previous plots and familiarize yourself with the dataset and with how the dependent and independent variables relate. The logistic model applied to the iris dataset. We will begin with the simplest possible logistic regression: two clas...
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
x_c = x_0 - x_0.mean()
As with other linear models, centering the data can help with sampling. Now that we have the data in the right format, we can finally build the model with PyMC3. Note how the first part of the following model resembles a linear regression model. This model has two variables...
with pm.Model() as modelo_0:
    α = pm.Normal('α', mu=0, sd=10)
    β = pm.Normal('β', mu=0, sd=10)
    μ = α + pm.math.dot(x_c, β)
    θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
    bd = pm.Deterministic('bd', -α/β)
    yl = pm.Bernoulli('yl', p=θ, observed=y_0)
    trace_0 = pm.sample(1000)
varnam...
As usual, we also show a summary of the posterior. Later, we will compare the value we obtain for the decision boundary with a value computed using another method.
az.summary(trace_0, varnames)
Now let's plot the data together with the fitted sigmoid curve:
theta = trace_0['θ'].mean(axis=0)
idx = np.argsort(x_c)
plt.figure(figsize=(10, 6))
plt.plot(x_c[idx], theta[idx], color='C2', lw=3);
plt.vlines(trace_0['bd'].mean(), 0, 1, color='k')
bd_hpd = az.hpd(trace_0['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='k', alpha=0.5)
plt.scatter(x_c, np.random.normal(...
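The decision boundary stored as bd in modelo_0 follows from asking where $\theta = 0.5$; the logistic equals one half exactly when its argument is zero:

```latex
\theta = \mathrm{logistic}(\alpha + \beta x) = \tfrac{1}{2}
\;\Longleftrightarrow\;
\alpha + \beta x = 0
\;\Longleftrightarrow\;
x = -\frac{\alpha}{\beta}
```

This is the quantity drawn as the vertical line in the plot above.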
Figure 4.4 shows the sepal length for the species (setosa = 0, versicolor = 1). To mitigate the overlap of the data, we added noise (jitter) to the binary response variables. An S-shaped green line represents the mean value of $\theta$. This line can be interpreted as the probabil...
df = iris.query("species == ('setosa', 'versicolor')")
y_1 = pd.Categorical(df['species']).codes
x_n = ['sepal_length', 'sepal_width']
#x_n = ['petal_length', 'petal_width']
x_1 = df[x_n].values
The decision boundary. Feel free to skip this section and go straight to the model implementation if you are not too interested in how we can derive the decision boundary. From the model, we have: $$\theta = \text{logistic}(\alpha + \beta_1 x_1 + \beta_2 x_2) \tag{4.7}$$ And from the definition of the functi...
with pm.Model() as modelo_1:
    α = pm.Normal('α', mu=0, sd=10)
    β = pm.Normal('β', mu=0, sd=2, shape=len(x_n))
    μ = α + pm.math.dot(x_1, β)
    θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
    bd = pm.Deterministic('bd', -α/β[1] - β[0]/β[1] * x_1[:,0])
    yl = pm.Bernoulli('yl', p=θ, observed...
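The truncated derivation above completes the same way as in the single-predictor case: setting the argument of the logistic in Equation 4.7 to zero and solving for $x_2$ gives the straight line that modelo_1 stores as bd:

```latex
\alpha + \beta_1 x_1 + \beta_2 x_2 = 0
\;\Longrightarrow\;
x_2 = -\frac{\alpha}{\beta_2} - \frac{\beta_1}{\beta_2}\, x_1
```

Term by term, this is exactly the expression -α/β[1] - β[0]/β[1] * x_1[:,0] in the code.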
As we did for a single predictor variable, let's plot the data and the decision boundary.
idx = np.argsort(x_1[:,0])
bd = trace_1['bd'].mean(0)[idx]
plt.scatter(x_1[:,0], x_1[:,1], c=[f'C{x}' for x in y_0])
plt.plot(x_1[:,0][idx], bd, color='k');
bd_hpd = az.hpd(trace_1['bd'])[idx]
plt.fill_between(x_1[:,0][idx], bd_hpd[:,0], bd_hpd[:,1], color='k', alpha=0.5);
plt.xlabel(x_n[0])
plt.ylabel(x_n[1...
The decision boundary is a straight line, as we have already seen. Don't be confused by the curved look of the 94% HPD band. The apparent curvature is the result of having multiple lines pivoting around a central region (roughly around the mean of x and the mean of y). Interpreting the coe...
probability = np.linspace(0.01, 1, 100)
odds = probability / (1 - probability)
_, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(probability, odds, 'C0')
ax2.plot(probability, np.log(odds), 'C1')
ax1.set_xlabel('probability')
ax1.set_ylabel('odds', color='C0')
ax2.set_ylabel('log-odds', color='C1')
ax1.grid(False)
a...
Therefore, the coefficient values reported by summary are on the log-odds scale.
df = az.summary(trace_1, varnames)
df
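Since the summary reports coefficients on the log-odds scale, here is a minimal sketch (plain Python, independent of the notebook's fitted values) of moving back and forth between the two scales:

```python
import math

def logistic(z):
    """Map log-odds back to a probability (inverse of the logit)."""
    return 1 / (1 + math.exp(-z))

def log_odds(p):
    """Map a probability to the log-odds scale."""
    return math.log(p / (1 - p))

# The two functions are inverses: a coefficient expressed in log-odds
# units can always be pushed through `logistic` to read it as a probability.
p_half = logistic(0.0)              # log-odds of 0 corresponds to probability 0.5
round_trip = logistic(log_odds(0.9))  # recovers 0.9
```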
A very empirical way to understand models is to change the parameters and see what happens. In the following code block, we compute the log-odds in favor of versicolor as $\text{log\_odds\_versicolor}_i = \alpha + \beta_1 x_1 + \beta_2 x_2$, and then the probability of versicolor with the logistic function. We then repea...
x_1 = 4.5  # sepal_length
x_2 = 3    # sepal_width
log_odds_versicolor_i = (df['mean'] * [1, x_1, x_2]).sum()
probability_versicolor_i = logistic(log_odds_versicolor_i)
log_odds_versicolor_f = (df['mean'] * [1, x_1 + 1, x_2]).sum()
probability_versicolor_f = logistic(log_odds_versicolor_f)
(f'{log_odds_versicolor_f ...
If you run the code, you will find that the increase in log-odds is $\approx 4.7$, which is exactly the value of $\beta_0$ (check the summary for trace_1). This is in line with our earlier finding that the $\beta$ coefficients indicate the increase in log-odds units per unit increase of...
corr = iris[iris['species'] != 'virginica'].corr()
mask = np.tri(*corr.shape).T
sns.heatmap(corr.abs(), mask=mask, annot=True, cmap='viridis')
plt.title('Figure 4.7');
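Saying that a unit increase in a predictor adds $\beta$ to the log-odds is equivalent to saying that it multiplies the odds by $e^{\beta}$. A small sketch with made-up values of $\alpha$ and $\beta$ (hypothetical, not the fitted posterior means):

```python
import math

alpha, beta = -2.0, 4.7   # hypothetical intercept and slope on the log-odds scale

def odds(x):
    # odds = exp(log-odds) = exp(alpha + beta * x)
    return math.exp(alpha + beta * x)

# A one-unit increase in x multiplies the odds by exp(beta),
# no matter where on the x axis the increase happens.
ratio = odds(1.0) / odds(0.0)
```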
To generate Figure 4.7, we used a mask that removes the upper triangle and the diagonal elements of the heatmap, since these are uninformative or redundant. Note also that we plotted the absolute value of the correlation, since at this point we do not care about the sign of the correlation...
df = iris.query("species == ('setosa', 'versicolor')")
df = df[45:]
y_3 = pd.Categorical(df['species']).codes
x_n = ['sepal_length', 'sepal_width']
x_3 = df[x_n].values
And now we run a multiple logistic regression, just as we did before.
with pm.Model() as modelo_3:
    α = pm.Normal('α', mu=0, sd=10)
    β = pm.Normal('β', mu=0, sd=2, shape=len(x_n))
    μ = α + pm.math.dot(x_3, β)
    θ = pm.math.sigmoid(μ)
    bd = pm.Deterministic('bd', -α/β[1] - β[0]/β[1] * x_3[:,0])
    yl = pm.Bernoulli('yl', p=θ, observed=y_3)
    trace_3 = ...
The decision boundary shifts toward the less abundant class, and the uncertainty is larger than before. This is the typical behavior of a logistic model for unbalanced data. But wait a minute! You could well argue that I am cheating you here, since the greater uncertainty is actually the product of...
idx = np.argsort(x_3[:,0])
bd = trace_3['bd'].mean(0)[idx]
plt.scatter(x_3[:,0], x_3[:,1], c=[f'C{x}' for x in y_3])
plt.plot(x_3[:,0][idx], bd, color='k');
bd_hpd = pm.hpd(trace_3['bd'])[idx]
plt.fill_between(x_3[:,0][idx], bd_hpd[:,0], bd_hpd[:,1], color='k', alpha=0.5);
plt.xlabel(x_n[0])
plt.ylabel(x_n[...
What should we do if we encounter unbalanced data? Well, the obvious solution is to obtain a dataset with approximately the same number of cases per class. This is a point to keep in mind when collecting or generating data. If you have no control over the dataset, be careful when interpreting the result...
iris = sns.load_dataset('iris')
y_s = pd.Categorical(iris['species']).codes
x_n = iris.columns[:-1]
x_s = iris[x_n].values
x_s = (x_s - x_s.mean(axis=0)) / x_s.std(axis=0)
The PyMC3 code reflects the few changes between the logistic model and the softmax model. Pay attention to the shape values for the coefficients $\alpha$ and $\beta$. In the following code we use Theano's softmax function. We used the expression import theano.tensor as tt, which is the convention used...
with pm.Model() as modelo_s:
    α = pm.Normal('α', mu=0, sd=5, shape=3)
    β = pm.Normal('β', mu=0, sd=5, shape=(4,3))
    μ = pm.Deterministic('μ', α + pm.math.dot(x_s, β))
    θ = tt.nnet.softmax(μ)
    yl = pm.Categorical('yl', p=θ, observed=y_s)
    trace_s = pm.sample(2000)
az.plot_forest(trace_s, var_names=['α...
How well does our model work? Let's find out how many cases we can predict correctly. In the following code, we use only the mean of the parameters to compute the probability that each data point belongs to each of the three classes, then assign the class using the argmax function. And we compar...
data_pred = trace_s['μ'].mean(0)
y_pred = [np.exp(point)/np.sum(np.exp(point), axis=0) for point in data_pred]
f'{np.sum(y_s == np.argmax(y_pred, axis=1)) / len(y_s):.2f}'
The result is that we correctly classify $\approx 98\%$ of the data; that is, we misclassify only three cases. That is really a very good job. Nevertheless, a true test of our model's performance would be to check it against a dataset not used to fit the mode...
with pm.Model() as modelo_sf:
    α = pm.Normal('α', mu=0, sd=2, shape=2)
    β = pm.Normal('β', mu=0, sd=2, shape=(4,2))
    α_f = tt.concatenate([[0], α])
    β_f = tt.concatenate([np.zeros((4,1)), β], axis=1)
    μ = α_f + pm.math.dot(x_s, β_f)
    θ = tt.nnet.softmax(μ)
    yl = pm.Categorical('yl', p=θ, observed=...
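The reason modelo_sf can pin one class's parameters at zero without losing expressiveness is that softmax is invariant to adding a constant to every input; the shift cancels in the ratio. A NumPy sketch of that invariance:

```python
import numpy as np

def softmax(z):
    # Subtracting any constant (here the max, for numerical stability)
    # leaves the probabilities unchanged, because it cancels in the ratio.
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0])
same = np.allclose(softmax(z), softmax(z + 10.0))  # shift-invariance
```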
Linear discriminant analysis (LDA). So far we have discussed logistic regression and some extensions of it. In all these cases, we computed $p(y \mid x)$, that is, the probability of a class $y$ given one or more variables $x$; we then used a threshold or boundary to turn the probabilit...
with pm.Model() as modelo_lda:
    μ = pm.Normal('μ', mu=0, sd=10, shape=2)
    σ = pm.HalfNormal('σ', 10)
    setosa = pm.Normal('setosa', mu=μ[0], sd=σ, observed=x_0[:50])
    versicolor = pm.Normal('versicolor', mu=μ[1], sd=σ, observed=x_0[50:])
    bd = pm.Deterministic('bd', (μ[0] + μ[1]) / 2)
    trace_lda = pm.s...
Now let's generate a figure showing the two classes (setosa = 0 and versicolor = 1) against the sepal length values, together with the decision boundary as an orange line and the 94% HPD interval as a semi-transparent orange band.
plt.axvline(trace_lda['bd'].mean(), ymax=1, color='C1')
bd_hpd = pm.hpd(trace_lda['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='C1', alpha=0.5)
plt.plot(x_0, np.random.normal(y_0, 0.02), '.', color='k')
plt.ylabel('θ', rotation=0)
plt.xlabel('sepal_length')
plt.title('Figure 4.9');
As you may have noticed, Figure 4.9 is quite similar to Figure 4.4. Also check the decision-boundary values in the following summary:
az.summary(trace_lda)
Both the LDA model and logistic regression give similar results. The linear discriminant model can be extended to more than one feature by modeling the classes as multivariate Gaussians. It is also possible to relax the assumption that the classes share a common variance (or covariance). This...
mu_params = [0.5, 1.5, 3, 8]
x = np.arange(0, max(mu_params) * 3)
for mu in mu_params:
    y = stats.poisson(mu).pmf(x)
    plt.plot(x, y, 'o-', label=f'μ = {mu:3.1f}')
plt.legend()
plt.xlabel('x')
plt.ylabel('f(x)')
plt.title('Figure 4.10')
plt.savefig('B11197_04_10.png', dpi=300);
It is important to note that $\mu$ can be a float, but the distribution models the probability of a discrete number of events. In Figure 4.10, the points represent the values of the distribution, while the solid lines are a visual aid to help us easily grasp the shape of the distributi...
#np.random.seed(42)
n = 100
θ_real = 2.5
ψ = 0.1
# Simulate some data
counts = np.array([(np.random.random() > (1-ψ)) * np.random.poisson(θ_real) for i in range(n)])
with pm.Model() as ZIP:
    ψ = pm.Beta('ψ', 1., 1.)
    θ = pm.Gamma('θ', 2., 0.1)
    y = pm.ZeroInflatedPoisson('y', ψ, θ, observe...
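The zero-inflated Poisson used above mixes a point mass at zero with a Poisson; its pmf can be written down in a few lines (following the parameterization in the simulation, where ψ is the probability of the Poisson process):

```python
import math

def zip_pmf(k, psi, theta):
    """Zero-inflated Poisson pmf: with probability 1 - psi emit a structural
    zero, with probability psi draw k from Poisson(theta)."""
    poisson = math.exp(-theta) * theta**k / math.factorial(k)
    return (1 - psi) * (k == 0) + psi * poisson

# With the simulation's values psi = 0.1 and theta = 2.5, zeros dominate:
p_zero = zip_pmf(0, 0.1, 2.5)
```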
Poisson regression and ZIP regression. The ZIP model may seem a bit boring, but sometimes we need to estimate simple distributions like this one, or others such as the Poisson or Gaussian distributions. Moreover, we can use the Poisson or ZIP distributions as part of a linear model. As we saw with the l...
fish_data = pd.read_csv('datos/fish.csv')
I leave it as an exercise for you to explore the dataset using plots and/or a Pandas function such as describe(). For now we will continue by translating the previous Kruschke diagram into PyMC3:
with pm.Model() as ZIP_reg:
    ψ = pm.Beta('ψ', 1, 1)
    α = pm.Normal('α', 0, 10)
    β = pm.Normal('β', 0, 10, shape=2)
    θ = pm.math.exp(α + β[0] * fish_data['child'] + β[1] * fish_data['camper'])
    yl = pm.ZeroInflatedPoisson('yl', ψ, θ, observed=fish_data['count'])
    trace_ZIP_reg = pm.sample(1000)
az.plot...
To better understand the results of our inference, let's make a plot.
children = [0, 1, 2, 3, 4]
fish_count_pred_0 = []
fish_count_pred_1 = []
for n in children:
    without_camper = trace_ZIP_reg['α'] + trace_ZIP_reg['β'][:,0] * n
    with_camper = without_camper + trace_ZIP_reg['β'][:,1]
    fish_count_pred_0.append(np.exp(without_camper))
    fish_count_pred_1.append(np.exp(with_campe...
Robust logistic regression. We just saw how to correct an excess of zeros without directly modeling the factor that generates them. A similar approach, suggested by Kruschke, can be used to perform a more robust version of logistic regression. Recall that in logistic regression we model the data as bi...
iris = sns.load_dataset("iris")
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
y_0 = np.concatenate((y_0, np.ones(6, dtype=int)))
x_0 = np.concatenate((x_0, [4.2, 4.5, 4.0, 4.3, 4.2, 4.4]))
x_c = x_0 - x_0.mean()
plt.pl...
Here we have some versicolors (1s) with an unusually short sepal length. We can fix this with a mixture model. We will say that the output variable comes, with probability $\pi$, from random guessing, or, with probability $1-\pi$, from a logistic regression model. Mathematically, we have: $$p...
with pm.Model() as modelo_rlg:
    α = pm.Normal('α', mu=0, sd=10)
    β = pm.Normal('β', mu=0, sd=10)
    μ = α + x_c * β
    θ = pm.Deterministic('θ', pm.math.sigmoid(μ))
    bd = pm.Deterministic('bd', -α/β)
    π = pm.Beta('π', 1, 1)
    p = π * 0.5 + (1 - π) * θ
    yl = pm.Bernoulli('yl', p=p,...
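The mixture line p = π * 0.5 + (1 - π) * θ is what makes the model robust: it keeps the predicted probability away from 0 and 1, so no single mislabeled point can dominate the likelihood. A quick sketch of the bound (π = 0.2 is an illustrative value, not the posterior mean):

```python
def robust_p(theta, pi):
    # Mixture of random guessing (probability 1/2) and the logistic output.
    # For any theta in [0, 1], the result stays inside [pi/2, 1 - pi/2].
    return pi * 0.5 + (1 - pi) * theta

pi = 0.2
lo = robust_p(0.0, pi)   # smallest reachable probability: pi / 2
hi = robust_p(1.0, pi)   # largest reachable probability: 1 - pi / 2
```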
2D trajectory interpolation The file trajectory.npz contains 3 NumPy arrays that describe a 2d trajectory of a particle as a function of time: t which has discrete values of time t[i]. x which has values of the x position at those times: x[i] = x(t[i]). y which has values of the y position at those times: y[i] = y(t[i...
f = np.load('trajectory.npz')
x = f['x']
y = f['y']
t = f['t']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
assignments/assignment08/InterpolationEx01.ipynb
rsterbentz/phys202-2015-work
mit
Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays: newt which has 200 points between ${t_{min},t_{max}}$. newx which has the interpolated values of $x(t)$ at those times. newy which has the interpolated values of $y(t)$ at those times.
x_approx = interp1d(t, x, kind='cubic')
y_approx = interp1d(t, y, kind='cubic')
newt = np.linspace(t.min(), t.max(), 200)
newx = x_approx(newt)
newy = y_approx(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy...
Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points: use a solid line for the interpolated points and circles of a different color, with no line, for the original points. Customize your plot to make it effective and beautiful.
fig = plt.figure(figsize=(7,7))
plt.plot(newx, newy, marker='.')
plt.plot(x, y, 'ro')
plt.xticks([-1.0, -0.5, 0.0, 0.5, 1.0])
plt.yticks([-1.0, -0.5, 0.0, 0.5, 1.0])
plt.xlabel('x(t)')
plt.ylabel('y(t)')
assert True  # leave this to grade the trajectory plot
Value noise We start with a function ${f_1}$ that takes an integer ${x}$ and produces a value ${v}$ such that $0 \le v \le 1$. First, we need a seeded table ${r}$ of random values between 0 and 1:
r = np.random.ranf(4)
value_noise.ipynb
basp/aya
mit
Actually, let's define a seed function that initializes this table using a seed number and a sample size:
def seed(n, size=4):
    global r
    np.random.seed(n)
    r = np.random.ranf(size)
And now we can define ${f_1}$ using this table:
def f1(x):
    x = int(x % len(r))
    return r[x]

seed(0, 4)
x = np.linspace(0, 15, 16)
y = [f1(x) for x in x]
plt.plot(x, y, 'bo')
With a small ${r}$ table we also get a very small period. You can see that ${f_1}$ repeats every 4 units along the x-axis. Let's fix this b...
seed(0, 8)
y = [f1(x) for x in x]
plt.plot(x, y, 'bo')
Which already looks a lot more random. Now we repeat every 8 units along the x-axis. We will get back to the number of samples later; for now we'll keep it small to make things more manageable. We're also deliberately plotting the function as dots instead of a continuous line because we still haven't figured out how to,...
v0 = 2
v1 = 5
plt.plot([0,1], [v0,v1])
Now let's say we want to find out what the ${y}$ value is when ${x} = 1/3$. Considering this line for a moment, we notice it starts at ${y} = 2$ where ${x} = 0$. So let's start with a function ${f_1}$ that describes this: ${f_1}(x) = x + 2$. This works for ${x} = 0$ but not for ${x} = 1$: ${f_1}(0) ...
def f1(x):
    return 3 * x + 2

x = np.linspace(0, 1, 10)
y = [f1(x) for x in x]
plt.plot(x, y)
It would be nice to have a more generic version of this function. This is pretty easy though: ${f_2}(v_0, v_1, t) = v_0 + (v_1 - v_0)t$
def f2(v0, v1, t):
    return v0 + (v1 - v0) * t

y = [f2(v0, v1, t) for t in x]
plt.plot(x, y)
In theory we now could use any ${v_0}$ and ${v_1}$ and interpolate between them. Let's try ${v_0} = -3$ and ${v_1} = 2$:
v0, v1 = -3, 2
y = [f2(v0, v1, t) for t in x]
plt.plot(x, y)
Correlation Matrix Calling df.corr() on a full pandas DataFrame returns a square matrix containing all pairs of correlations. By plotting them as a heatmap, you can visualize many correlations more efficiently. Correlation matrix with two perfectly correlated features
df = x_plus_noise(randomness=0)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
course/class1/correlation/examples/01 - correlation matrix and heatmap.ipynb
hershaw/data-science-101
mit
Correlation matrix with mildly-correlated features
df = x_plus_noise(randomness=0.5)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
Correlation matrix with not-very-correlated features
df = x_plus_noise(randomness=1)
sns.heatmap(df.corr(), vmin=0, vmax=1)
df.corr()
Inspired by the classifier comparison in the scikit-learn examples, we are trying to see which algorithm works better. Because the data is heavy, we skip checking the linear and RBF SVMs.
X, y, TEST_X = sam_pickle_load(prefix="tmp/Iteration2_vt_kb_")
df_check_stats(X, y, TEST_X)
clf = RandomForestClassifier(random_state=192)
scores = cross_val_score(clf, X, y, cv=5, n_jobs=-1)
print('AC Score:', scores.mean())
# preprocess dataset, split into training and test part
X_train, X_test, y_train, y_test =...
pumpit/PumpIt-04.ipynb
msampathkumar/datadriven_pumpit
apache-2.0
Administrative Stuff Connect to the Jupyter server that I have created on Amazon EC2. The URL is: https://54.172.168.134:8888/ and the password is: "LifeIsGood!" Once in, create a new notebook, click "Untitled" at the top, and rename it to your own name. Learning Outcomes: By the end of this section, you wil...
null_flips = binom.rvs(n=20, p=0.5, size=10000)
plt.hist(null_flips)
plt.axvline(16)
alpha = 5 / 100
null_flips = binom.rvs(n=20, p=0.5, size=10000)
plt.hist(null_flips)
plt.axvline(16)
sum(null_flips >= 16) / 10000
Inferential Statistics.ipynb
ericmjl/be-stats-iap2016
mit
Our test here tells us whether it is possible for us to see X number of heads or more under randomness. Let us count up the number of times we see 16 or more heads under the null model.
num_heads = sum(null_flips >= 16)
num_heads
Finally, let's calculate the probability of seeing this number by random chance.
p_value = num_heads / len(null_flips)
p_value
The probability of seeing this under a null model of p=0.5 is very low (approximately 1 in 200). Therefore, it is likely the case that the coin is biased. Exercise 1 Epidemiologists have been longitudinally measuring the rate of cancer incidence in a population of humans. Over the first 10 years, cancer rates were 10 p...
null_poisson = poisson.rvs(10, size=100000)
sum(null_poisson >= 14) / 100000
Visualize this using the plot below.
plt.hist(null_poisson)
plt.axvline(14)
Exercise 2 In your average bag of M&Ms, there are on average 30 brown, 20 yellow, 20 red, 10 orange, 10 green, and 10 blue out of every 100 M&M candies. I draw out 50 candies, and find that I have 22 brown. Is this bag an "average" bag?
null_hypergeom = hypergeom.rvs(M=100, n=30, N=50, size=10000)
sum(null_hypergeom >= 22) / 10000
Once again, plot this below.
plt.hist(null_hypergeom)
plt.axvline(22)
Exercise 3 Systolic blood pressure in healthy patients is Normally distributed with mean 122 mmHg and std. deviation 9 mmHg. In hypertensive patients, it is Normally distributed with mean 140 mmHg and std. deviation 6 mmHg. (I pulled these numbers out of a hat - http://www.cdc.gov/nchs/data/nhsr/nhsr035.pdf) A patient comes in and me...
null_healthy = norm.rvs(122, 9, size=100000)
null_hyper = norm.rvs(140, 6, size=100000)
sns.kdeplot(null_healthy)
sns.kdeplot(null_hyper)
plt.axvline(130)
# P-value calculation under healthy hypothesis
np.sum(null_healthy >= 130) / len(null_healthy)
# P-value calculation under hypertension hypothesis
np.sum(null_hyp...
Example 2: Difference between two experimental groups. You've measured enzyme activity under two inhibitors and a control without an inhibitor. Is the difference a true difference, or could we have observed this purely by random chance? Take a look at the data below.
# inhibitor1 = norm.rvs(70, 22, size=20)
# inhibitor2 = norm.rvs(72, 10, size=30)
# control = norm.rvs(75, 20, size=15)
# data = pd.DataFrame([inhibitor1, inhibitor2, control]).T
# data.columns = ['Inhibitor 1', 'Inhibitor 2', 'Control']
# data.to_csv('drug_inhibitor.csv')
data = pd.read_csv('drug_inhibitor.csv', index...
Okay, so here's the important, potentially billion-dollar market question: Is inhibitor 1 any good? Basically, the question we're asking is this: Is there a difference between the enzyme's activity under inhibitor 1 vs. the control? Here are a few problems we face here: We don't know what distribution the inhibito...
import numpy as np
differences = []
for i in range(1000):
    # Select out the two columns of data of interest
    means = data[['Inhibitor 1', 'Control']]
    # Select
    means = means.apply(np.random.permutation, axis=1).mean()
    difference = means['Control'] - means['Inhibitor 1']
    differences.append(differen...
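The truncated permutation loop above can be sketched end-to-end with synthetic numbers (the measurements below are made up, not read from drug_inhibitor.csv): shuffle the pooled values to break any real group structure, recompute the difference of means, and compare the observed difference against that null distribution.

```python
import numpy as np

rng = np.random.default_rng(42)
inhibitor = rng.normal(70, 20, size=20)   # synthetic measurements
control = rng.normal(75, 20, size=15)

observed = control.mean() - inhibitor.mean()
pooled = np.concatenate([inhibitor, control])

diffs = []
for _ in range(1000):
    rng.shuffle(pooled)                   # destroy any real group structure
    diffs.append(pooled[len(inhibitor):].mean() - pooled[:len(inhibitor)].mean())

# one-sided p-value: fraction of shuffles at least as extreme as observed
p_value = np.mean(np.array(diffs) >= observed)
```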
The probability of seeing this much difference or more, assuming that the inhibitor treatment has no effect, is about 3 out of 20 times. We might not be so convinced that this is a good drug. Notes Once again, we have chosen a relevant statistic (difference of means), and computed its distribution under a "null" distributio...
sample = norm.rvs(10, 0.5, size=6)  # we will draw 6 instances from the normal distribution.
sns.kdeplot(sample, color='blue')
population = norm.rvs(10, 0.5, size=10000)
sns.kdeplot(sample, color='blue', label='sample')
sns.kdeplot(population, color='red', label='population')
plt.legend()
Create a new distribution called fitted, and tweak its parameters so that it fits the sample distribution. Discussion What are your parameter values? What's happening here? Isn't the sample drawn from the same distribution as the population? How could the sample parameters and population parameters be different? Th...
xs = range(1, 7)  # numbers 1 through 6
ps = [0.4, 0.1, 0, 0.2, 0.3, 0]
crazy_dice = rv_discrete(name='crazy_dice', values=[xs, ps])
Instructor demo: Let's visualize this distribution.
crazy_dice.pmf(xs)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(xs, crazy_dice.pmf(xs), 'ro', ms=12)
ax.vlines(xs, [0]*len(xs), crazy_dice.pmf(xs), color='red', lw=4)
With this dice, we are essentially saying that we expect the dice to throw 1s (40% of the time), 5s (30% of the time), 4s (20% of the time), and 2s (10% of the time), with no expectation of getting 3s and 6s. Now, let's roll the dice 4 times. What's the mean?
# Class activity
sample1 = crazy_dice.rvs(size=50)
sns.kdeplot(sample1)
np.mean(sample1)
Roll the dice 4 times again. What's the mean?
# Class activity
sample2 = crazy_dice.rvs(size=4)
np.mean(sample2)
Roll the dice 4 times, but repeat 1000 times now. What's the distribution of the means?
# Class activity
means = []
for i in range(1000):
    mean = np.mean(crazy_dice.rvs(size=4))
    means.append(mean)

# plt.hist(np.array(means), color='green')
sns.kdeplot(np.array(means), color='blue')  # plot the density plot
sns.kdeplot(norm.rvs(2.9, 0.6, size=100000), color='red')  # plot an approximation of the...
<h4>Demo of scikit - Underfit, Normal, Overfit example</h4> Purpose: Demonstrate how higher-order polynomials can fit a complex non-linear shape. This demo contains the AWS ML equivalent of the underfitting vs. overfitting example described here:<br> http://scikit-learn.org/stable/auto_examples/model_selection/plot_underfi...
# Function to generate target value for a given x.
true_func = lambda X: np.cos(1.5 * np.pi * X)
np.random.seed(0)
# Training Set: No. of random samples used for training the model
n_samples = 30
x = np.sort(np.random.rand(n_samples))
y = true_func(x) + np.random.randn(n_samples) * 0.1
# Test Set: 100 samples for whi...
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
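The same experiment can be reproduced locally with `numpy.polyfit` (the notebook itself reads predictions produced by AWS Machine Learning models, which is an assumption this sketch replaces):

```python
import numpy as np

# Same data-generation setup as the notebook
true_func = lambda X: np.cos(1.5 * np.pi * X)
np.random.seed(0)
n_samples = 30
x = np.sort(np.random.rand(n_samples))
y = true_func(x) + np.random.randn(n_samples) * 0.1

# Fit polynomials of increasing degree; training error shrinks as
# the model gains capacity (underfit -> good fit -> potential overfit)
for degree in (1, 4, 15):
    coeffs = np.polyfit(x, y, degree)
    train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print("degree {}: training RMSE = {:.4f}".format(degree, train_rmse))
```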
Polynomial with degree 1 is a straight line - Underfitting<br> Training RMSE:0.5063, Evaluation RMSE:0.4308, Baseline RMSE:0.689
fig = plt.figure(figsize=(12, 8))
plt.boxplot([df_actual['y'], df_d1_predicted['y_predicted']],
            labels=['actual', 'predicted with deg1'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target Attribute')
plt.grid(True)
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
<h4>Model with degree 4 features</h4>
df_d4_predicted = pd.read_csv(
    os.path.join(data_path, 'output_deg_4',
                 'bp-W4oBOhwClbH-fit_degree_4_example_test30.csv.gz'))
df_d4_predicted.columns = ["Row", "y_predicted"]

fig = plt.figure(figsize=(12, 8))
plt.scatter(x = df_samples['x0'], y = df_samples['y'], color = 'b', ...
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
Good Fit with degree 4 polynomial<br> Training RMSE:0.2563, Evaluation RMSE:0.1493, Baseline RMSE:0.689
fig = plt.figure(figsize=(12, 8))
plt.boxplot([df_actual['y'], df_d1_predicted['y_predicted'],
             df_d4_predicted['y_predicted']],
            labels=['actual', 'predicted with deg1', 'predicted with deg4'])
plt.title('Box Plot - Actual, Predicted')
plt.ylabel('Target Attribute')
plt.grid(True)
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
<h4>Model with degree 15 features</h4>
df_d15_predicted = pd.read_csv(
    os.path.join(data_path, 'output_deg_15',
                 'bp-rBWxcnPN3zu-fit_degree_15_example_test30.csv.gz'))
df_d15_predicted.columns = ["Row", "y_predicted"]

fig = plt.figure(figsize=(12, 8))
plt.scatter(x = df_samples['x0'], y = df_samples['y'], color = 'b', ...
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
Not quite overfitting as shown in the scikit-learn example; the fit is actually pretty good here.<br> Training RMSE:0.2984, Evaluation RMSE:0.1222, Baseline RMSE:0.689
fig = plt.figure(figsize=(12, 8))
plt.boxplot([df_actual['y'], df_d1_predicted['y_predicted'],
             df_d4_predicted['y_predicted'], df_d15_predicted['y_predicted']],
            labels = ['actual', 'predicted deg1', 'predicted deg4', 'predicted deg15'])
plt.title('Box Plot - Actual, Pr...
17-09-27-AWS Machine Learning A Complete Guide With Python/05 - Improving the model/03 - ml_underfit_overfit_normal_30samples.ipynb
arcyfelix/Courses
apache-2.0
Spectra creation The Spectra class is echidna's most fundamental class. It holds the core data structure and provides much of the core functionality required. Consequently, this guide will be centred around this class, how to create it and then some manipulations of the class. We'll begin with how to create an instan...
import echidna.core.spectra as spectra
getting_started.ipynb
mjmottram/echidna
mit
Now we need a config file to create the spectrum from. There is an example config file in echidna/config. If we look at the contents of this yaml file, we see it tells the Spectra class to create a data structure to hold two parameters:

* energy_mc, with lower limit 0, upper limit 10 and 1000 bins
* radial_mc, with lower ...
import echidna

config = spectra.SpectraConfig.load_from_file(
    echidna.__echidna_base__ + "/echidna/config/example.yml")
print config.get_pars()
getting_started.ipynb
mjmottram/echidna
mit
And there you have it, we've created a Spectra object. Filling the spectrum Ok, so we now have a spectrum, let's fill it with some events. We'll generate random energies from a Gaussian distribution and random positions from a Uniform distribution. Much of echidna is built using the numpy and SciPy packages and we will...
# Import numpy
import numpy

# Generate random energies from a Gaussian with mean (mu) and sigma (sigma)
mu = 2.5  # MeV
sigma = 0.15  # MeV

# Generate random radial position from a Uniform distribution
outer_radius = 5997  # Radius of SNO+ AV

# Detector efficiency
efficiency = 0.9  # 90%

for event in range(num_decays...
getting_started.ipynb
mjmottram/echidna
mit
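A self-contained sketch of that fill loop follows. The `spectrum.fill(...)` call is echidna's API and is left as a comment here; this version just collects the generated values so it can run on its own:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 2.5, 0.15        # MeV
outer_radius = 5997          # mm, SNO+ AV radius
efficiency = 0.9             # 90% detection efficiency
num_decays = 1000

energies, radii = [], []
for _ in range(num_decays):
    if rng.random() > efficiency:
        continue             # decay not detected
    energies.append(rng.normal(mu, sigma))
    radii.append(rng.uniform(0.0, outer_radius))
    # spectrum.fill(energy_mc=energies[-1], radial_mc=radii[-1])

print(len(energies))         # roughly efficiency * num_decays
```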
Note: you probably won't see any entries in the above. For large arrays, numpy only prints the first three and last three entries. Since our energy range is in the middle, all our events are in the ... part at the moment. But we will see entries printed out later when we apply some cuts. Plotting Another useful way to ...
import echidna.output.plot as plot
import echidna.output.plot_root as plot_root
getting_started.ipynb
mjmottram/echidna
mit
To plot the projection of the spectrum on the energy_mc axis:
fig1 = plot.plot_projection(spectrum, "energy_mc", fig_num=1,
                            show_plot=False)
plt.show()
getting_started.ipynb
mjmottram/echidna
mit
We can also project onto two dimensions and plot a surface:
fig_3 = plot.plot_surface(spectrum, "energy_mc", "radial_mc", fig_num=3,
                          show_plot=False)
plt.show()
getting_started.ipynb
mjmottram/echidna
mit
By default the "weighted smear" method considers all bins within a $\pm 5\sigma$ range. For the sake of speed, we will reduce this to 3 here. Also set the energy resolution - 0.05 for 5%.
smearer.set_num_sigma(3)
smearer.set_resolution(0.05)
getting_started.ipynb
mjmottram/echidna
mit
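A toy version of the weighted-smear idea, assuming sigma = resolution × E at each bin centre (echidna's actual resolution model may differ): each bin's content is redistributed over its neighbours with Gaussian weights, truncated at `num_sigma`.

```python
import numpy as np
from scipy.stats import norm

def weighted_smear(counts, bin_centres, resolution=0.05, num_sigma=3):
    """Spread each bin over neighbouring bins with Gaussian weights,
    truncated at num_sigma sigma on each side."""
    smeared = np.zeros_like(counts, dtype=float)
    width = bin_centres[1] - bin_centres[0]
    for i, (content, energy) in enumerate(zip(counts, bin_centres)):
        if content == 0:
            continue
        sigma = resolution * energy
        half = int(np.ceil(num_sigma * sigma / width))
        lo, hi = max(0, i - half), min(len(counts), i + half + 1)
        weights = norm.pdf(bin_centres[lo:hi], loc=energy, scale=sigma)
        smeared[lo:hi] += content * weights / weights.sum()
    return smeared

centres = np.linspace(0.005, 9.995, 1000)   # 1000 bins over 0-10 MeV
counts = np.zeros(1000)
counts[250] = 100.0                          # delta spike near 2.5 MeV
result = weighted_smear(counts, centres)
print(result.sum())                          # total counts are conserved
```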
This should hopefully only take a couple of seconds. The following code shows how to make a simple script, using matplotlib, to overlay the original and smeared spectra.
import numpy as np
import matplotlib.pyplot as plt


def overlay_spectra(original, smeared, dimension="energy_mc", fig_num=1):
    """ Overlay original and smeared spectra.

    Args:
      original (echidna.core.spectra.Spectra): Original spectrum.
      smeared (echidna.core.spectra.Spectra): Smeared spectrum.
      ...
getting_started.ipynb
mjmottram/echidna
mit
Other spectra manipulations We now have a nice smeared version of our original spectrum. To prepare the spectrum for a final analysis there are a few final manipulations we may wish to do. Region of Interest (ROI) There is a special version of the shrink method called shrink_to_roi that can be used for ROI cuts. It sav...
roi = (mu - 0.5*sigma, mu + 1.45*sigma)  # To get nice shape for rebinning
smeared_spectrum.shrink_to_roi(roi[0], roi[1], "energy_mc")
print smeared_spectrum.get_roi("energy_mc")
getting_started.ipynb
mjmottram/echidna
mit
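As a side note, the asymmetric ROI above keeps a fixed fraction of a Gaussian signal, independent of mu and sigma, since the limits are expressed in units of sigma. A quick check (the ~62% figure is derived here, not taken from the source):

```python
from scipy.stats import norm

mu, sigma = 2.5, 0.15
roi = (mu - 0.5 * sigma, mu + 1.45 * sigma)

# Fraction of a Gaussian between -0.5 sigma and +1.45 sigma
fraction = norm.cdf(1.45) - norm.cdf(-0.5)
print(roi)       # approximately (2.425, 2.7175)
print(fraction)  # ~0.62 of the signal falls inside the ROI
```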
We can now verify that Q is an orthogonal matrix. We first check that $\boldsymbol{Q}^{-1} = \boldsymbol{Q}^{T}$ by computing $\boldsymbol{Q}\boldsymbol{Q}^{-1}$
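Q itself is not constructed in this excerpt; one way to obtain an orthogonal matrix (though not necessarily a rotation, since the determinant may be -1) is the QR decomposition of any full-rank matrix:

```python
import numpy as np

np.random.seed(1)
A = np.random.rand(3, 3)
Q, _ = np.linalg.qr(A)  # Q has orthonormal columns by construction

print(np.allclose(Q @ Q.T, np.eye(3)))  # True
```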
print(Q.dot(Q.T))
Lecture08.ipynb
garth-wells/IA-maths-Jupyter
mit
We can see that $\boldsymbol{Q}\boldsymbol{Q}^{-1} = \boldsymbol{I}$ (within numerical precision). We can check that the columns of $\boldsymbol{Q}$ are orthonormal.
import itertools

# Build pairs (0, 0), (0, 1), . . . (0, n-1), (1, 1), (1, 2), . . .
pairs = itertools.combinations_with_replacement(range(len(Q)), 2)

# Compute dot product of column vectors q_{i} \cdot q_{j}
for p in pairs:
    col0, col1 = p[0], p[1]
    print("Dot product of column vectors {}, {}: {}".format(col0,...
Lecture08.ipynb
garth-wells/IA-maths-Jupyter
mit
The columns of $\boldsymbol{Q}$ are orthonormal, and $\boldsymbol{Q}^{T}$ is also a rotation matrix and has orthonormal columns. Therefore, the rows of $\boldsymbol{Q}$ are also orthonormal.
# Compute dot product of row vectors q_{i} \cdot q_{j}
pairs = itertools.combinations_with_replacement(range(len(Q)), 2)
for p in pairs:
    row0, row1 = p[0], p[1]
    print("Dot product of row vectors {}, {}: {}".format(row0, row1,
                                                         Q[row0, :].dot(Q[row1, :])))
Lecture08.ipynb
garth-wells/IA-maths-Jupyter
mit
Finally, we check the determinant of $\boldsymbol{Q}$:
print("Determinant of Q: {}".format(np.linalg.det(Q)))
Lecture08.ipynb
garth-wells/IA-maths-Jupyter
mit
Goal We want to generate observed arrival times, in a format similar to GTFS. The GTFS schedule will be useful in this process: the data was downloaded from Transit Feeds, its schema is in ttc_gtfs_create.sql, and it is processed into a more useful format in PostgreSQL with ttc_gtfs_process.sql. From gtfs, we can ...
sql = '''SELECT COUNT(1)
FROM gtfs.stop_times
INNER JOIN gtfs.trips USING (trip_id)
INNER JOIN gtfs.routes USING (route_id)
INNER JOIN gtfs.calendar USING (service_id)
WHERE monday AND route_type = 1 AND route_short_name != '3' '''
with con.cursor() as cur:
    cur.execute(sql)
    print(cur.fetchone()[0])
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
This is a ballpark figure we are aiming for in our filtering. Creating a materialized view of the raw poll data for a given day: Wednesday, June 14th 2017
sql = '''DROP MATERIALIZED VIEW IF EXISTS test_day CASCADE; CREATE MATERIALIZED VIEW test_day AS SELECT requestid, stationid, lineid, create_date, request_date, station_char, subwayline, system_message_type, timint, traindirection, trainid, train_message FROM requests_serverless INNER JOIN ntas_data_serv...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Cool. Definitely some work to do. Trying out a very basic filter, which has a Known Issue
sql = '''SELECT COUNT(DISTINCT (requestid, lineid, trainid, traindirection, stationid))
FROM test_day
WHERE train_message = 'AtStation' OR timint < 1'''
with con.cursor() as cur:
    cur.execute(sql)
    print(cur.fetchone()[0])
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
It's a start. If every line takes more than an hour to do a round-trip, we might be able to look for a distinct train-line-direction-station combination for each hour.
sql = '''WITH trips AS (SELECT route_short_name, (SELECT trip_id FROM gtfs.trips WHERE trips.route_id = routes.route_id LIMIT 1) FROM gtfs.routes WHERE route_type = 1 AND route_short_name != '3' ) SELECT route_short_name, MIN(arrival_time) AS "Start Time", MIN(stop_sequence) ||'-'||MAX(stop_sequence) AS "Stops", MAX(...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So any given train on line 1 or 2 shouldn't pass the same station going the same direction in an hour. So we could add the hour in a DISTINCT query. What's up with Line 4? It's short, but not two stations short... According to TransitFeeds, a GTFS host and exploration platform, Line 4 trains start the day at non-termin...
sql = ''' WITH unique_trains AS (SELECT lineid::TEXT, COUNT(DISTINCT trainid) AS "Number of trains in a day" FROM test_day GROUP BY lineid) , unique_trips AS(SELECT route_short_name AS lineid, COUNT(DISTINCT trip_id) AS "Number of scheduled trips" FROM gtfs.routes -- ON lineid::TEXT = ro...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
According to Wikipedia the number of trains for each line is:

| line | number of trains |
|------|-----------------:|
| 1    | 76               |
| 2    | 62               |
| 4    | 6                |

So the
sql = ''' SELECT trainid, lineid, traindirection, stationid, station_char, create_date, request_date, timint, train_message FROM test_day INNER JOIN (SELECT trainid FROM test_day WHERE lineid = 1 AND create_date::TIME > '07:00'::TIME LIMIT 1) one_train USING (trainid) WHERE (timint < 1 OR train_...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Using the filtered schema instead
sql = '''CREATE MATERIALIZED VIEW filtered.test_day AS SELECT requestid, stationid, lineid, create_date, request_date, station_char, subwayline, system_message_type, timint, traindirection, trainid, train_message FROM filtered.requests INNER JOIN filtered.ntas_data USING (requestid) WHERE request_date >= ...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Ah. We can see train 136 skipped station 14. Fortunately, we have unfiltered data from the same day
sql = ''' SELECT trainid, lineid, traindirection, stationid, station_char, create_date, create_date + timint * interval '1 minute' AS expected_arrival, timint, train_message FROM test_day WHERE trainid = 136 AND (timint < 1 OR train_message != 'Arriving') AND lineid = 1 ORDER BY create_date + ti...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So we have an expected arrival time at Osgoode station from the unfiltered dataset, meaning that it can have some use after all! However, we can see at the end that the train is super delayed
train_136[train_136['create_date'] > datetime.datetime(2017, 6, 15, 1,30)]
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
So this doesn't seem like a particularly good example, since the train is just ultimately stuck at Sheppard West station until the end of the (scraping) day. The solution in this case would probably be to just filter out any of these observations where `train_message == 'Delayed' and timint > 2`. Let's try to see if w...
train_136[train_136['train_message'] == 'Delayed']
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Lucky for us, train 136 is delayed a second time in our day, around 12:55.
train_136[(train_136['create_date'] > datetime.datetime(2017, 6, 14, 12, 50)) & (train_136['create_date'] < datetime.datetime(2017, 6, 14, 13, 30))]
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
It seems like we could actually be fine if we just filtered out observations with `Delayed` and `timint >= 1` (i.e. keep delayed records only when the train is about to arrive). The delayed records could be useful to store in a separate table for their own analysis, but they don't appear to really fill in the gaps here
train_136[(train_136['create_date'] > datetime.datetime(2017, 6, 14, 12, 50)) & (train_136['create_date'] < datetime.datetime(2017, 6, 14, 13, 30)) & ((train_136['train_message'] != 'Delayed') | (train_136['timint'] < 1.0 ))]
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Coincidentally, this period of time also features a short-turn, at 13:15, and we want to identify distinct trips (where trains turn around, either at the end of the usual run, or early). This should be relatively easy to implement with the traindirection column
split_trips = '''CREATE SEQUENCE IF NOT EXISTS trip_ids; CREATE MATERIALIZED VIEW test_day_w_trips AS SELECT trainid, lineid, traindirection, stationid, station_char, create_date, create_date + timint * interval '1 minute' AS expected_arrival, timint, train_message, CASE ...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
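The direction-change trip splitting can be sketched in pandas too: start a new trip_id whenever a train's direction flips. The toy data below is hypothetical, but mirrors the short-turn pattern discussed above.

```python
import pandas as pd

# Hypothetical observations for one train, including a short-turn
df = pd.DataFrame({
    'trainid': [136] * 6,
    'traindirection': ['South', 'South', 'South',
                       'North', 'North', 'South'],
    'create_date': pd.date_range('2017-06-14 12:50',
                                 periods=6, freq='5min'),
})

# A new trip starts whenever the direction differs from the previous row
new_trip = df.groupby('trainid')['traindirection'].transform(
    lambda s: s != s.shift())
df['trip_id'] = new_trip.cumsum()
print(df['trip_id'].tolist())  # [1, 1, 1, 2, 2, 3]
```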
A final step is to group together multiple observations at a same station, during a same trip, to get an approximation of arrival and "departure" time.
final_step = ''' DROP MATERIALIZED VIEW IF EXISTS test_day_final; CREATE MATERIALIZED VIEW test_day_final AS SELECT trainid, lineid, traindirection, stationid, station_char, trip_id, MIN(expected_arrival) AS estimated_arrival, MAX(expected_arrival) AS estimated_departure, CASE (ARRAY_AGG(train_message ORDER BY expec...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
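The MIN/MAX aggregation has a direct pandas equivalent, sketched here on a tiny hypothetical frame: collapse repeated observations of one train at one station during one trip into a single row with estimated arrival and departure times.

```python
import pandas as pd

# Three polls of the same train at the same station during one trip
obs = pd.DataFrame({
    'trainid': [136, 136, 136],
    'stationid': [10, 10, 10],
    'trip_id': [1, 1, 1],
    'expected_arrival': pd.to_datetime(['2017-06-14 12:50',
                                        '2017-06-14 12:51',
                                        '2017-06-14 12:53']),
})

# First observation approximates arrival, last approximates departure
arrivals = (obs.groupby(['trainid', 'stationid', 'trip_id'])
               ['expected_arrival']
               .agg(estimated_arrival='min', estimated_departure='max')
               .reset_index())
print(arrivals)
```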
Woo! Now to test how well this process did
cnt = '''SELECT COUNT(*) FROM test_day_final'''
with con.cursor() as cur:
    cur.execute(cnt)
    print('The number of station stops made is', cur.fetchone()[0])
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0
Huh. 5k higher than the scheduled number of station stops
sql = ''' WITH observed_trips AS (SELECT lineid::TEXT, COUNT(DISTINCT trip_id) AS "Number of observed trips" FROM test_day_final GROUP BY lineid) , unique_trips AS(SELECT route_short_name AS lineid, COUNT(DISTINCT trip_id) AS "Number of scheduled trips" FROM gtfs.routes -- ON lineid::TEX...
doc/filtering_observed_arrivals.ipynb
CivicTechTO/ttc_subway_times
gpl-3.0