We visualize our graph:
show_graph(mlp_graph.as_graph_def())
Source: test/vae-tf-bootcamp/VAE_mshvartsman.ipynb (repo: sheqi/TVpgGLM, license: mit)
Next we create a session, initialize our variables, and train the network:
```python
sess = tf.Session(graph=mlp_graph)
sess.run(init)
train_steps = 2500
acc = np.zeros(train_steps)
# create this op outside of the loop so we don't create it 5000 times
for i in range(train_steps):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    acc[i] = mlp_acc.eval(session=sess, feed_dict={x: batch_xs, y...
```
VAEs! $$ \DeclareMathOperator{\Tr}{Tr} \newcommand{\trp}{{^\top}} % transpose \newcommand{\trace}{\text{Trace}} % trace \newcommand{\inv}{^{-1}} \newcommand{\mb}{\mathbf{b}} \newcommand{\M}{\mathbf{M}} \newcommand{\G}{\mathbf{G}} \newcommand{\A}{\mathbf{A}} \newcommand{\R}{\mathbf{R}} \renewcommand{\S}{\mathbf{S}} \new...
```python
from tensorflow.examples.tutorials.mnist import input_data

encoder_depth = 2
decoder_depth = 2
encoder_units = 500
decoder_units = 500
latent_size = 10
global_dtype = tf.float32
minibatch_size = 100
input_size = mnist.train.images.shape[1]
train_steps = mnist.train.num_examples // minibatch_size
encoder_nonlinearity =...
```
Construct the graph:
```python
vae_graph = tf.Graph()
with vae_graph.as_default():
    with tf.name_scope("Encoder_Q"):
        x = tf.placeholder(shape=[None, input_size], dtype=global_dtype, name='x')
        q_network, q_mu_vars = _mlp(x, n_layers=encoder_depth, units_per_layer=encoder_units,
                                    input_size=input_size, out_size=encoder_units, nonl...
```
Now we run and visualize:
```python
sess = tf.Session(graph=vae_graph)
sess.run(init)
elbo_log = np.zeros(n_epochs * train_steps)
for i in range(n_epochs):
    for j in range(train_steps):
        batch_xs, batch_ys = mnist.train.next_batch(minibatch_size)
        sess.run(minimize_op, feed_dict={x: batch_xs})
        elbo_log[i*train_steps + j] = elbo...
```
A Simple Example Suppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some p...
```python
def scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(weight, 0.75))
```
Source: tutorial/source/intro_part_ii.ipynb (repo: uber/pyro, license: apache-2.0)
Conditioning The real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once an...
conditioned_scale = pyro.condition(scale, data={"measurement": torch.tensor(9.5)})
Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's lambda or def:
```python
def deferred_conditioned_scale(measurement, guess):
    return pyro.condition(scale, data={"measurement": measurement})(guess)
```
In some cases it might be more convenient to pass observations directly to individual pyro.sample statements instead of using pyro.condition. The optional obs keyword argument is reserved by pyro.sample for that purpose:
```python
def scale_obs(guess):  # equivalent to conditioned_scale above
    weight = pyro.sample("weight", dist.Normal(guess, 1.))
    # here we condition on measurement == 9.5
    return pyro.sample("measurement", dist.Normal(weight, 0.75), obs=torch.tensor(9.5))
```
Finally, in addition to pyro.condition for incorporating observations, Pyro also contains pyro.do, an implementation of Pearl's do-operator used for causal inference with an identical interface to pyro.condition. condition and do can be mixed and composed freely, making Pyro a powerful tool for model-based causal infe...
```python
def perfect_guide(guess):
    loc = (0.75**2 * guess + 9.5) / (1 + 0.75**2)  # 9.14
    scale = np.sqrt(0.75**2 / (1 + 0.75**2))       # 0.6
    return pyro.sample("weight", dist.Normal(loc, scale))
```
Parametrized Stochastic Functions and Variational Inference Although we could write out the exact posterior distribution for scale, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions f...
```python
def intractable_scale(guess):
    weight = pyro.sample("weight", dist.Normal(guess, 1.0))
    return pyro.sample("measurement", dist.Normal(some_nonlinear_function(weight), 0.75))
```
What we can do instead is use the top-level function pyro.param to specify a family of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called variational inference. pyro.para...
```python
def scale_parametrized_guide(guess):
    a = pyro.param("a", torch.tensor(guess))
    b = pyro.param("b", torch.tensor(1.))
    return pyro.sample("weight", dist.Normal(a, torch.abs(b)))
```
As an aside, note that in scale_parametrized_guide, we had to apply torch.abs to parameter b because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a constr...
```python
from torch.distributions import constraints

def scale_parametrized_guide_constrained(guess):
    a = pyro.param("a", torch.tensor(guess))
    b = pyro.param("b", torch.tensor(1.), constraint=constraints.positive)
    return pyro.sample("weight", dist.Normal(a, b))  # no more torch.abs
```
Pyro is built to enable stochastic variational inference, a powerful and widely applicable class of variational inference algorithms with three key characteristics:

- Parameters are always real-valued tensors
- We compute Monte Carlo estimates of a loss function from samples of execution histories of the model and guide ...
```python
guess = 8.5
pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
                     guide=scale_parametrized_guide,
                     optim=pyro.optim.Adam({"lr": 0.003}),
                     loss=pyro.infer.Trace_ELBO())
losses, a, b = [], [], []
num_steps = 2500
for t in range(num_steps):
    ...
```
Configure Object Storage connectivity. Customize this cell with your Object Storage connection information.
```python
# @hidden_cell
# Enter your ...
OS_AUTH_URL = 'https://identity.open.softlayer.com'
OS_USERID = '...'
OS_PASSWORD = '...'
OS_PROJECTID = '...'
OS_REGION = '...'
OS_SOURCE_CONTAINER = '...'
OS_FILENAME = '....csv'
```
Source: notebook/data-load-samples/Load from Object Storage - Python.ipynb (repo: ibm-cds-labs/pixiedust, license: apache-2.0)
Load CSV data. Load the CSV file from Object Storage into a Spark DataFrame.
```python
# no changes are required to this cell
from ingest import Connectors
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)
objectstoreloadOptions = {
    Connectors.BluemixObjectStorage.AUTH_URL: OS_AUTH_URL,
    Connectors.BluemixObjectStorage.USERID: OS_USERID,
    Conn...
```
Explore the loaded data using PixieDust
display(os_data)
In dropna(), the subset argument takes the label used to select the set to discard, axis=0 discards rows (axis=1, columns), and inplace=True makes the changes apply directly to the DataFrame.

```python
datos1.dropna(subset=['A'], axis=0, inplace=True)
datos1
```
Source: EVI - 2018/EVI 04/Modulo3.ipynb (repo: miky-kr5/Presentations, license: cc0-1.0)
The replace() function lets us replace missing values in the DataFrame with new values. In our example, we will replace them with the mean, computed with the mean() function.

```python
datos1 = pd.DataFrame([24, np.nan, np.nan, 23, np.nan, 12, np.nan, 17, np.nan, 2, 5], columns=list('A'))
media = datos1['A'].mean()
media
```
Now we use the replace() function:

```python
datos1['A'].replace(np.nan, media, inplace=True)
datos1
```
Transforming the data. Merging and combining DataFrames.

```python
import pandas as pd

compra_1 = pd.Series({'Nombre': 'Adelis', 'Artículo comprado': 'Libro', 'Costo': 1200})
compra_2 = pd.Series({'Nombre': 'Miguel', 'Artículo comprado': 'Raspberry pi 3', 'Costo': 15000})
compra_3 = pd.Ser...
```
We can add elements to the DataFrame as follows:

```python
df['Fecha'] = ['Diciembre 1', 'Febrero 4', 'Mediados de Julio']
df
df['Entregado'] = 'Sí'
df
df['Retroalimentación'] = ['Positiva', None, 'Negativa']
df
```
Pandas reset_index() is a method for resetting the index of a DataFrame. It sets as the index a list of integers running from 0 to the length of the data.

```python
adf = df.reset_index()
adf
```
We may have a couple of data tables that we want to join or combine into a single DataFrame.

```python
empleados_df = pd.DataFrame([{'Nombre': 'Adriana', 'Función': 'Gerente de ventas'},
                             {'Nombre': 'Andrés', 'Función': 'Vendedor 1'},
                             {'Nombre': 'Cristóbal', 'Función': 'Gerente de departamento'}])
empleados_df = empleados_df.set_index('Nombre')
grado_df = pd.DataFrame([{'...
```
pd.merge() connects rows in DataFrames based on one or more keys. For those familiar with SQL, this function performs database-style joins on columns or indices.

```python
df_info_empleados = pd.merge(empleados_df, grado_df, how='outer', left_index=True, right_index=True)
df_info_empleados
```
Other examples of how to vary the how parameter can be found in the book Python for Data Analysis by McKinney. Now suppose we have a new DataFrame that matches the previous one in its number of rows. For example:

```python
fecha_ingreso_df = pd.DataFrame([{'Nombre': 'Adriana', 'Fecha de Ingreso': '20/06/2013'},
                                 {'Nombre': 'Andrés', 'Fecha de Ingreso': '10/01/2018'},
                                 {'Nombre': 'Cristóbal', 'Fecha de Ingreso': '20/03/2011'}])
fecha_ingreso_df = fecha_ingreso_df.set_index('Nombre')
art_vend...
```
pd.concat() glues or stacks objects along an axis.

```python
new_data = pd.concat([df_info_empleados, fecha_ingreso_df, art_vendidos_df], axis=1)
new_data
```
There is much more to learn! For example: what happens if axis=0? Answer: Pandas will stack all the values along with their indices, as shown next:

```python
pd.concat([df_info_empleados, fecha_ingreso_df, art_vendidos_df], axis=0)
```
Another transformation of interest could be to perform some computation over an entire column. In our example, suppose we want to express the articles-sold ratio as a percentage and change that column's label.

```python
new_data
new_data['Art.Vendidos/Total Art.'] = new_data['Art.Vendidos/Total Art.'] * 100
new_data.rename(columns={'Art.Vendidos/Total Art.': '% Art. Vendidos'}, inplace=True)
new_data
```
Normalizing data. Let's take a DataFrame representing the dimensions of boxes to be sold at a warehouse.

```python
dimension1 = pd.DataFrame([168.7, 170.0, 150.3, 168.7, 145.2, 200.0, 175.4, 163.0, 230.0, 129.6, 178.2], columns=list('L'))
dimension1.rename(columns={'L': 'Largo'}, inplace=True)
dimension2 = pd.DataFrame([68.3, 60.2, 65.0, 68.3, 45.9, 70.0, 75.1, 63.5, 65.2, 68.7, 78], columns=list('A'))
dimension2.rename(c...
```
"Simple feature scaling" method: each value is divided by the maximum value of that feature, $x_{new} = \frac{x_{old}}{x_{max}}$

```python
dimensiones['Largo'] = dimensiones['Largo'] / dimensiones['Largo'].max()
dimensiones['Ancho'] = dimensiones['Ancho'] / dimensiones['Ancho'].max()
dimensiones['Alto'] = dimensiones['Alto'] / dimensiones['Alto'].max()
dimensiones
```
Min-max method: take each value $x_{old}$, subtract the minimum value of that feature, and divide by the range of that feature, that is, $x_{new} = \frac{x_{old} - x_{min}}{x_{max} - x_{min}}$

```python
dimensiones['Largo'] = (dimensiones['Largo'] - dimensiones['Largo'].min()) / (dimensiones['Largo'].max() - dimensiones['Largo'].min())
dimensiones['Ancho'] = (dimensiones['Ancho'] - dimensiones['Ancho'].min()) / (dimensiones['Ancho'].max() - dimensiones['Ancho'].min())
dimensiones['Alto'] = (dimensiones['Alto'] - dimensiones['Al...
```
Standard score method: subtract the mean of the feature and divide by its standard deviation, $x_{new} = \frac{x_{old} - \mu}{\sigma}$

```python
dimensiones['Largo'] = (dimensiones['Largo'] - dimensiones['Largo'].mean()) / dimensiones['Largo'].std()
dimensiones['Ancho'] = (dimensiones['Ancho'] - dimensiones['Ancho'].mean()) / dimensiones['Ancho'].std()
dimensiones['Alto'] = (dimensiones['Alto'] - dimensiones['Alto'].mean()) / dimensiones['Alto'].std()
dimensiones
```
Descriptive statistics. Summary statistics table.

```python
import numpy as np

df = pd.read_csv('Automobile_data.csv')
df.head()
df.describe()
```
Box plots. Let's generate some random data and draw a box plot.

```python
np.random.seed(1500)                        # random number generation
dfb = pd.DataFrame(np.random.randn(10, 5))  # DataFrame of dimensions 10x5
dfb.boxplot(return_type='axes')             # box plot of each category
dfb.head()
```
Let's take the data from the file Automobile_data.csv to create a box plot of 3 variables that define the cars' dimensions.

```python
x = df['length']   # length
y = df['width']    # width
z = df['height']   # height
dfbp = pd.DataFrame([x, y, z]).T                # DataFrame with the cars' dimensions
dfbp.boxplot(fontsize=13, return_type='axes')   # box plot of the 3 variables
# Exercise! Normalize these data and make the new...
```
Bar charts (histograms). Let's generate some random data and draw a histogram.

```python
np.random.seed(14000)                      # random number generation
pdhist = pd.Series(np.random.randn(1000))  # series of random numbers
pdhist.hist(normed=True)                   # draws the bars
pdhist.plot(fontsize=13, kind='kde')       # kde = Kernel Density Estimation plot; try it with 'hist' too
```
Let's use the Automobile_data.csv data to draw a histogram of the price variable.

```python
import matplotlib.pyplot as plt

p = df['price']                      # select the price variable
pdf = pd.Series(p)                   # convert the selection into a Pandas Series
pdf.hist(normed=True)                # draws the bars
pdf.plot(fontsize=11, kind='hist')   # histogram
plt.xlabel('Precio', fontsize=13)
plt.ylabel('Frecuencia', fontsize=13)
```
This histogram tells us, among other things, that a large number of cars are priced under 10000... What other things? ;) Scatter plot. This scatter plot shows the relationship between the engine-size and price variables.

```python
import matplotlib.pyplot as plt

x = df['engine-size']   # predictor variable
y = df['price']         # target variable we want to predict
plt.scatter(x, y)       # scatter plot in Matplotlib
plt.title('Gráfico de dispersión de Tamaño del motor Vs. Precio', fontsize=13)
plt.xlabel('Tamaño del motor', fon...
```
Correlation between variables. Let's take the two variables from the previous example...

```python
import matplotlib.pyplot as plt
from scipy import stats

x = df['engine-size']   # predictor variable
y = df['price']         # target variable we want to predict
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
line = slope * x + intercept
plt.plot(x, y, 'o', x, line)
ax = plt.gca()
fig = plt.gcf()
plt.xlabel('Ta...
```
The previous scatter plot reveals a positive linear relationship between engine size and car price: as the engine size increases, the price increases. This next scatter plot reveals a negative linear relationship between the miles the car covers per unit of fuel ...

```python
import matplotlib.pyplot as plt
from scipy import stats

x = df['highway-mpg']   # predictor variable
y = df['price']         # target variable we want to predict
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
line = slope * x + intercept
plt.plot(x, y, 'o', x, line)
ax = plt.gca()
fig = plt.gcf()
plt.xlabel('Mi...
```
Now let's compute the correlation coefficient and the p-value between the 'horsepower' and 'price' variables using stats.pearsonr():

```python
from scipy import stats

stats.pearsonr(df['horsepower'], df['price'])
```
Estimate the intercept and slope.

```python
import thinkstats2

inter, slope = thinkstats2.LeastSquares(heights, weights)
inter, slope
```
Source: code/chap10soln-kor.ipynb (repo: statkclee/ThinkStats2, license: gpl-3.0)
Show a scatter plot of the data and the fitted line.

```python
import thinkplot

thinkplot.Scatter(heights, weights, alpha=0.01)
fxs, fys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Plot(fxs, fys)
thinkplot.Config(xlabel='height (cm)', ylabel='log10 weight (kg)', legend=False)
```
Make the same plot, but apply the inverse transform to show weight on a linear (not log) scale.

```python
thinkplot.Scatter(heights, 10**weights, alpha=0.01)
fxs, fys = thinkstats2.FitLine(heights, inter, slope)
thinkplot.Plot(fxs, 10**fys)
thinkplot.Config(xlabel='height (cm)', ylabel='weight (kg)', legend=False)
```
Plot percentiles of the residuals. The lines are flat across most of the range, indicating that the relationship is linear. The lines are nearly parallel, indicating that the variance of the residuals is the same across the range.

```python
res = thinkstats2.Residuals(heights, weights, inter, slope)
df['residual'] = res

bins = np.arange(130, 210, 5)
indices = np.digitize(df.htm3, bins)
groups = df.groupby(indices)

means = [group.htm3.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.residual) for i, group in groups][1:-1]

thinkplot.Pre...
```
Compute the correlation.

```python
rho = thinkstats2.Corr(heights, weights)
rho
```
Compute the coefficient of determination.

```python
r2 = thinkstats2.CoefDetermination(weights, res)
r2
```
Confirm that $R^2 = \rho^2$.

```python
rho**2 - r2
```
Compute Std(ys), which is the RMSE of predictions that don't use height.

```python
std_ys = thinkstats2.Std(weights)
std_ys
```
Compute Std(res), the RMSE of predictions that use height.

```python
std_res = thinkstats2.Std(res)
std_res
```
How much does height information reduce RMSE? About 15%.

```python
1 - std_res / std_ys
```
Use resampling to compute sampling distributions for the intercept and slope.

```python
t = []
for _ in range(100):
    sample = thinkstats2.ResampleRows(df)
    estimates = thinkstats2.LeastSquares(sample.htm3, np.log10(sample.wtkg2))
    t.append(estimates)

inters, slopes = zip(*t)
```
Plot the sampling distribution of the slope.

```python
cdf = thinkstats2.Cdf(slopes)
thinkplot.Cdf(cdf)
thinkplot.Show(legend=False)
```
Compute the p-value of the slope.

```python
pvalue = cdf[0]
pvalue
```
Compute the 90% confidence interval for the slope.

```python
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
```
Compute the mean of the sampling distribution.

```python
mean = thinkstats2.Mean(slopes)
mean
```
Compute the standard deviation of the sampling distribution; this is the standard error.

```python
stderr = thinkstats2.Std(slopes)
stderr
```
Resample rows using sampling weights.

```python
def ResampleRowsWeighted(df, column='finalwt'):
    """Resamples a DataFrame using probabilities proportional to given column.

    df: DataFrame
    column: string column name to use as weights

    returns: DataFrame
    """
    weights = df[column]
    cdf = thinkstats2.Cdf(dict(weights))
    indices = cdf.Sample(le...
```
Summarize a sampling distribution.

```python
def Summarize(estimates):
    mean = thinkstats2.Mean(estimates)
    stderr = thinkstats2.Std(estimates)
    cdf = thinkstats2.Cdf(estimates)
    ci = cdf.Percentile(5), cdf.Percentile(95)
    print('mean', mean)
    print('stderr', stderr)
    print('ci', ci)
```
Resample rows without weights and summarize the results.

```python
estimates_unweighted = [thinkstats2.ResampleRows(df).htm3.mean() for _ in range(100)]
Summarize(estimates_unweighted)
```
Resample rows with weights. When the sampling weights are taken into account, the estimated mean height is almost 2 cm taller, and the difference is much bigger than the sampling error.

```python
estimates_weighted = [ResampleRowsWeighted(df).htm3.mean() for _ in range(100)]
Summarize(estimates_weighted)
```
Data. The data is collected and exported in JSON format; with a quick-and-dirty Python script, 'convert.py', I converted it into CSV format. We start by reading the data:

```python
# Read local CSV
df = pd.read_csv('usage.csv')

# Describe the dataset
df.describe()
```
Source: phone-usage.ipynb (repo: jplattel/notebooks, license: mit)
Interesting... this already tells me something useful: I've been using my phone for 70 minutes on average each day. That's a lot of time spent on a mobile device... On average I pick it up 41 times a day, meaning the average duration of a session is about 1.7 minutes. So how's the distributio...
df.hist()
Pickups Now for some more interesting things, let's look at the pickups:
```python
# Read local CSV file
df = pd.read_csv('pickups.csv')

# Describe the dataset
df.describe()
```
Hmm, it looks like there are some pickups with a rather large length in seconds; let's remove them. Also note: I picked up my phone over 11530 times. Whoa, that's a lot of wear!

```python
# Filter out values over 10 minutes (600 seconds)
df = df[df['seconds'] < 600]

# Show histogram of usage
df.hist(bins=100)
```
Most of my phone usage is very short, with the exception of exactly 2 minutes (a rare peak in the histogram around 120 seconds)... Any ideas what caused it? D'oh! It's the time after which my phone turns off if I don't use it. How about dates and times? Well, let's have a look, shall we?

```python
# Create an additional column to save the hour
df['hour'] = pd.to_datetime(df['date']).map(lambda x: x.hour)

# Plot histogram of hours
df.hist(['hour'], bins=24)
```
As it turns out, phone usage is highest during lunch break. There's also a dent in usage around 19:00, meaning I don't use my phone that often during and after dinner. How about weekdays, could they differ? On we go again:

```python
# Create an additional column to save the weekday
df['weekday'] = pd.to_datetime(df['date']).map(lambda x: x.isoweekday())

# Then plot
df.hist(['weekday'], bins=7)
```
We will be loading the corpus and dictionary from disk. Here our corpus is in the Blei corpus format, but it can be any iterable corpus. The data set consists of news reports over 3 months downloaded from here and cleaned. TODO: better, more interesting data-set. What is a time-slice? A very important input for DTM ...

```python
# loading our corpus and dictionary
dictionary = Dictionary.load('Corpus/news_dictionary')
corpus = bleicorpus.BleiCorpus('Corpus/news_corpus')

# it's very important that your corpus is saved in order of your time-slices!
time_slice = [438, 430, 456]
```
Source: docs/notebooks/ldaseqmodel.ipynb (repo: pombredanne/gensim, license: lgpl-2.1)
For DTM to work, it first needs the sufficient statistics from a trained LDA model on the same dataset. By default LdaSeqModel trains its own model and passes those values on, but it can also accept a pre-trained gensim LDA model, or a numpy matrix containing the sufficient statistics. We will be training our model in default m...

```python
ldaseq = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary, time_slice=time_slice,
                                 num_topics=5, passes=20)
```
Now that our model is trained, let's see what our results look like. Results Much like LDA, the points of interest would be in what the topics are and how the documents are made up of these topics. In DTM we have the added interest of seeing how these topics evolve over time. Let's go through some of the functions to p...
```python
# to print all topics, use `print_topics`.
# the input parameter to `print_topics` is only a time-slice option.
# By passing `0` we are seeing the topics in the 1st time-slice.
ldaseq.print_topics(time=0)

# to fix a topic and see it evolve, use `print_topic_times`
ldaseq.print_topic_times(topic=1)  # evolution of 1st t...
```
If you look at the lower frequencies, the word broadband is creeping its way up into prominence in topic number 1. We've had our fun looking at topics, now let us see how to analyse documents. Doc-Topics: the function doc_topics checks the topic proportions of documents already trained on. It accepts the document number...

```python
# to check Document - Topic proportions, use `doc_topics`
words = [dictionary[word_id] for word_id, count in ldaseq.corpus.corpus[558]]
print(words)
```
It's pretty clear that it's a news article about football. What topics will it likely be comprised of?
```python
doc_1 = ldaseq.doc_topics(558)  # check document 558's topic distribution
print(doc_1)
```
It's largely made up of topics 3 and 5, and if we go back and inspect our topics, it's quite a good match. If we wish to analyse a document not in our training set, we can simply pass the doc to the model, similar to the __getitem__ function for LdaModel. Let's let our document be a hypothetical news article about the...

```python
doc_2 = ['economy', 'bank', 'mobile', 'phone', 'markets', 'buy', 'football', 'united', 'giggs']
doc_2 = dictionary.doc2bow(doc_2)
doc_2 = ldaseq[doc_2]
print(doc_2)
```
Pretty neat! Topics 2 and 3 are about technology, the market and football, so this works well for us. Distances between documents: one of the handier uses of DTM topic modelling is that we can compare documents across different time-frames and see how similar they are topic-wise. When words may not necessarily overl...

```python
from gensim.matutils import hellinger

hellinger(doc_1, doc_2)
```
The topic distributions are quite similar, so we get a high value. For more information on how to use the gensim distance metrics, check out this notebook. Performance The code currently runs between 5 to 7 times slower than the original C++ DTM code. The bottleneck is in the scipy optimize.fmin_cg method for updating ...
```python
from gensim.models.wrappers.dtmmodel import DtmModel

dtm_path = "/Users/bhargavvader/Downloads/dtm_release/dtm/main"
dtm_model = DtmModel(dtm_path, corpus, time_slice, num_topics=5, id2word=dictionary, initialize_lda=True)
dtm_model.save('dtm_news')
ldaseq.save('ldaseq_news')

# if we've saved before simply load the ...
```
Loading the German Credit scoring dataset, transformed to use comma-separated values, and printing the numpy array dimensions.

```python
[X, y] = load_dataset('new-german-data.numeric', delim=',')
print X.shape
print y.shape
```
Source: src/credit_notebook.ipynb (repo: javierfdr/credit-scoring-analysis, license: mit)
To take a first glance at the distribution of the data, the first two principal components are computed from the original data using PCA and plotted in 2D. There are certain regions where data points of a specific class cluster together, but there is no clear separation discernible throug...

```python
pca = PCA(n_components=2)
pca.fit(X, y)
print pca.explained_variance_
plotPCA(X, y)
```
To understand the relevance of the dataset's features, wrapper methods will be used to see the influence of each feature on the final output. First the dataset is split to obtain a training and a test set, and then a simple linear SVM classifier is built to get a first glance at the in...

```python
from sklearn import cross_validation

X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)
clf = svm.SVC(kernel='linear', C=10, probability=True)

def feature_analysis_prec(X, y, clf):
    scores = []
    names = range(0, X.shape[1])
    for i in range(X.shape[1]):
        ...
```
It can be seen that most of the features have some level of relevance with respect to the precision of the classifier, which serves as an indication of the false positive rate, critical for credit scoring. To take another look at the features' ability to represent the outcome, let's take the 4 most relevant f...

```python
X_r = X[:, [ss[0][1], ss[1][1], ss[2][1], ss[3][1]]]
pca = PCA(n_components=2)
pca.fit(X_r, y)
print pca.explained_variance_
plotPCA(X_r, y)
```
As expected, not much better representational ability is obtained from the principal components of the 4 most relevant features, since each feature adds a certain degree of value in predicting the final outcome. Given the complexity of the feature space, more sophisticated models must be built in order ...

```python
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3, random_state=0)
param_grid = [{'C': [0.01, 0.1, 1, 10, 100, 1000], 'kernel': ['linear']}]
clf = svm.SVC(kernel='linear', C=10)
clf = GridSearchCV(clf, param_grid, cv=5, scori...
```
Plotting the confusion matrix on the results we obtain the following:
```python
from __future__ import division
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(cm, ['Approve', 'Deny'])

# brute confusion matrix values
print cm
TP = cm[0, 0]
FN = cm[0, 1]
FP = cm[1, 0]
TN = cm[1, 1]

# percentual confusion matrix values
total = cm.sum()
print cm / ...
```
RBF Kernel SVM. Given the complexity of the feature space, evident in the initial plots, it is expected that a more sophisticated transformation of the feature space, such as the nonlinear transformation performed by an RBF-kernel SVM, would allow a better fit. We will reuse the same training/test split performed bef...

```python
from sklearn.metrics import classification_report
from sklearn.lda import LDA

param_grid = [{'C': [0.01, 0.1, 1, 10, 100, 1000], 'kernel': ['rbf'],
               'gamma': [0.01, 0.1, 1, 10, 100, 1000]}]
clf = svm.SVC(kernel='rbf', C=10, gamma=1)
clf = GridSearchCV(clf, param_grid, cv=5, scoring='f1')
clf.fit(X_train, y_train)
...
```
src/credit_notebook.ipynb
javierfdr/credit-scoring-analysis
mit
Plotting the confusion matrix of the results, we obtain the following:
from __future__ import division from sklearn.metrics import confusion_matrix cm = confusion_matrix(y_test,y_pred) plot_confusion_matrix(cm,['Approve','Deny']) # brute confusion matrix values print cm TP = cm[0,0] FN = cm[0,1] FP = cm[1,0] TN = cm[1,1] # percentual confusion matrix values total = cm.sum() print cm / ...
src/credit_notebook.ipynb
javierfdr/credit-scoring-analysis
mit
Random Decision Forests This bagging algorithm produces an ensemble of decision trees, capable of fitting very complex feature spaces. RDFs have proven to be consistently accurate across a wide range of problems. Since sufficiently deep trees within the ensemble are capable of fitting any particular space, a high risk of over...
from sklearn.metrics import classification_report from scipy.stats import randint as sp_randint param_grid = [{"max_depth": [3, 5,10,15,20,30,40,70], "max_features": [1, 3, 10,15,20] } ] clf = RandomForestClassifier(max_features = 'auto', max_depth=10) clf = GridSearchCV(clf,...
src/credit_notebook.ipynb
javierfdr/credit-scoring-analysis
mit
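To see why averaging many overfitting-prone learners can still generalize, here is a toy, self-contained illustration of the majority-vote principle behind bagging. The 70% per-tree accuracy and the tree count are made-up numbers for the illustration, not values taken from the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration of bagging: each "tree" is a noisy classifier that is right
# about 70% of the time, independently; majority voting over many such trees
# yields a far more accurate ensemble prediction.
n_trees, n_samples = 51, 1000
y_true = rng.integers(0, 2, size=n_samples)

# For each tree and sample, decide independently whether the tree is correct.
correct = rng.random((n_trees, n_samples)) < 0.7
preds = np.where(correct, y_true, 1 - y_true)

single_acc = (preds[0] == y_true).mean()
ensemble = (preds.mean(axis=0) > 0.5).astype(int)  # majority vote over 51 trees
ensemble_acc = (ensemble == y_true).mean()
print(single_acc, ensemble_acc)
```

Real trees in a forest are correlated, so the gain is smaller than in this independent-voter idealization, which is why `max_depth` and `max_features` are still tuned above.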
Results Summary The following graph compares the results of the best models from the three presented approaches. The comparison focuses directly on the ability to discern whether a credit should be granted or rejected, and on the likelihood of a wrongly granted credit, which is a critical issue for this...
import numpy as np import matplotlib.pyplot as plt N = 4 svm_linear = (lsvm_cm[0,0],lsvm_cm[0,1],lsvm_cm[1,0],lsvm_cm[1,1]) rbfsvm = (rbfsvm_cm[0,0],rbfsvm_cm[0,1],rbfsvm_cm[1,0],rbfsvm_cm[1,1]) rdf = (rdf_cm[0,0],rdf_cm[0,1],rdf_cm[1,0],rdf_cm[1,1]) ind = np.arange(N) # the x locations for the groups width = 0.25...
src/credit_notebook.ipynb
javierfdr/credit-scoring-analysis
mit
Prepare the Dataset The Cora dataset consists of 2,708 scientific papers classified into one of seven classes. The citation network consists of 5,429 links. Each paper has a binary word vector of size 1,433, indicating the presence of the corresponding word. Download the dataset The dataset has two tab-separated files: c...
zip_file = keras.utils.get_file( fname="cora.tgz", origin="https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz", extract=True, ) data_dir = os.path.join(os.path.dirname(zip_file), "cora")
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
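The tab-separated format described above is simple enough to parse with the standard library alone. A minimal sketch, using made-up paper ids in place of the real file contents:

```python
import csv
import io

# A tiny stand-in for cora.cites: each tab-separated row is
# "<id of cited paper>\t<id of citing paper>" (hypothetical ids).
fake_cites = "35\t1033\n35\t103482\n40\t1106\n"

edges = [
    (int(target), int(source))
    for target, source in csv.reader(io.StringIO(fake_cites), delimiter="\t")
]
print(edges)
```

The notebook itself delegates this parsing to `pd.read_csv(..., sep="\t")` in the next cell, which produces the same (target, source) pairs as a DataFrame.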
Process and visualize the dataset Then we load the citations data into a Pandas DataFrame.
citations = pd.read_csv( os.path.join(data_dir, "cora.cites"), sep="\t", header=None, names=["target", "source"], ) print("Citations shape:", citations.shape)
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Now we display a sample of the citations DataFrame. The target column includes the paper ids cited by the paper ids in the source column.
citations.sample(frac=1).head()
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Now let's load the papers data into a Pandas DataFrame.
column_names = ["paper_id"] + [f"term_{idx}" for idx in range(1433)] + ["subject"] papers = pd.read_csv( os.path.join(data_dir, "cora.content"), sep="\t", header=None, names=column_names, ) print("Papers shape:", papers.shape)
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Now we display a sample of the papers DataFrame. The DataFrame includes the paper_id and the subject columns, as well as 1,433 binary columns, each representing whether a term exists in the paper or not.
print(papers.sample(5).T)
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Let's display the count of the papers in each subject.
print(papers.subject.value_counts())
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
We convert the paper ids and the subjects into zero-based indices.
class_values = sorted(papers["subject"].unique()) class_idx = {name: id for id, name in enumerate(class_values)} paper_idx = {name: idx for idx, name in enumerate(sorted(papers["paper_id"].unique()))} papers["paper_id"] = papers["paper_id"].apply(lambda name: paper_idx[name]) citations["source"] = citations["source"]....
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
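The remapping step above boils down to building lookup dictionaries from sorted unique values. A minimal sketch with hypothetical paper ids and subjects, mirroring the `class_idx` and `paper_idx` construction:

```python
# Hypothetical paper ids and subjects, standing in for the real Cora columns.
paper_ids = [31336, 1061127, 1106406]
subjects = ["Neural_Networks", "Rule_Learning", "Neural_Networks"]

# Map each sorted unique subject / paper id to a zero-based index.
class_values = sorted(set(subjects))
class_idx = {name: idx for idx, name in enumerate(class_values)}
paper_idx = {pid: idx for idx, pid in enumerate(sorted(paper_ids))}

print(class_idx)  # {'Neural_Networks': 0, 'Rule_Learning': 1}
print(paper_idx)  # {31336: 0, 1061127: 1, 1106406: 2}
```

Once these dictionaries exist, applying them to the `paper_id`, `source`, and `target` columns (as the cell above does with `.apply`) rewrites every id as a contiguous index suitable for use as a node index in the graph.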
Now let's visualize the citation graph. Each node in the graph represents a paper, and the color of the node corresponds to its subject. Note that we only show a sample of the papers in the dataset.
plt.figure(figsize=(10, 10)) colors = papers["subject"].tolist() cora_graph = nx.from_pandas_edgelist(citations.sample(n=1500)) subjects = list(papers[papers["paper_id"].isin(list(cora_graph.nodes))]["subject"]) nx.draw_spring(cora_graph, node_size=15, node_color=subjects)
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Split the dataset into stratified train and test sets
train_data, test_data = [], [] for _, group_data in papers.groupby("subject"): # Select around 50% of the dataset for training. random_selection = np.random.rand(len(group_data.index)) <= 0.5 train_data.append(group_data[random_selection]) test_data.append(group_data[~random_selection]) train_data = p...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
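The per-subject split above can be illustrated without pandas: for each class, select roughly half of its members independently, so both halves preserve the class proportions. A self-contained numpy sketch with toy labels (the 60/40 class sizes are made up; the notebook's `rand(...) <= 0.5` mask gives an approximate rather than exact half per class):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy labels: 60 samples of class 0 and 40 of class 1.
labels = np.array([0] * 60 + [1] * 40)

train_mask = np.zeros(len(labels), dtype=bool)
for cls in np.unique(labels):
    idx = np.flatnonzero(labels == cls)
    # Pick exactly half of each class for training, stratifying the split.
    chosen = rng.choice(idx, size=len(idx) // 2, replace=False)
    train_mask[chosen] = True

train_labels, test_labels = labels[train_mask], labels[~train_mask]
print(np.bincount(train_labels), np.bincount(test_labels))  # [30 20] [30 20]
```

Both halves end up with the same 60/40 class ratio as the full set, which is the point of stratifying.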
Implement Train and Evaluate Experiment
hidden_units = [32, 32] learning_rate = 0.01 dropout_rate = 0.5 num_epochs = 300 batch_size = 256
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
This function compiles and trains an input model using the given training data.
def run_experiment(model, x_train, y_train): # Compile the model. model.compile( optimizer=keras.optimizers.Adam(learning_rate), loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[keras.metrics.SparseCategoricalAccuracy(name="acc")], ) # Create an early ...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
This function displays the loss and accuracy curves of the model during training.
def display_learning_curves(history): fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5)) ax1.plot(history.history["loss"]) ax1.plot(history.history["val_loss"]) ax1.legend(["train", "test"], loc="upper right") ax1.set_xlabel("Epochs") ax1.set_ylabel("Loss") ax2.plot(history.history["ac...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Implement Feedforward Network (FFN) Module We will use this module in the baseline and the GNN models.
def create_ffn(hidden_units, dropout_rate, name=None): fnn_layers = [] for units in hidden_units: fnn_layers.append(layers.BatchNormalization()) fnn_layers.append(layers.Dropout(dropout_rate)) fnn_layers.append(layers.Dense(units, activation=tf.nn.gelu)) return keras.Sequential(fn...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0
Build a Baseline Neural Network Model Prepare the data for the baseline model
feature_names = set(papers.columns) - {"paper_id", "subject"} num_features = len(feature_names) num_classes = len(class_idx) # Create train and test features as a numpy array. x_train = train_data[feature_names].to_numpy() x_test = test_data[feature_names].to_numpy() # Create train and test targets as a numpy array. y...
examples/graph/ipynb/gnn_citations.ipynb
keras-team/keras-io
apache-2.0