TL;DR: A JAX transform inside of a hk.transform is likely to transform a side-effecting function, which will result in an UnexpectedTracerError. This page describes two ways to get around this.

## Limitations of Nesting JAX Functions and Haiku Modules

Once a Haiku network has been transformed to a pair of pure functions using hk.transform, it's possible to freely combine these with any JAX transformations like jax.jit, jax.grad, jax.scan and so on. If you want to use JAX transformations inside of a hk.transform, however, you need to be more careful. It's possible, but most functions inside of the hk.transform boundary are still side-effecting and cannot safely be transformed by JAX. This is a common cause of UnexpectedTracerErrors in code using Haiku: these errors are the result of applying a JAX transform to a side-effecting function (for more information on this JAX error, see https://jax.readthedocs.io/en/latest/errors.html#jax.errors.UnexpectedTracerError). An example with jax.eval_shape:
```python
def net(x):  # inside of a hk.transform, this is still side-effecting
    w = hk.get_parameter("w", (2, 2), init=jnp.ones)
    return w @ x

def eval_shape_net(x):
    output_shape = jax.eval_shape(net, x)  # eval_shape on side-effecting function
    return net(x)                          # UnexpectedTracerError!

init, _ = hk.transform(eval_shape_net)
try:
    init(jax.random.PRNGKey(666), jnp.ones((2, 2)))
except jax.errors.UnexpectedTracerError:
    print("UnexpectedTracerError: applied JAX transform to side effecting function")
```
docs/notebooks/transforms.ipynb
deepmind/dm-haiku
apache-2.0
These examples use jax.eval_shape, but could have used any higher-order JAX function (e.g. jax.vmap, jax.scan, jax.while_loop, ...). The error points to hk.get_parameter. This is the operation that makes net a side-effecting function; the side effect in this case is the creation of a parameter, which gets stored into the Haiku state. Similarly, you would get an error using hk.next_rng_key, because it advances the Haiku RNG state and stores a new PRNGKey into the Haiku state. In general, applying a JAX transform to a non-transformed Haiku module will result in an UnexpectedTracerError. You could rewrite the code above to create the parameter outside of the eval_shape transformation, making net a pure function by threading the parameter through explicitly as an argument:
```python
def net(w, x):  # no side effects!
    return w @ x

def eval_shape_net(x):
    w = hk.get_parameter("w", (3, 2), init=jnp.ones)
    output_shape = jax.eval_shape(net, w, x)  # net is now side-effect free
    return output_shape, net(w, x)

key = jax.random.PRNGKey(777)
x = jnp.ones((2, 3))
init, apply = hk.transform(eval_shape_net)
params = init(key, x)
apply(params, key, x)
```
However, that's not always possible. Consider the following code, which calls a Haiku module (hk.nets.MLP) that we don't own. This module will internally call get_parameter.
```python
def eval_shape_net(x):
    net = hk.nets.MLP([300, 100])
    output_shape = jax.eval_shape(net, x)
    return output_shape, net(x)

init, _ = hk.transform(eval_shape_net)
try:
    init(jax.random.PRNGKey(666), jnp.ones((2, 2)))
except jax.errors.UnexpectedTracerError:
    print("UnexpectedTracerError: applied JAX transform to side effecting function")
```
## Using hk.lift

We want a way to get access to our implicit Haiku state and obtain a functionally pure version of hk.nets.MLP. The usual way to achieve this is hk.transform, so all we need is a way to nest an inner hk.transform inside an outer hk.transform! We'll create another pair of init and apply functions through hk.transform, and these can then be safely combined with any higher-order JAX function. However, we need a way to register this nested hk.transform state into the outer scope. We can use hk.lift for this: wrapping our inner init function with hk.lift will register our inner params into the outer parameter scope.
```python
def eval_shape_net(x):
    net = hk.nets.MLP([300, 100])    # still side-effecting
    init, apply = hk.transform(net)  # nested transform
    # Register parameters in the outer module scope with name "inner"
    params = hk.lift(init, name="inner")(hk.next_rng_key(), x)
    # apply is a functionally pure function and can be transformed!
    output_shape = jax.eval_shape(apply, params, hk.next_rng_key(), x)
    out = net(x)
    return out, output_shape

init, apply = hk.transform(eval_shape_net)
params = init(jax.random.PRNGKey(777), jnp.ones((100, 100)))
apply(params, jax.random.PRNGKey(777), jnp.ones((100, 100)))
jax.tree_map(lambda x: x.shape, params)
```
## Using Haiku versions of JAX transforms

Haiku also provides wrapped versions of some JAX functions for convenience, for example hk.grad, hk.vmap, and so on. See https://dm-haiku.readthedocs.io/en/latest/api.html#jax-fundamentals for a full list of available functions. These wrappers apply the JAX function to a functionally pure version of the Haiku function, doing the explicit state threading for you. They don't introduce an extra name-scoping level like lift does.
```python
def eval_shape_net(x):
    net = hk.nets.MLP([300, 100])         # still side-effecting
    output_shape = hk.eval_shape(net, x)  # hk.eval_shape threads through the Haiku state for you
    out = net(x)
    return out, output_shape

init, apply = hk.transform(eval_shape_net)
params = init(jax.random.PRNGKey(777), jnp.ones((100, 100)))
out = apply(params, jax.random.PRNGKey(777), jnp.ones((100, 100)))
```
## Compute residuals

The GP hyperparameters are fit to the residuals from the ckwn white-noise analysis.
```python
lpf = LPFTM()
pv0 = pd.read_hdf(RFILE_EXT, 'ckwn/fc').median().values
fluxes_m = lpf.compute_transit(pv0)
residuals = [fo - fm for fo, fm in zip(lpf.fluxes, fluxes_m)]
gps = [GPTime(time, res) for time, res in zip(lpf.times, residuals)]
hps = []
```
notebooks/01_broadband_analysis/E3_gp_hyperparameter_optimization_1.ipynb
hpparvi/Parviainen-2017-WASP-80b
mit
## Plot the light curves
```python
phases = list(map(lambda t: fold(t, P, TC, 0.5) - 0.5, lpf.times))

fig, axs = subplots(4, 3, figsize=(14, 14), sharey=True, sharex=True)
for iax, ilc in enumerate(lpf.lcorder):
    a = axs.flat[iax]
    a.plot(phases[ilc], lpf.fluxes[ilc], '.', alpha=0.5)
    a.plot(phases[ilc], fluxes_m[ilc], 'k')
    a.plot(phases[ilc], lpf.fluxes[ilc] - fluxes_m[ilc] + 0.95, '.', alpha=0.5)
    a.text(0.5, 0.95, lpf.passbands[ilc], ha='center', va='top', size=12, transform=a.transAxes)
setp(axs, ylim=(0.94, 1.01), xlim=(-0.035, 0.035))
fig.tight_layout()
axs.flat[-1].set_visible(False)
```
## Fit the Hyperparameters and plot the GP mean with the data
```python
hps = []
for gp in tqdm(gps, desc='Optimising GP hyperparameters'):
    gp.fit()
    hps.append(gp.hp)

fig, axs = subplots(4, 3, figsize=(14, 10), sharey=True, sharex=True)
for iax, ilc in enumerate(lpf.lcorder):
    axs.flat[iax].plot(phases[ilc], gps[ilc].flux, '.', alpha=0.5)
    gps[ilc].compute(hps[ilc])
    pr = gps[ilc].predict()
    axs.flat[iax].plot(phases[ilc], pr, 'k')
setp(axs, ylim=(-0.015, .015), xlim=(-0.04, 0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)

fig, axs = subplots(4, 3, figsize=(14, 10), sharey=True, sharex=True)
for iax, ilc in enumerate(lpf.lcorder):
    axs.flat[iax].plot(phases[ilc], lpf.fluxes[ilc], '.', alpha=0.5)
    gps[ilc].compute(hps[ilc])
    pr = gps[ilc].predict()
    axs.flat[iax].plot(phases[ilc], fluxes_m[ilc] + pr, 'k')
setp(axs, ylim=(0.955, 1.015), xlim=(-0.04, 0.04))
fig.tight_layout()
axs.flat[-1].set_visible(False)
```
## Create a Pandas dataframe and save the hyperparameters
```python
with pd.HDFStore(DFILE_EXT) as f:
    ntr = [k[3:] for k in f.keys() if 'lc/triaud' in k]
    nma = [k[3:] for k in f.keys() if 'lc/mancini' in k]

df = pd.DataFrame(hps, columns=gp.names, index=lpf.passbands)
df['lc_name'] = ntr + nma
df

# DataFrame.ix is deprecated in pandas; use positional indexing with iloc instead
df.iloc[:3].to_hdf(RFILE_EXT, 'gphp/triaud2013')
df.iloc[3:].to_hdf(RFILE_EXT, 'gphp/mancini2014')
```
## Using metacharacters

We will use <sec> to denote an arbitrary sequence, which may be composed of literal characters or metacharacters. Regular expressions are divided into a BRE (Basic Regular Expression) set and an ERE (Extended Regular Expression) set according to the defined metacharacter support. The former is supported per the POSIX standard and is illustrated in the following table.

| Metacharacter | Description |
|:---:|:---:|
| . | Matches any single character |
| ^<sec> | The sequence that follows must appear at the beginning of the line |
| <sec>$ | The preceding sequence must appear at the end of the line |
| [<sec>] | Matches if any one of the characters listed inside the brackets coincides |
| [^<sec>] | The characters listed after the ^ inside the brackets must be absent |
| [<chr>-<chr>] | With both characters belonging to a character subset (class), represents every character in the range between them |

The POSIX standard also defines a set of classes to simplify ranges when an entire subset is used; they are extremely convenient for expressions that need ranges differing from dictionary order (mainly for encoding reasons, depending on each application's regular-expression support). Since these classes correspond to ranges, they must in turn be placed inside another pair of brackets.
```bash
%%bash
echo ":: non-meta <sec> ::"
grep 'zip' test_regexp
echo ":: meta .<sec> ::"
grep '.zip' test_regexp
echo ":: meta ^<sec> ::"
grep '^zip' test_regexp
echo ":: meta <sec>$ ::"
grep 'zip$' test_regexp
echo ":: meta ^<sec>$ ::"
grep '^zip$' test_regexp
echo ":: meta [<sec>] ::"
grep '[bg]zip' test_regexp
echo ":: meta [^<sec>] ::"
grep '[^bg]zip' test_regexp
echo ":: meta [<chr>-<chr>] ::"
grep '^[A-Z][a-z]' test_regexp
echo ":: meta [[:<class>:]] ::"
grep '^[[:upper:]][[:lower:]]' test_regexp
```
The ERE metacharacters behave as follows.

| Metacharacter | Description |
|:---:|:---:|
| <code>sec&#124;sec</code> | Alternation: matches either of the patterns |
| &lt;sec&gt;? | Preceding element is optional (zero or one occurrence) |
| &lt;sec&gt;* | Preceding element may repeat (zero or more occurrences) |
| &lt;sec&gt;+ | Preceding element repeats (one or more occurrences) |
| &lt;sec&gt;{n,m} | Matches the preceding element at least $n$ and at most $m$ times. If only one bound is given together with the comma, it acts as just a minimum or just a maximum, as appropriate; a single number with no comma requires exactly that many matches |

For many applications ERE support must be enabled explicitly; for the utility we have been using as an example, this means grep -E, where the -E flag requests extended support. Some utilities include the support directly; in that case the alternative is egrep.
```bash
%%bash
echo ":: meta <sec>|<sec> ::"
grep -E '^2|.zip$' test_regexp
echo ":: meta <sec>? ::"
grep -E '^[a-z]?zip.' test_regexp
echo ":: meta <sec>* ::"
grep -E '^[a-z]*zip.' test_regexp
echo ":: meta <sec>+ ::"
grep -E '^[a-z]+zip.' test_regexp
echo ":: meta <sec>{n,m} ::"
grep -E '^[a-z]{1,3}zip.' test_regexp
```
Other utilities require an explicit flag to enable regular expressions even for POSIX support, e.g. locate --regex '&lt;regex&gt;'. In less and vim, once the file is displayed, regular-expression search is started with /.

## Text manipulation

### File concatenation

To concatenate files on Linux we can use the cat utility. It joins files and redirects their union to standard output.
```bash
%%bash
printf "hola\nesto\nes\nuna\nprueba.\n" > prueba1
a=1
echo $a > prueba2
while [ $a -lt 5 ]; do
    let a=a+1
    echo $a >> prueba2
done
cat prueba1 prueba2 > prueba3
more prueba3
```
### Sorting

Lines of files can be sorted by their ASCII values with the sort command.
```bash
!sort prueba3
```
### Uniqueness

Repeated elements can be removed from a set with Linux utilities. To do so, first sort the list of elements (sort) and then apply uniq.
```bash
%%bash
cat prueba3 prueba1 prueba2 prueba3 > repetido
cat repetido
echo ":: uniq sin sort ::"
uniq repetido
echo ":: uniq con sort ::"
sort repetido | uniq
```
### Cutting

To extract specific sections from the lines of a file we use the cut command.
```bash
%%bash
ls -oh
ls -oh | grep -Ev '[[:alpha:]]+ [[:alnum:]]+\.ipynb|total' | grep -Ev '[a-z]{3} ' > cortar
echo ":: archivo cortar seleccionando lineas ::"
cat cortar
echo ":: Horas enteras de los archivos ::"
cut -d " " -f 7 cortar | cut -c 1-2 > horas
cut -d " " -f 7 cortar | cut -c 4-5 > minutos
echo ":: Horas ::"
cat horas
echo ":: Minutos ::"
cat minutos
```
### Pasting

To paste columns together we use the paste command.
```bash
%%bash
paste horas minutos
```
### File comparison

Files can be compared with the comm and diff commands. The former requires the files to be sorted. Using the diff of two files, an update of the base file can be generated with patch.
```bash
%%bash
printf "a\nb\nf\nh" > ordenado1
printf "a\nc\nd\ng\nh" > ordenado2
echo ":: comm ::"
comm ordenado1 ordenado2
echo ":: diff ::"
diff ordenado1 ordenado2
echo ":: patch ::"
diff -Naur ordenado1 ordenado2 > diferencia
cat diferencia
patch < diferencia
cat ordenado1
```
### Editing

"Translations" can be performed in the shell with the tr command. It substitutes characters one-for-one based on character sets (not full regular expressions).
```bash
%%bash
# The character classes are quoted to avoid shell glob expansion
echo "EsCrItUrA TeRrIbLe" | tr '[:upper:]' '[:lower:]'
```
A more powerful form of editing is possible with sed.
```bash
%%bash
echo "escritura terrible" | sed 'y/esib/3516/'
echo "escritura terrible" | sed 's/terrible/hermosa/'
```
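As with grep, sed accepts extended regular expressions via the -E flag, which avoids having to backslash-escape the ERE metacharacters:

```shell
# -E enables ERE support in sed; terr+ible matches "terrible"
echo "escritura terrible" | sed -E 's/terr+ible/hermosa/'
# → escritura hermosa
```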
(1) The histogram and normal probability plot show that the distribution of body temperatures approximately follows a normal distribution.
```python
import numpy as np
import math
import pylab
import scipy.stats as stats
import matplotlib.pyplot as plt

plt.hist(df.temperature)
plt.show()

stats.probplot(df.temperature, dist="norm", plot=pylab)
pylab.show()
```
JC_inferential_statistics_ex1.ipynb
jo-c-2017/DS_Projects
apache-2.0
(2) The sample size is 130, which is large enough (>30) for the assumption of CLT. In addition, 130 people is <10% of the human population, so we can assume that the observations are independent.
```python
sample_size = df.temperature.count()
print('sample size is ' + str(sample_size))
```
(3) We can use a one-sample z-test (the sample size is much larger than 30): $H_0: T = 98.6$, $H_A: T \neq 98.6$. The p-value is 4.35e-08, which is much smaller than 0.05. This indicates that the true mean of human body temperature is not 98.6. When using a t-test instead, the p-value is 2.19e-07, which is larger than the p-value obtained from the z-test due to the thicker tails of the t-distribution. This p-value is still much smaller than 0.05, again indicating that the true mean of human body temperature is not 98.6.
```python
mean = np.mean(df.temperature)
se = np.std(df.temperature) / math.sqrt(sample_size)
z = (98.6 - mean) / se
p_z = (1 - stats.norm.cdf(z)) * 2
print('p value for z test is ' + str(p_z))

dgf = sample_size - 1
p_t = 2 * (1 - stats.t.cdf(z, dgf))
print('p value for t test is ' + str(p_t))
```
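As a cross-check (not part of the original notebook), scipy exposes the one-sample t-test directly as ttest_1samp. Since the temperature dataframe is not reproduced here, this sketch uses a synthetic sample with roughly the dataset's mean and spread:

```python
import numpy as np
import scipy.stats as stats

# Synthetic stand-in for df.temperature: n=130, mean ~98.25, sd ~0.73
rng = np.random.default_rng(0)
sample = rng.normal(loc=98.25, scale=0.73, size=130)

# One-sample t-test against the hypothesized mean of 98.6
t_stat, p_t = stats.ttest_1samp(sample, popmean=98.6)
print(f't = {t_stat:.3f}, p = {p_t:.3g}')
```

With a true mean this far below 98.6 the test rejects the null, matching the manual calculation above in direction.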
(4) We would consider someone's temperature to be "abnormal" if it does not fall within the 95% confidence interval [98.12, 98.37].
```python
ub = mean + 1.96 * se
lb = mean - 1.96 * se
print('Mean: ' + str(mean))
print('95 % Confidence Interval: [' + str(lb) + ', ' + str(ub) + ']')
```
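scipy can compute the same normal-based interval directly with norm.interval; the mean and standard error below are illustrative placeholders for the values computed in the cells above:

```python
import scipy.stats as stats

mean, se = 98.25, 0.064  # illustrative values; substitute the ones computed above
lb, ub = stats.norm.interval(0.95, loc=mean, scale=se)
print(f'95% CI: [{lb:.2f}, {ub:.2f}]')
```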
(5) We can use a two-sample z-test: $H_0: T_M = T_F$, $H_A: T_M \neq T_F$. The p-value is 0.02, which is smaller than 0.05. This indicates a significant difference in normal temperature between males and females.
```python
male_temp = df[df.gender == 'M'].temperature
female_temp = df[df.gender == 'F'].temperature

mean_diff = abs(np.mean(male_temp) - np.mean(female_temp))
se = math.sqrt(np.var(male_temp) / male_temp.count() + np.var(female_temp) / female_temp.count())
z = mean_diff / se
p_z = (1 - stats.norm.cdf(z)) * 2

print('mean for male is ' + str(np.mean(male_temp)))
print('mean for female is ' + str(np.mean(female_temp)))
print('p value for z test is ' + str(p_z))
```
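The same comparison can be sanity-checked with scipy's built-in Welch's t-test (ttest_ind with equal_var=False), which does not assume equal variances. The two samples here are synthetic stand-ins for male_temp and female_temp, since the dataframe is not reproduced in this page:

```python
import numpy as np
import scipy.stats as stats

# Synthetic stand-ins: 65 males and 65 females with slightly different means
rng = np.random.default_rng(1)
male = rng.normal(loc=98.10, scale=0.70, size=65)
female = rng.normal(loc=98.39, scale=0.74, size=65)

# Welch's two-sample t-test (unequal variances)
t_stat, p_val = stats.ttest_ind(male, female, equal_var=False)
print(f't = {t_stat:.3f}, p = {p_val:.3g}')
```

For large samples the Welch t-test and the manual z-test above give nearly identical p-values.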
## Construct Two Point Sets
```python
# Generate two circles with a small offset
def make_circles(dimension: int = 2, offset: list = None):
    PointSetType = itk.PointSet[itk.F, dimension]
    RADIUS = 100

    if not offset or len(offset) != dimension:
        offset = [2.0] * dimension

    fixed_points = PointSetType.New()
    moving_points = PointSetType.New()
    fixed_points.Initialize()
    moving_points.Initialize()

    count = 0
    step = 0.1
    for count in range(0, int(2 * pi / step) + 1):
        theta = count * step
        fixed_point = list()
        fixed_point.append(RADIUS * cos(theta))
        for dim in range(1, dimension):
            fixed_point.append(RADIUS * sin(theta))
        fixed_points.SetPoint(count, fixed_point)

        moving_point = [fixed_point[dim] + offset[dim] for dim in range(0, dimension)]
        moving_points.SetPoint(count, moving_point)

    return fixed_points, moving_points

POINT_SET_OFFSET = [15.0, 15.0]
fixed_set, moving_set = make_circles(offset=POINT_SET_OFFSET)

# Visualize point sets with matplotlib
fig = plt.figure()
ax = plt.axes()
n_points = fixed_set.GetNumberOfPoints()
ax.scatter([fixed_set.GetPoint(i)[0] for i in range(0, n_points)],
           [fixed_set.GetPoint(i)[1] for i in range(0, n_points)])
ax.scatter([moving_set.GetPoint(i)[0] for i in range(0, n_points)],
           [moving_set.GetPoint(i)[1] for i in range(0, n_points)])
```
src/Registration/Metricsv4/RegisterTwoPointSets/RegisterTwoPointSets.ipynb
InsightSoftwareConsortium/ITKExamples
apache-2.0
## Run Gradient Descent Optimization

We will quantify the point set offset with JensenHavrdaCharvatTsallisPointSetToPointSetMetricv4 and minimize the metric value over 10 gradient descent iterations.
```python
ExhaustiveOptimizerType = itk.ExhaustiveOptimizerv4[itk.D]
dim = 2

# Define translation parameters to update iteratively
TransformType = itk.TranslationTransform[itk.D, dim]
transform = TransformType.New()
transform.SetIdentity()

PointSetType = type(fixed_set)

# Define a metric to reflect the difference between point sets
PointSetMetricType = itk.JensenHavrdaCharvatTsallisPointSetToPointSetMetricv4[PointSetType]
metric = PointSetMetricType.New(
    FixedPointSet=fixed_set,
    MovingPointSet=moving_set,
    MovingTransform=transform,
    PointSetSigma=5.0,
    KernelSigma=10.0,
    UseAnisotropicCovariances=False,
    CovarianceKNeighborhood=5,
    EvaluationKNeighborhood=10,
    Alpha=1.1)
metric.Initialize()

# Define an estimator to help determine step sizes along each transform parameter
ShiftScalesType = itk.RegistrationParameterScalesFromPhysicalShift[PointSetMetricType]
shift_scale_estimator = ShiftScalesType.New(
    Metric=metric,
    VirtualDomainPointSet=metric.GetVirtualTransformedPointSet(),
    TransformForward=True)

max_iterations = 10

# Define the gradient descent optimizer
OptimizerType = itk.GradientDescentOptimizerv4Template[itk.D]
optimizer = OptimizerType.New(
    Metric=metric,
    NumberOfIterations=max_iterations,
    ScalesEstimator=shift_scale_estimator,
    MaximumStepSizeInPhysicalUnits=8.0,
    MinimumConvergenceValue=-1,
    DoEstimateLearningRateAtEachIteration=False,
    DoEstimateLearningRateOnce=True,
    ReturnBestParametersAndValue=True)

iteration_data = dict()

# Track gradient descent iterations with observers
def print_iteration():
    print(f'It: {optimizer.GetCurrentIteration()}'
          f' metric value: {optimizer.GetCurrentMetricValue():.6f} '
          #f' transform position: {list(optimizer.GetCurrentPosition())}'
          f' learning rate: {optimizer.GetLearningRate()}')

def log_iteration():
    iteration_data[optimizer.GetCurrentIteration() + 1] = list(optimizer.GetCurrentPosition())

optimizer.AddObserver(itk.AnyEvent(), print_iteration)
optimizer.AddObserver(itk.IterationEvent(), log_iteration)

# Set first value to default transform position
iteration_data[0] = list(optimizer.GetCurrentPosition())

# Run optimization and print out results
optimizer.StartOptimization()

print(f'Number of iterations: {optimizer.GetCurrentIteration() - 1}')
print(f'Moving-source final value: {optimizer.GetCurrentMetricValue()}')
print(f'Moving-source final position: {list(optimizer.GetCurrentPosition())}')
print(f'Optimizer scales: {list(optimizer.GetScales())}')
print(f'Optimizer learning rate: {optimizer.GetLearningRate()}')
print(f'Stop reason: {optimizer.GetStopConditionDescription()}')
```
## Resample Moving Point Set
```python
moving_inverse = metric.GetMovingTransform().GetInverseTransform()
fixed_inverse = metric.GetFixedTransform().GetInverseTransform()

transformed_fixed_set = PointSetType.New()
transformed_moving_set = PointSetType.New()
for n in range(0, metric.GetNumberOfComponents()):
    transformed_moving_point = moving_inverse.TransformPoint(moving_set.GetPoint(n))
    transformed_moving_set.SetPoint(n, transformed_moving_point)
    transformed_fixed_point = fixed_inverse.TransformPoint(fixed_set.GetPoint(n))
    transformed_fixed_set.SetPoint(n, transformed_fixed_point)

# Compare fixed point set with resampled moving point set to see alignment
fig = plt.figure()
ax = plt.axes()
n_points = fixed_set.GetNumberOfPoints()
ax.scatter([fixed_set.GetPoint(i)[0] for i in range(0, n_points)],
           [fixed_set.GetPoint(i)[1] for i in range(0, n_points)])
ax.scatter([transformed_moving_set.GetPoint(i)[0] for i in range(0, n_points)],
           [transformed_moving_set.GetPoint(i)[1] for i in range(0, n_points)])
```
## Visualize Gradient Descent

We can use the ITK ExhaustiveOptimizerv4 class to view how the optimizer moved along the surface defined by the transform parameters and metric.
```python
# Set up the new optimizer
# Create a new transform and metric for analysis
transform = TransformType.New()
transform.SetIdentity()

metric = PointSetMetricType.New(
    FixedPointSet=fixed_set,
    MovingPointSet=moving_set,
    MovingTransform=transform,
    PointSetSigma=5,
    KernelSigma=10.0,
    UseAnisotropicCovariances=False,
    CovarianceKNeighborhood=5,
    EvaluationKNeighborhood=10,
    Alpha=1.1)
metric.Initialize()

# Create a new observer to map out the parameter surface
optimizer.RemoveAllObservers()
optimizer = ExhaustiveOptimizerType.New(Metric=metric)

# Use observers to collect points on the surface
param_space = dict()

def log_exhaustive_iteration():
    param_space[tuple(optimizer.GetCurrentPosition())] = optimizer.GetCurrentValue()

optimizer.AddObserver(itk.IterationEvent(), log_exhaustive_iteration)

# Collect a moderate number of steps along each transform parameter
step_count = 25
optimizer.SetNumberOfSteps([step_count, step_count])

# Step a reasonable distance along each transform parameter
scales = optimizer.GetScales()
scales.SetSize(2)
scale_size = 1.0
scales.SetElement(0, scale_size)
scales.SetElement(1, scale_size)
optimizer.SetScales(scales)

optimizer.StartOptimization()

print(f'MinimumMetricValue: {optimizer.GetMinimumMetricValue():.4f}\t'
      f'MaximumMetricValue: {optimizer.GetMaximumMetricValue():.4f}\n'
      f'MinimumMetricValuePosition: {list(optimizer.GetMinimumMetricValuePosition())}\t'
      f'MaximumMetricValuePosition: {list(optimizer.GetMaximumMetricValuePosition())}\n'
      f'StopConditionDescription: {optimizer.GetStopConditionDescription()}\t')

# Reformat gradient descent data to overlay on the plot
descent_x_vals = [iteration_data[i][0] for i in range(0, len(iteration_data))]
descent_y_vals = [iteration_data[i][1] for i in range(0, len(iteration_data))]

# Plot the surface, extrema, and gradient descent data in a matplotlib scatter plot
fig = plt.figure()
ax = plt.axes()
ax.scatter([x for (x, y) in param_space.keys()],
           [y for (x, y) in param_space.keys()],
           c=list(param_space.values()), cmap='Greens', zorder=1)
ax.plot(optimizer.GetMinimumMetricValuePosition()[0],
        optimizer.GetMinimumMetricValuePosition()[1], 'kv')
ax.plot(optimizer.GetMaximumMetricValuePosition()[0],
        optimizer.GetMaximumMetricValuePosition()[1], 'w^')
for i in range(0, len(iteration_data)):
    ax.plot(descent_x_vals[i:i + 2], descent_y_vals[i:i + 2], 'rx-')
ax.plot(descent_x_vals[0], descent_y_vals[0], 'ro')
ax.plot(descent_x_vals[len(iteration_data) - 1],
        descent_y_vals[len(iteration_data) - 1], 'bo')
```
We can also view and export the surface as an image using itkwidgets.
```python
x_vals = list(set(x for (x, y) in param_space.keys()))
y_vals = list(set(y for (x, y) in param_space.keys()))
x_vals.sort()
y_vals.sort(reverse=True)

array = np.array([[param_space[(x, y)] for x in x_vals] for y in y_vals])
image_view = itk.GetImageViewFromArray(array)
view(image_view)
```
## Hyperparameter Search

In order to find adequate results with different transforms, metrics, and optimizers, it is often useful to compare results across variations in hyperparameters. In the case of this example it was necessary to evaluate performance for different values of the JensenHavrdaCharvatTsallisPointSetToPointSetMetricv4 PointSetSigma parameter and the GradientDescentOptimizerv4 DoEstimateLearningRate parameters. Gradient descent iteration data was plotted on 2D scatter plots to compare and select the hyperparameter combination yielding the best performance.
```python
# Index values for gradient descent logging
FINAL_OPT_INDEX = 0
DESCENT_DATA_INDEX = 1

hyper_data = dict()
final_optimizers = dict()

# sigma must be sufficiently large to avoid negative entropy results
point_set_sigmas = (1.0, 2.5, 5.0, 10.0, 20.0, 50.0)
# Compare performance with repeated or one-time learning rate estimation
estimate_rates = [(False, False), (False, True), (True, False)]

# Run gradient descent optimization for each combination of hyperparameters
for trial_values in itertools.product(point_set_sigmas, estimate_rates):
    hyper_data[trial_values] = dict()
    (point_set_sigma, est_rate) = trial_values

    fixed_set, moving_set = make_circles(offset=POINT_SET_OFFSET)

    transform = TransformType.New()
    transform.SetIdentity()

    metric = PointSetMetricType.New(
        FixedPointSet=fixed_set,
        MovingPointSet=moving_set,
        PointSetSigma=point_set_sigma,
        KernelSigma=10.0,
        UseAnisotropicCovariances=False,
        CovarianceKNeighborhood=5,
        EvaluationKNeighborhood=10,
        MovingTransform=transform,
        Alpha=1.1)

    shift_scale_estimator = ShiftScalesType.New(
        Metric=metric,
        VirtualDomainPointSet=metric.GetVirtualTransformedPointSet())

    metric.Initialize()

    optimizer = OptimizerType.New(
        Metric=metric,
        NumberOfIterations=100,
        MaximumStepSizeInPhysicalUnits=3.0,
        MinimumConvergenceValue=-1,
        DoEstimateLearningRateOnce=est_rate[0],
        DoEstimateLearningRateAtEachIteration=est_rate[1],
        LearningRate=1e6,  # Ignored if either est_rate argument is True
        ReturnBestParametersAndValue=False)
    optimizer.SetScalesEstimator(shift_scale_estimator)

    def log_hyper_iteration_data():
        hyper_data[trial_values][optimizer.GetCurrentIteration()] = \
            round(optimizer.GetCurrentMetricValue(), 8)

    optimizer.AddObserver(itk.IterationEvent(), log_hyper_iteration_data)

    optimizer.StartOptimization()
    final_optimizers[trial_values] = optimizer

# Print results for each set of hyperparameters
print('PS_sigma\test once/each:\tfinal index\tfinal metric')
for trial_values in itertools.product(point_set_sigmas, estimate_rates):
    print(f'{trial_values[0]}\t\t{trial_values[1]}:\t\t'
          f'{final_optimizers[trial_values].GetCurrentIteration()}\t'
          f'{final_optimizers[trial_values].GetCurrentMetricValue():10.8f}')
```
We can use matplotlib subplots and bar graphs to compare gradient descent performance and final metric values for each set of hyperparameters. In this example we see that estimating the learning rate once typically gives the best performance over time, while estimating the learning rate at each iteration can prevent gradient descent from converging efficiently. The hyperparameter set giving the best and most consistent metric value is that with PointSetSigma=5.0 and DoEstimateLearningRateOnce=True, which are the values we have used in this notebook.
```python
# Visualize metric over gradient descent iterations as matplotlib subplots.
f, axn = plt.subplots(len(point_set_sigmas), len(estimate_rates), sharex=True)
for (i, j) in [(i, j) for i in range(0, len(point_set_sigmas))
               for j in range(0, len(estimate_rates))]:
    axn[i, j].scatter(x=list(hyper_data[point_set_sigmas[i], estimate_rates[j]].keys())[1:],
                      y=list(hyper_data[point_set_sigmas[i], estimate_rates[j]].values())[1:])
    axn[i, j].set_title(f'sigma={point_set_sigmas[i]},est={estimate_rates[j]}')
    axn[i, j].set_ylim(-0.08, 0)
plt.subplots_adjust(top=5, bottom=1, right=5)
plt.show()

# Compare final metric magnitudes
fig = plt.figure()
ax = fig.gca()
labels = [f'{round(sig, 0)}{"T" if est0 else "F"}{"T" if est1 else "F"}'
          for (sig, (est0, est1)) in itertools.product(point_set_sigmas, estimate_rates)]
vals = [final_optimizers[trial_values].GetCurrentMetricValue()
        for trial_values in itertools.product(point_set_sigmas, estimate_rates)]
ax.bar(labels, vals)
plt.show()
```
Quiz question: Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the minimum gain in error reduction parameter is set to 0.2, what should the tree learning algorithm do next?

## Grabbing binary decision tree helper functions from past assignment

Recall from the previous assignment that we wrote a function intermediate_node_num_mistakes that calculates the number of misclassified examples when predicting the majority class. This is used to help determine which feature is best to split on at a given node of the tree. Please copy and paste your code for intermediate_node_num_mistakes here.
```python
def intermediate_node_num_mistakes(labels_in_node):
    # Corner case: If labels_in_node is empty, return 0
    if len(labels_in_node) == 0:
        return 0

    # Count the number of 1's (safe loans)
    ## YOUR CODE HERE
    num_positive = sum([x == 1 for x in labels_in_node])

    # Count the number of -1's (risky loans)
    ## YOUR CODE HERE
    num_negative = sum([x == -1 for x in labels_in_node])

    # Return the number of mistakes that the majority classifier makes.
    ## YOUR CODE HERE
    return min(num_positive, num_negative)
```
ml-classification/module-6-decision-tree-practical-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
We then wrote a function best_splitting_feature that finds the best feature to split on given the data and a list of features to consider. Please copy and paste your best_splitting_feature code here.
def best_splitting_feature(data, features, target): best_feature = None # Keep track of the best feature best_error = 10 # Keep track of the best error so far # Note: Since error is always <= 1, we should intialize it with something larger than 1. # Convert to float to make sure error gets computed correctly. num_data_points = float(len(data)) # Loop through each feature to consider splitting on that feature for feature in features: # The left split will have all data points where the feature value is 0 left_split = data[data[feature] == 0] # The right split will have all data points where the feature value is 1 ## YOUR CODE HERE right_split = data[data[feature] == 1] # Calculate the number of misclassified examples in the left split. # Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes) # YOUR CODE HERE left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans']) # Calculate the number of misclassified examples in the right split. ## YOUR CODE HERE right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans']) # Compute the classification error of this split. # Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points) ## YOUR CODE HERE error = float(left_mistakes + right_mistakes) / len(data) # If this is the best error we have found so far, store the feature as best_feature and the error as best_error ## YOUR CODE HERE if error < best_error: best_feature, best_error = feature, error return best_feature # Return the best feature we found
ml-classification/module-6-decision-tree-practical-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Incorporating new early stopping conditions in binary decision tree implementation Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, max_depth, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment. Implementing early stopping condition 2: minimum node size: Step 1: Use the function reached_minimum_node_size that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the min_node_size argument. Step 2: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions. Implementing early stopping condition 3: minimum error reduction: Note: This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction. Step 1: Calculate the classification error before splitting. Recall that classification error is defined as: $$ \text{classification error} = \frac{\text{# mistakes}}{\text{# total examples}} $$ * Step 2: Calculate the classification error after splitting. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples. * Step 3: Use the function error_reduction that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (min_error_reduction). 
Don't forget to use that argument. * Step 4: Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions. Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
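The steps above rely on the helper functions reached_minimum_node_size and error_reduction from earlier in the assignment, which are not shown in this excerpt. A hedged sketch of plausible implementations (your earlier versions may differ in detail):

```python
def reached_minimum_node_size(data, min_node_size):
    # Early stopping condition 2: the node has too few data points to split.
    return len(data) <= min_node_size

def error_reduction(error_before_split, error_after_split):
    # Early stopping condition 3: how much the candidate split lowers the
    # classification error.
    return error_before_split - error_after_split
```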
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10, min_node_size=1, min_error_reduction=0.0): remaining_features = features[:] # Make a copy of the features. target_values = data[target] print "--------------------------------------------------------------------" print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values)) # Stopping condition 1: All nodes are of the same type. if intermediate_node_num_mistakes(target_values) == 0: print "Stopping condition 1 reached. All data points have the same target value." return create_leaf(target_values) # Stopping condition 2: No more features to split on. if remaining_features == []: print "Stopping condition 2 reached. No remaining features." return create_leaf(target_values) # Early stopping condition 1: Reached max depth limit. if current_depth >= max_depth: print "Early stopping condition 1 reached. Reached maximum depth." return create_leaf(target_values) # Early stopping condition 2: Reached the minimum node size. # If the number of data points is less than or equal to the minimum size, return a leaf. if reached_minimum_node_size(data, min_node_size): ## YOUR CODE HERE print "Early stopping condition 2 reached. Reached minimum node size." return create_leaf(target_values) ## YOUR CODE HERE # Find the best splitting feature splitting_feature = best_splitting_feature(data, features, target) # Split on the best feature that we found. 
left_split = data[data[splitting_feature] == 0] right_split = data[data[splitting_feature] == 1] # Early stopping condition 3: Minimum error reduction # Calculate the error before splitting (number of misclassified examples # divided by the total number of examples) error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data)) # Calculate the error after splitting (number of misclassified examples # in both groups divided by the total number of examples) left_mistakes = intermediate_node_num_mistakes(left_split['safe_loans']) ## YOUR CODE HERE right_mistakes = intermediate_node_num_mistakes(right_split['safe_loans']) ## YOUR CODE HERE error_after_split = (left_mistakes + right_mistakes) / float(len(data)) # If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf. if error_reduction(error_before_split, error_after_split) <= min_error_reduction: ## YOUR CODE HERE print "Early stopping condition 3 reached. Minimum error reduction." return create_leaf(target_values) ## YOUR CODE HERE remaining_features.remove(splitting_feature) print "Split on feature %s. (%s, %s)" % (\ splitting_feature, len(left_split), len(right_split)) # Repeat (recurse) on left and right subtrees left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth, min_node_size, min_error_reduction) ## YOUR CODE HERE right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth, min_node_size, min_error_reduction) return {'is_leaf' : False, 'prediction' : None, 'splitting_feature': splitting_feature, 'left' : left_tree, 'right' : right_tree}
ml-classification/module-6-decision-tree-practical-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for validation_set[0] shorter, longer, or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3? Quiz question: For my_decision_tree_new trained with max_depth = 6, min_node_size = 100, min_error_reduction=0.0, is the prediction path for any point always shorter, always longer, always the same, shorter or the same, or longer or the same as for my_decision_tree_old that ignored the early stopping conditions 2 and 3? Quiz question: For a tree trained on any dataset using max_depth = 6, min_node_size = 100, min_error_reduction=0.0, what is the maximum number of splits encountered while making a single prediction? Evaluating the model Now let us evaluate the model that we have trained. You implemented this evaluation in the function evaluate_classification_error from the previous assignment. Please copy and paste your evaluate_classification_error code here.
def evaluate_classification_error(tree, data): # Apply the classify(tree, x) to each row in your data prediction = data.apply(lambda x: classify(tree, x)) # Once you've made the predictions, calculate the classification error and return it ## YOUR CODE HERE return (prediction != data['safe_loans']).sum() / float(len(data))
ml-classification/module-6-decision-tree-practical-assignment-solution.ipynb
dnc1994/MachineLearning-UW
mit
Encoding the labels Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1. Exercise: Convert labels from positive and negative to 1 and 0, respectively.
labels = labels.split('\n') labels = np.array([1 if each == 'positive' else 0 for each in labels])
nanodegrees/deep_learning_foundations/unit_3/lesson_34_sentiment-rnn/Sentiment RNN.ipynb
broundy/udacity
unlicense
Exercises Exercise: Maybe the reason the proportional model doesn't work very well is that the growth rate, alpha, is changing over time. So let's try a model with different growth rates before and after 1980 (as an arbitrary choice). Write an update function that takes pop, t, and system as parameters. The system object, system, should contain two parameters: the growth rate before 1980, alpha1, and the growth rate after 1980, alpha2. It should use t to determine which growth rate to use. Note: Don't forget the return statement. Test your function by calling it directly, then pass it to run_simulation. Plot the results. Adjust the parameters alpha1 and alpha2 to fit the data as well as you can.
# Solution goes here # Solution goes here
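One possible shape for the update function described above is sketched below. SimpleNamespace stands in for ModSimPy's System object so the sketch runs on its own; in the notebook you would build a real System with alpha1 and alpha2 and pass the function to run_simulation. The rate values are placeholders to adjust, not fitted parameters.

```python
from types import SimpleNamespace

def growth_func(pop, t, system):
    """Proportional growth with a rate change at 1980."""
    if t < 1980:
        return pop + system.alpha1 * pop
    else:
        return pop + system.alpha2 * pop

# Stand-in for a ModSimPy System object; tune alpha1/alpha2 to fit the data.
system = SimpleNamespace(alpha1=0.019, alpha2=0.012)
```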
notebooks/chap06.ipynb
AllenDowney/ModSimPy
mit
Main Program (Takes a while) Check the pair-wise duration gaps of each fund by Person ID
# Merge back to the main dataset data_main = data_merged.merge(transformed, how='outer', left_on='PersonID', right_on='PersonID') data_main = data_main.sort(columns=['PersonID', 'InceptionDate', 'PerformanceEndDate', 'ProductReference']) ################################################ # WARNING: DEBUG CODE -- NEEDS TO BE DISABLED # data_main = data_main[:2000:] # data_main = data_main[data_main.PersonID==799] ################################################ def find_gaps(person_panel): '''The function finds the number of relaunched hedge funds. A gap is defined as there exists a fund, of which the PerformanceEndDate is before the inception date of all funds that were incepted later than this fund. Returns the number of gaps; and all of the Fund ID which is proceeding each gap. ''' gaps_number = 0 fund_ID_preceed_gap = [] # Reset index from 0 person_panel = person_panel.reset_index(drop=True) # print("Her data panel is as below:") # print(person_panel) for i, i_row in person_panel.iterrows(): # Reset criteria status for a new i_row criteria_backward = True criteria_forward = True criteria_exist_after = False for j, j_row in person_panel.iterrows(): if i==j: # Skip continue # print('Comparison now made for row: ', i, j) # Days to define the gap is NOT yet incorporated. 
# Criteria 1 - Looking backward: check if i_end>j_end for all j of which j_inc<i_inc # For all funds earlier than i # j: Inc-----End # i: Inc------------End if j_row.InceptionDate<i_row.InceptionDate: criteria_backward *= check_backward(i_row, j_row) # Criteria 2 - Looking forward: check if i_end<j_inc for all j of which j_inc>i_inc # For all funds later than i # i: Inc---------End # |***GAP***| # j: Inc--------End if j_row.InceptionDate>=i_row.InceptionDate: criteria_forward *= check_forward(i_row, j_row) # Criteria 3 - There must be funds incepted after fund i if j_row.InceptionDate>=i_row.InceptionDate: criteria_exist_after = True # If Criteria 1,2,3 are all satisfied # Fund i is the fund proceeding a gap if criteria_backward==True and criteria_forward==True and criteria_exist_after==True: fund_ID_preceed_gap.append(i_row.ProductReference) gaps_number += 1 return (gaps_number, fund_ID_preceed_gap) def check_backward(i_row, j_row): """ Compares the PerformanceEndDate for i_row and j_row. It returns True if i_end>j_end; It returns False otherwise. """ if i_row.PerformanceEndDate >= j_row.PerformanceEndDate: return True else: return False def check_forward(i_row, j_row): """ Compares the PerformanceEndDate for i_row to the InceptionDate of j_row. Returns True if i_end<j_inc; Returns False otherwise. 
""" global Gap_Days # Find the global variable Gap_Days gap_days = timedelta(days=Gap_Days) if i_row.PerformanceEndDate + gap_days <= j_row.InceptionDate: return True else: return False grouped_by_person = data_main.groupby('PersonID', as_index=False, axis=0) grouped_by_person.groups # A new dict to store number of gaps found for each person gaps = {} fund_IDs = {} for person_id, person_panel in grouped_by_person: # print("The person's PersonID is :", person_id) gaps_number, fund_ID_preceed_gap = find_gaps(person_panel[['ProductReference', 'InceptionDate', 'PerformanceEndDate']]) gaps[person_id] = gaps_number fund_IDs[person_id] = fund_ID_preceed_gap # Transform two dict into DataFrame data_gaps = pd.DataFrame.from_dict(gaps, orient='index') data_gaps.columns = ['number_of_gaps'] data_gaps['PersonID'] = data_gaps.index data_gaps = data_gaps.reset_index(drop=True) data_gaps.head() data_gap_fund_IDs = pd.DataFrame.from_dict(fund_IDs, orient='index') data_gap_fund_IDs.columns = ['fund_IDs_proceeding_gap'] data_gap_fund_IDs['PersonID'] = data_gap_fund_IDs.index data_gap_fund_IDs = data_gap_fund_IDs.reset_index(drop=True) data_gap_fund_IDs.head() # Merge with the main dataset data_output = data_main.merge(data_gaps, how='outer', left_on='PersonID', right_on='PersonID') data_output = data_output.merge(data_gap_fund_IDs, how='outer', left_on='PersonID', right_on='PersonID') data_output.head() # Output datafiles data_output.to_excel('data_output.xlsx') data_output.to_stata('data_output.dta', convert_dates={13:'td', 14:'td', 15:'td'})
Version 0.2.ipynb
xxPeterxx/RelaunchedFunds
gpl-2.0
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit(np.arange(10)) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function arr = np.array(x) return lb.transform(arr) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. 
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name="x") def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ # TODO: Implement Function return tf.placeholder(tf.float32, shape=(None, n_classes), name="y") def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ # TODO: Implement Function return tf.placeholder(tf.float32, name="keep_prob") """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor using weight and conv_strides. * We recommend you use same padding, but you're welcome to use any padding. * Add bias * Add a nonlinear activation to the convolution. * Apply Max Pooling using pool_ksize and pool_strides. * We recommend you use same padding, but you're welcome to use any padding. Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ # TODO: Implement Function x_depth = x_tensor.get_shape().as_list()[3] weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_depth, conv_num_outputs], stddev=0.1)) bias = tf.Variable(tf.zeros(conv_num_outputs)) conv_layer = tf.nn.conv2d(x_tensor, weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') conv_layer = tf.nn.bias_add(conv_layer, bias) conv_layer = tf.nn.relu(conv_layer) conv_layer = tf.nn.max_pool(conv_layer, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') return conv_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function shape = x_tensor.get_shape().as_list() flat_size = tf.cast(shape[1]*shape[2]*shape[3], tf.int32) return tf.reshape(x_tensor, [-1, flat_size]) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function x_size = x_tensor.get_shape().as_list()[1] weight = tf.Variable(tf.truncated_normal([x_size, num_outputs], stddev=0.1)) bias = tf.Variable(tf.zeros(num_outputs)) fc_layer = tf.add(tf.matmul(x_tensor, weight), bias) fc_layer = tf.nn.relu(fc_layer) return fc_layer """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ # TODO: Implement Function x_size = x_tensor.get_shape().as_list()[1] weight = tf.Variable(tf.truncated_normal([x_size, num_outputs], stddev=0.1)) bias = tf.Variable(tf.zeros(num_outputs)) return tf.add(tf.matmul(x_tensor, weight), bias) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: Apply 1, 2, or 3 Convolution and Max Pool layers Apply a Flatten Layer Apply 1, 2, or 3 Fully Connected Layers Apply an Output Layer Return the output Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) x_tensor = x x_tensor = conv2d_maxpool(x_tensor, 16, (5, 5), (1, 1), (2, 2), (2, 2)) x_tensor = conv2d_maxpool(x_tensor, 32, (3, 3), (1, 1), (2, 2), (2, 2)) x_tensor = conv2d_maxpool(x_tensor, 64, (3, 3), (1, 1), (2, 2), (2, 2)) # TODO: Apply a Flatten Layer # Function Definition from Above: # flatten(x_tensor) x_tensor = flatten(x_tensor) # TODO: Apply 1, 2, or 3 Fully Connected Layers # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) x_tensor = fully_conn(x_tensor, 1024) x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob) x_tensor = fully_conn(x_tensor, 256) x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob) x_tensor = fully_conn(x_tensor, 64) x_tensor = tf.nn.dropout(x_tensor, keep_prob=keep_prob) # TODO: Apply an Output Layer # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) x_tensor = output(x_tensor, 10) # TODO: return output return x_tensor """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. 
tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
nimish-jose/dlnd
gpl-3.0
Loading the Reuters corpus from NLTK Load the Reuters corpus from NLTK, including 1000 documents with a maximum vocabulary size of 10000
n_doc = 1000 voca, doc_ids, doc_cnt = get_reuters_ids_cnt(num_doc=n_doc, max_voca=10000) docs = convert_cnt_to_list(doc_ids, doc_cnt) n_voca = len(voca) print('Vocabulary size:%d' % n_voca)
notebook/LDA_example.ipynb
arongdari/python-topic-model
apache-2.0
Inference through Gibbs sampling
max_iter=100 n_topic=10 logger = logging.getLogger('GibbsLDA') logger.propagate = False model = GibbsLDA(n_doc, len(voca), n_topic) model.fit(docs, max_iter=max_iter)
notebook/LDA_example.ipynb
arongdari/python-topic-model
apache-2.0
Print the top 10 most probable words for each topic
for ti in range(n_topic): top_words = get_top_words(model.TW, voca, ti, n_words=10) print('Topic', ti ,': ', ','.join(top_words))
notebook/LDA_example.ipynb
arongdari/python-topic-model
apache-2.0
Inference through variational Bayes
logger = logging.getLogger('vbLDA') logger.propagate = False vbmodel = vbLDA(n_doc, n_voca, n_topic) vbmodel.fit(doc_ids, doc_cnt, max_iter=max_iter)
notebook/LDA_example.ipynb
arongdari/python-topic-model
apache-2.0
Print the top 10 most probable words for each topic
for ti in range(n_topic): top_words = get_top_words(vbmodel._lambda, voca, ti, n_words=10) print('Topic', ti ,': ', ','.join(top_words))
notebook/LDA_example.ipynb
arongdari/python-topic-model
apache-2.0
Simple LSTM model
# data attributes input_seq_length = X_trn.shape[1] input_vocabulary_size = len(set(idx2w)) + 1 output_length = 127 #Model parameters embedding_size=64 num_hidden_lstm = 128 # build the model: Simple LSTM with embedings from tensorflow.contrib.keras import layers, models, optimizers print('Build model 1') seq_input = layers.Input(shape=([input_seq_length]), name='prev') #---------------------------------------- # Put your embeding + LSTM + dense model here #---------------------------------------- model1 = models.Model(inputs=seq_input, outputs=output) model1.summary() # Optimizer adam_optimizer = optimizers.Adam() model1.compile(loss='sparse_categorical_crossentropy', optimizer=adam_optimizer, metrics=['accuracy']) #Plot the model graph from tensorflow.contrib.keras import utils # Create model image utils.plot_model(model1, '/tmp/model1.png') # Show image plt.imshow(plt.imread('/tmp/model1.png')) #Fit model history = model1.fit(X_trn, y_trn, batch_size=128, epochs=20, validation_data=(X_tst, y_tst)) #Plot graphs in the notebook output plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.show() from sklearn.metrics import confusion_matrix #---------------------------------------- # real vs predict matrix using sklearn #----------------------------------------
tensorflow/02-text/03-word_tagging/01_identify_tags_in_airline_database_LSTM - EXERCISE.ipynb
sueiras/training
gpl-3.0
Let us now look at correlations between classes. To do this, the summed distances between the values of each pair of classes are plotted. Blue: correlation, green: no correlation, red: anticorrelation.
fig, ax = plt.subplots(figsize=(8, 8)) classes = Tweet.get_all_keys() y_pos = np.arange(len(classes)) error = np.full((len(classes), len(classes)), 0, dtype=int) for tweet in tweets[:100]: for i, class_i in enumerate(classes): for j, class_j in enumerate(classes): error[i, j] += abs((tweet[class_i] - tweet[class_j])) ax.set_yticks(y_pos) ax.set_yticklabels(classes) ax.set_xticks(y_pos) ax.set_xticklabels(classes) ax.set_title('Korrelation zwischen Klassen') plt.imshow(error, interpolation='nearest') plt.show()
data_analysis_results/data_analysis.ipynb
Griesbacher/ContentAnalytics
gpl-3.0
Basic Aggregation Operations
# Food wise count of likes food_wise = users_likes_join.groupby('likes')['likes'].count() food_wise # Lets sort our data. Default order is ascending asc_sort = food_wise.sort_values() asc_sort # An example for descending dsc_sort = food_wise.sort_values(ascending=False) dsc_sort
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
QUICK NOTE ABOUT sort_values By default sort_values allocates new memory each time it is called. While working with larger production data we can be limited by the available memory on our machines vis-a-vis the dataset size (and really we do not wish to hit the SWAP partition, even on SSDs). In such a situation, we can set the keyword argument inplace=True; this will modify the current DataFrame itself instead of allocating new memory. Beware though: mutation, while memory efficient, can be a risky affair, leading to complex code paths and hard-to-reason-about code.
# Using in_place sort for memory efficiency # Notice there is no left hand side value food_wise.sort_values(ascending=False, inplace=True) # food_wise itself has changed food_wise
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Working with visualisations and charts We use the python package - matplotlib - to generate visualisations and charts for our analysis. The command %matplotlib inline is a handy option which embeds the charts directly into our ipython/jupyter notebook. While we can directly configure and call matplotlib functions to generate charts, pandas, via the DataFrame object, exposes some very convenient methods to quickly generate plots.
%matplotlib inline import matplotlib # ggplot is a theme of matplotlib which adds # some visual aesthetics to our charts. It is # inspired by the eponymous charting package # of the R programming language matplotlib.style.use('ggplot') # Every DataFrame object exposes a plot object # which can be used to generate different plots # A pie chart, figsize allows us to define size of the # plot as a tuple of (width, height) in inches food_wise.plot.pie(figsize=(7, 7)) # A bar chart food_wise.plot.bar(figsize=(7, 7)) # Horizontal bar chart food_wise.plot.barh(figsize=(7, 7)) # Let's plot the most active users - those who hit like # very often - using the above techniques # Get the users by number of likes they have user_agg = users_likes_join.groupby('name')['likes'].count() # Here we go: our most active users in a different color user_agg.plot.barh(figsize=(6, 6), color='#10d3f6')
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
matplotlib provides many more options for generating complex charts, and we will explore more of them as we proceed. We assemble data and massage it with the sole purpose of seeking insights and getting our questions answered - exactly where pandas shines. Asking questions of our data Pandas supports boolean indexing using the square bracket notation - []. Boolean indexing enables us to pass a predicate which can be used, among other things, for filtering. Pandas also provides the negation operator ~ to filter based on the opposite of our predicate.
# Users who never interact with our data df_users[~df_users.id.isin(df_likes['user_id'])]
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
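Predicates can also be combined with `&` (and) and `|` (or); each condition needs its own parentheses because of operator precedence. A self-contained sketch with toy data (not the notebook's frames):

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['Ana', 'Bob', 'Cyd', 'Dan'],
    'age': [22, 35, 28, 41],
    'likes': ['Cola', 'Mango', 'Cola', 'Soda'],
})

# Users older than 25 who like Cola -- each condition parenthesized
older_cola = df[(df['age'] > 25) & (df['likes'] == 'Cola')]

# Everyone who does NOT like Cola, via the negation operator ~
not_cola = df[~(df['likes'] == 'Cola')]

print(older_cola['name'].tolist())  # ['Cyd']
print(not_cola['name'].tolist())    # ['Bob', 'Dan']
```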
Since a pandas DataFrame is a column-based abstraction (as against row-based), we need to reset_index after an aggregation operation in order to retrieve a flat DataFrame which is convenient to query.
# Oldest user who has exactly 2 likes agg_values = ( users_likes_join .groupby(['user_id', 'name', 'age']) .agg({ 'likes': 'count' }) .sort_index(level=['age'], sort_remaining=False, ascending=False) ) agg_values[agg_values['likes'] == 2].head(1)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
In the above we used sort_index instead of sort_values because the groupby operation creates a MultiIndex on the columns user_id, name and age; since age is part of an index, sort_values cannot operate on it. The head(n) function on a DataFrame returns the first n records from the frame, and the equivalent function tail(n) returns the last n records.
# Oldest user who has at least 2 likes agg_values[agg_values['likes'] >= 2].head(1) # Lets augment our data a little more users = users + [ { 'id': 7, 'name': 'Yeti', 'age': 40 }, { 'id': 8, 'name': 'Commander', 'age': 31 }, { 'id': 9, 'name': 'Jonnah', 'age': 26 }, { 'id': 10, 'name': 'Hex', 'age': 28 }, { 'id': 11, 'name': 'Sam', 'age': 33 }, { 'id': 12, 'name': 'Madan', 'age': 53 }, { 'id': 13, 'name': 'Harry', 'age': 38 }, { 'id': 14, 'name': 'Tom', 'age': 29 }, { 'id': 15, 'name': 'Daniel', 'age': 23 }, { 'id': 16, 'name': 'Virat', 'age': 24 }, { 'id': 17, 'name': 'Nathan', 'age': 16 }, { 'id': 18, 'name': 'Stepheny', 'age': 26 }, { 'id': 19, 'name': 'Lola', 'age': 31 }, { 'id': 20, 'name': 'Amy', 'age': 25 }, ] users, len(users) likes = likes + [ { 'user_id': 17, 'likes': 'Mango' }, { 'user_id': 14, 'likes': 'Orange'}, { 'user_id': 18, 'likes': 'Burger'}, { 'user_id': 19, 'likes': 'Blueberry'}, { 'user_id': 7, 'likes': 'Cola'}, { 'user_id': 11, 'likes': 'Burger'}, { 'user_id': 13, 'likes': 'Mango'}, { 'user_id': 1, 'likes': 'Coconut'}, { 'user_id': 6, 'likes': 'Pepsi'}, { 'user_id': 8, 'likes': 'Cola'}, { 'user_id': 17, 'likes': 'Mango'}, { 'user_id': 19, 'likes': 'Coconut'}, { 'user_id': 15, 'likes': 'Blueberry'}, { 'user_id': 20, 'likes': 'Soda'}, { 'user_id': 3, 'likes': 'Cola'}, { 'user_id': 4, 'likes': 'Pepsi'}, { 'user_id': 14, 'likes': 'Coconut'}, { 'user_id': 11, 'likes': 'Mango'}, { 'user_id': 12, 'likes': 'Soda'}, { 'user_id': 16, 'likes': 'Orange'}, { 'user_id': 2, 'likes': 'Pepsi'}, { 'user_id': 19, 'likes': 'Cola'}, { 'user_id': 15, 'likes': 'Carrot'}, { 'user_id': 18, 'likes': 'Carrot'}, { 'user_id': 14, 'likes': 'Soda'}, { 'user_id': 13, 'likes': 'Cola'}, { 'user_id': 9, 'likes': 'Pepsi'}, { 'user_id': 10, 'likes': 'Blueberry'}, { 'user_id': 7, 'likes': 'Soda'}, { 'user_id': 12, 'likes': 'Burger'}, { 'user_id': 6, 'likes': 'Cola'}, { 'user_id': 4, 'likes': 'Burger'}, { 'user_id': 14, 'likes': 'Orange'}, { 'user_id': 18, 'likes': 'Blueberry'}, { 
'user_id': 20, 'likes': 'Cola'}, { 'user_id': 9, 'likes': 'Soda'}, { 'user_id': 14, 'likes': 'Pepsi'}, { 'user_id': 6, 'likes': 'Mango'}, { 'user_id': 3, 'likes': 'Coconut'}, ] likes, len(likes)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
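A minimal sketch of sorting by an index level (toy data, not the notebook's frames): once a key is part of a MultiIndex, `sort_index(level=...)` is the tool, and `head(n)` then picks the top record.

```python
import pandas as pd

df = pd.DataFrame({
    'name': ['Ana', 'Bob', 'Cyd'],
    'age': [22, 41, 28],
    'likes': [3, 2, 2],
}).set_index(['name', 'age'])

# Sort by the 'age' index level, oldest first
by_age = df.sort_index(level='age', ascending=False)

# Oldest person with exactly 2 likes
oldest_two = by_age[by_age['likes'] == 2].head(1)
print(oldest_two.index[0])  # ('Bob', 41)
```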
Eating your own dog food The above data has been copy-pasted and hand-edited. A problem with this approach is that the data may contain more than one like for the same product by the same user. While we could manually check the data, that approach becomes tedious and intractable as the size of the data increases. Instead we employ pandas itself to identify duplicate likes by the same person and fix the data accordingly.
# DataFrames from native python dictionaries df_users = pd.DataFrame(users) df_likes = pd.DataFrame(likes)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Let's figure out where the duplicates are
_duplicate_likes = ( df_likes .groupby(['user_id', 'likes']) .agg({ 'likes': 'count' }) ) duplicate_likes = _duplicate_likes[_duplicate_likes['likes'] > 1] duplicate_likes
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
So there are 6 duplicate records in all. User#2 and Pepsi is recorded twice, so that is 1 extra; there are 2 extra for User#3 and Cola, and 1 extra for each of the remaining three pairs, which equals 1 + 2 + 1 + 1 + 1 = 6.
# Now remove the duplicates df_unq_likes = df_likes.drop_duplicates() # The difference should be 6 since 6 records should be eliminated len(df_unq_likes), len(df_likes)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
We replay our previous aggregation to verify that no duplicates indeed remain.
# Join the datasets users_likes_join = df_users.merge(df_unq_likes, left_on='id', right_on='user_id') users_likes_join.set_index('id') # We aggregate the likes column and rename it to `num_likes` unq_user_likes_group = ( users_likes_join .groupby(['id', 'name', 'likes']) .agg({'likes': 'count'}) .rename(columns={ 'likes': 'num_likes' }) ) # Should return empty if duplicates are removed unq_user_likes_group[unq_user_likes_group['num_likes'] > 1]
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Let's continue asking more questions of our data and go over some more convenience methods exposed by pandas for aggregation.
# What percent of audience likes each fruit? likes_count = ( users_likes_join .groupby('likes') .agg({ 'user_id': 'count' }) ) likes_count['percent'] = likes_count['user_id'] * 100 / len(df_users) likes_count.sort_values('percent', ascending=False)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
In the above code snippet we created a computed column percent in the likes_count DataFrame. Column operations in pandas are vectorized and execute significantly faster than row operations; it is always a good idea to express computations as column operations rather than row operations.
# What do people who like Coconut also like? coconut_likers = users_likes_join[users_likes_join['likes'] == 'Coconut'].user_id likes_among_coconut_likers = users_likes_join[(users_likes_join['user_id'].isin(coconut_likers)) & (users_likes_join['likes'] != 'Coconut')] likes_among_coconut_likers.groupby('likes').agg({ 'user_id': pd.Series.nunique }).sort_values('user_id', ascending=False)
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
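To make the column-versus-row point concrete, here is a toy sketch (the data is assumed, not the notebook's): the same computed column produced by a vectorized column expression and by a per-row `apply` gives identical numbers, but the vectorized form avoids a Python-level function call per row.

```python
import pandas as pd

likes_count = pd.DataFrame({'user_id': [5, 8, 7]}, index=['Cola', 'Mango', 'Soda'])
n_users = 20

# Vectorized: one operation over the whole column
likes_count['percent'] = likes_count['user_id'] * 100 / n_users

# Row-wise equivalent -- same numbers, but one Python call per row
row_wise = likes_count.apply(lambda row: row['user_id'] * 100 / n_users, axis=1)

print(likes_count['percent'].tolist())          # [25.0, 40.0, 35.0]
print(likes_count['percent'].equals(row_wise))  # True
```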
In our fictitious database, Cola and Pepsi seem to be popular among the users who like Coconut.
# What is the age group distribution of likes? users_likes_join.groupby('likes').age.plot(kind='hist', legend=True, figsize=(10, 6))
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Most of our audience seems to fall in the 25 - 40 years age group. But this visualisation has one flaw - if records are stacked on top of each other, only one of them will be visible. Let's try an alternative plot.
users_likes_join.groupby('likes').age.plot(kind='kde', legend=True, figsize=(10, 6))
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Anything surprising? Coconut - the gray color - was not represented in the histogram. But from this visualisation we can notice that coconut is popular among the 25 - 35 years age group only. On the other hand, if we want to plot a specific "likable" object, we can simply filter our dataframe before the groupby operation.
# Age distribution only of people who like Soda users_likes_join[users_likes_join['likes'] == 'Soda'].groupby('likes').age.plot(kind='hist', legend=True, figsize=(10, 6))
1-Working-with-relational-data-using-pandas.ipynb
liquidscorpio/python-data-analysis
gpl-2.0
Loading a model HanLP's workflow starts by loading a model. Model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
import hanlp hanlp.pretrained.srl.ALL # the language is indicated by the last field of the identifier, or by the corresponding corpus
plugins/hanlp_demo/hanlp_demo/zh/srl_stl.ipynb
hankcs/HanLP
apache-2.0
Call hanlp.load to load the model; it will be downloaded automatically to a local cache:
srl = hanlp.load('CPB3_SRL_ELECTRA_SMALL')
plugins/hanlp_demo/hanlp_demo/zh/srl_stl.ipynb
hankcs/HanLP
apache-2.0
Semantic role labeling Perform semantic role labeling on a pre-tokenized sentence:
srl(['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次', '世代', '最', '先进', '的', '多', '语种', 'NLP', '技术', '。'])
plugins/hanlp_demo/hanlp_demo/zh/srl_stl.ipynb
hankcs/HanLP
apache-2.0
Each quadruple in the SRL result has the format [argument or predicate, semantic role label, begin index, end index]. The predicate's semantic role label is PRED, and the begin/end indices refer to the token array. Iterating over the predicate-argument structures:
for i, pas in enumerate(srl(['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次', '世代', '最', '先进', '的', '多', '语种', 'NLP', '技术', '。'])): print(f'Predicate-argument structure #{i+1}:') for form, role, begin, end in pas: print(f'{form} = {role} at [{begin}, {end}]')
plugins/hanlp_demo/hanlp_demo/zh/srl_stl.ipynb
hankcs/HanLP
apache-2.0
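A minimal sketch of consuming the quadruple format described above (the structure below is hand-written for illustration, not real model output): each span can be reconstructed by slicing the token array with the begin/end indices.

```python
# Hand-written example mirroring the documented format:
# [argument or predicate, role label, begin index, end index]
tokens = ['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次世代', 'NLP', '技术']
pas = [
    ['2021年', 'ARGM-TMP', 0, 1],
    ['为生产环境', 'ARG2', 2, 5],
    ['带来', 'PRED', 5, 6],
    ['次世代NLP技术', 'ARG1', 6, 9],
]

# Reconstruct each span from the token array and check it matches the form field
for form, role, begin, end in pas:
    span = ''.join(tokens[begin:end])
    assert span == form, (span, form)

# The predicate is the element labeled PRED
predicate = next(form for form, role, _, _ in pas if role == 'PRED')
print(predicate)  # 带来
```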
Import raw data The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
# -------------------------------- # -------- User input ------------ # -------------------------------- data = { # Specify sample type key 'wt': { # Specify path to data directory 'path': './../yot_experiment/data/Output_wt_01-23-16-06', # Specify which channels are in the directory and are of interest 'channels': ['AT','ZRF'] }, 'yot': { 'path': './../yot_experiment/data/Output_yot_01-24-14-22', 'channels': ['AT','ZRF'] }, 'hss1a': { 'path': './data/hss1a/Output-02-15-2019', 'channels': ['AT','ZRF'] }, 'hss1ayot': { 'path': './data/hss1ayot/Output-02-15-2019', 'channels': ['AT','ZRF'] } } data_pairs = [] for s in data.keys(): for c in data[s]['channels']: data_pairs.append((s,c))
experiments/s1a_rescue/landmarks.ipynb
msschwartz21/craniumPy
gpl-3.0
Display the number of samples for each sample type.
len(D['wt']['AT'].keys()),len(D['yot']['AT'].keys()),len(D['hss1a']['AT'].keys()),len(D['hss1ayot']['AT'].keys())
experiments/s1a_rescue/landmarks.ipynb
msschwartz21/craniumPy
gpl-3.0
Landmarks Calculate landmark bins based on user input parameters and the previously specified control sample.
lm = ds.landmarks(percbins=percbins, rnull=np.nan) lm.calc_bins(D[s_ctrl][c_ctrl], anum, theta_step) print('Alpha bins') print(lm.acbins) print('Theta bins') print(lm.tbins) lmdf = pd.DataFrame() # Loop through each pair of stype and channels for s,c in tqdm.tqdm(data_pairs): print(s,c) # Calculate landmarks for each sample with this data pair for k,df in tqdm.tqdm(D[s][c].items()): lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf) # Set timestamp for saving data tstamp = time.strftime("%m-%d-%H-%M",time.localtime()) # Save completed landmarks to a csv file lmdf.to_csv(tstamp+'_landmarks.csv') # Save landmark bins to json file bins = { 'acbins':list(lm.acbins), 'tbins':list(lm.tbins) } with open(tstamp+'_landmarks_bins.json', 'w') as outfile: json.dump(bins, outfile)
experiments/s1a_rescue/landmarks.ipynb
msschwartz21/craniumPy
gpl-3.0
Important: Before running, ensure that Accelerate has been configured through either accelerate config in the command line or by running write_basic_config
# from accelerate.utils import write_basic_config # write_basic_config()
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Image Classification
path = untar_data(URLs.PETS)/'images' def train(): dls = ImageDataLoaders.from_name_func( path, get_image_files(path), valid_pct=0.2, label_func=lambda x: x[0].isupper(), item_tfms=Resize(224)) learn = vision_learner(dls, resnet34, metrics=error_rate).to_fp16() with learn.distrib_ctx(in_notebook=True, sync_bn=False): learn.fine_tune(1) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Image Segmentation
path = untar_data(URLs.CAMVID_TINY) def train(): dls = SegmentationDataLoaders.from_label_func( path, bs=8, fnames = get_image_files(path/"images"), label_func = lambda o: path/'labels'/f'{o.stem}_P{o.suffix}', codes = np.loadtxt(path/'codes.txt', dtype=str) ) learn = unet_learner(dls, resnet34) with learn.distrib_ctx(in_notebook=True, sync_bn=False): learn.fine_tune(8) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Text Classification
path = untar_data(URLs.IMDB_SAMPLE) df = pd.read_csv(path/'texts.csv') def train(): imdb_clas = DataBlock(blocks=(TextBlock.from_df('text', seq_len=72), CategoryBlock), get_x=ColReader('text'), get_y=ColReader('label'), splitter=ColSplitter()) dls = imdb_clas.dataloaders(df, bs=64) learn = rank0_first(lambda: text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)) with learn.distrib_ctx(in_notebook=True): learn.fine_tune(4, 1e-2) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Tabular
path = untar_data(URLs.ADULT_SAMPLE) df = pd.read_csv(path/'adult.csv') def train(): dls = TabularDataLoaders.from_csv(path/'adult.csv', path=path, y_names="salary", cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race'], cont_names = ['age', 'fnlwgt', 'education-num'], procs = [Categorify, FillMissing, Normalize]) learn = tabular_learner(dls, metrics=accuracy) with learn.distrib_ctx(in_notebook=True): learn.fit_one_cycle(3) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Collab Filtering
path = untar_data(URLs.ML_SAMPLE) df = pd.read_csv(path/'ratings.csv') def train(): dls = CollabDataLoaders.from_df(df) learn = collab_learner(dls, y_range=(0.5,5.5)) with learn.distrib_ctx(in_notebook=True): learn.fine_tune(6) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
Keypoints
path = untar_data(URLs.BIWI_HEAD_POSE) def img2pose(x): return Path(f'{str(x)[:-7]}pose.txt') def get_ctr(f): ctr = np.genfromtxt(img2pose(f), skip_header=3) c1 = ctr[0] * cal[0][0]/ctr[2] + cal[0][2] c2 = ctr[1] * cal[1][1]/ctr[2] + cal[1][2] return tensor([c1,c2]) img_files = get_image_files(path) cal = np.genfromtxt(path/'01'/'rgb.cal', skip_footer=6) def train(): biwi = DataBlock( blocks=(ImageBlock, PointBlock), get_items=get_image_files, get_y=get_ctr, splitter=FuncSplitter(lambda o: o.parent.name=='13'), batch_tfms=[*aug_transforms(size=(240,320)), Normalize.from_stats(*imagenet_stats)]) dls = biwi.dataloaders(path) learn = vision_learner(dls, resnet18, y_range=(-1,1)) with learn.distrib_ctx(in_notebook=True, sync_bn=False): learn.fine_tune(1) notebook_launcher(train, num_processes=2)
nbs/examples/distributed_app_examples.ipynb
fastai/fastai
apache-2.0
In the previous notebook some hyperparameter exploration was done for the Random Forest Regressor. Let's see which predictors are now the best for each number of "ahead days".
best_raw_params = pd.read_pickle('../../data/best_dataset_params_raw_df.pkl') def keep_max_r2(record): return record.loc[np.argmax(record['r2']),:] best_ini_pred_df = best_raw_params.groupby('ahead_days').apply(keep_max_r2) best_ini_pred_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Those were the best predictors before hyperparameter tuning.
hyper1_df = pd.read_pickle('../../data/hyper_ahead1_random_forest_df.pkl') hyper1_df from sklearn.metrics import r2_score hyper_1_best_series = hyper1_df.iloc[np.argmax(hyper1_df['r2'])].copy() hyper_1_best_series.name = 1 hyper_1_best_series ahead_days = [1, 7, 14, 28, 56] best_hyper_df = pd.DataFrame() best_hyper_df.index.name = 'ahead_days' for ahead in ahead_days: hyper_df = pd.read_pickle('../../data/hyper_ahead{}_random_forest_df.pkl'.format(ahead)) hyper_best_series = hyper_df.iloc[np.argmax(hyper_df['r2'])].copy() hyper_best_series.name = ahead best_hyper_df = best_hyper_df.append(hyper_best_series) best_hyper_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Let's compare the new best Random Forest predictors with the old Linear Regressors.
def join_and_compare(df1, df2, column, labels): tj1 = pd.DataFrame(df1[column].copy()) tj1.rename(columns = {column: labels[0]}, inplace=True) tj2 = pd.DataFrame(df2[column].copy()) tj2.rename(columns = {column: labels[1]}, inplace=True) comp_df = tj1.join(tj2) comp_df['diff'] = comp_df[labels[1]] - comp_df[labels[0]] return comp_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
First the $r^2$ metrics
comp_r2_df = join_and_compare(best_ini_pred_df, best_hyper_df, 'r2', ['linear', 'random_forest']) comp_r2_df['best'] = comp_r2_df.apply(lambda x: np.argmax(x), axis=1) comp_r2_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
The values are very similar in both cases. A minor difference can be seen only in the case of 56 days ahead, in which the random forest seems to be a bit better than the linear predictor. In any case, as the linear predictor is much simpler, and faster, it's probably better to keep it as the best predictor. It can be seen that this scenario is different from the one before hyperparameter tuning, in which the linear predictor was always better. And then the MRE metrics
comp_mre_df = join_and_compare(best_ini_pred_df, best_hyper_df, 'mre', ['linear', 'random_forest']) comp_mre_df['best'] = comp_mre_df.apply(lambda x: np.argmax(x), axis=1) comp_mre_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
The values for the MRE metrics are almost the same for both predictors. Conclusion: The linear predictor will be chosen for all the predictions. In the case of the 56-days-ahead prediction, a better $r^2$ metric could be achieved by the random forest predictor, but the linear predictor is still far simpler and faster. Testing the chosen predictor Let's get the test data
data_test_df = pd.read_pickle('../../data/data_test_df.pkl') data_test_df.head()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
When generating the datasets, some symbols were removed from the training set (because they contained too many missing points). The same symbols should be removed from the test set. Let's generate datasets for the test set, with the best parameters found.
best_ini_pred_df best_ini_pred_df.to_pickle('../../data/best_params_final_df.pkl')
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Some playing with the data to remove the same symbols as in the training set
params = best_ini_pred_df.loc[1] train_val_time = int(params['train_val_time']) base_days = int(params['base_days']) step_days = int(params['step_days']) ahead_days = int(params['ahead_days']) print('Generating: base{}_ahead{}'.format(base_days, ahead_days)) pid = 'base{}_ahead{}'.format(base_days, ahead_days) y_train_df = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) y_train_df.head() kept_symbols = y_train_df.index.get_level_values(1).unique().tolist() len(kept_symbols) len(data_test_df.columns.get_level_values(1).unique().tolist()) filtered_data_test_df = data_test_df.loc[:, (slice(None), kept_symbols)] len(filtered_data_test_df.columns.get_level_values(1).unique().tolist())
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
OK, let's create a function to generate one test dataset
def generate_one_test_set(params, data_df): # print(('-'*70 + '\n {}, {} \n' + '-'*70).format(params['base_days'].values, params['ahead_days'].values)) tic = time() train_val_time = int(params['train_val_time']) base_days = int(params['base_days']) step_days = int(params['step_days']) ahead_days = int(params['ahead_days']) print('Generating: base{}_ahead{}'.format(base_days, ahead_days)) pid = 'base{}_ahead{}'.format(base_days, ahead_days) # Getting the data today = data_df.index[-1] # Real date print(pid + ') data_df loaded') # Drop symbols with many missing points y_train_df = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) kept_symbols = y_train_df.index.get_level_values(1).unique().tolist() data_df = data_df.loc[:, (slice(None), kept_symbols)] print(pid + ') Irrelevant symbols dropped.') # Generate the intervals for the predictor x, y = fe.generate_train_intervals(data_df, train_val_time, base_days, step_days, ahead_days, today, fe.feature_close_one_to_one) print(pid + ') Intervals generated') # Drop "bad" samples and fill missing data x_y_df = pd.concat([x, y], axis=1) x_y_df = pp.drop_irrelevant_samples(x_y_df, params['SAMPLES_GOOD_DATA_RATIO']) x = x_y_df.iloc[:, :-1] y = x_y_df.iloc[:, -1] x = pp.fill_missing(x) print(pid + ') Irrelevant samples dropped and missing data filled.') # Pickle that x.to_pickle('../../data/x_{}_test.pkl'.format(pid)) y.to_pickle('../../data/y_{}_test.pkl'.format(pid)) toc = time() print('%s) %i intervals generated in: %i seconds.' % (pid, x.shape[0], (toc-tic))) return pid, x, y for ind in range(best_ini_pred_df.shape[0]): pid, x, y = generate_one_test_set(best_ini_pred_df.iloc[ind,:], data_test_df) x = pd.read_pickle('../../data/x_base112_ahead7_test.pkl') x x.iloc[10].plot()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
The datasets were successfully generated There will be two types of test: one retraining at every step, and the other without retraining. The first is good enough to confirm that there was no overfitting on the hyperparameters. The second can show how "valid" the model is for periods outside the training period. If there is no time dependence in the results, the test without retraining may be the only one performed.
best_params_df = pd.read_pickle('../../data/best_params_final_df.pkl') best_params_df
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
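The two schemes can be sketched abstractly (a dummy mean predictor stands in for the real estimator here; this illustrates the protocol only, not the notebook's code): the no-retraining test fits once on the training window, while the retraining test refits on a growing history before each test step.

```python
import numpy as np

rng = np.random.RandomState(0)
series = rng.randn(30).cumsum()  # a toy price-like series

train, test = series[:20], series[20:]

class MeanPredictor:
    """Dummy stand-in: predicts the mean of whatever it was fit on."""
    def fit(self, y):
        self.mean_ = float(np.mean(y))
        return self
    def predict(self):
        return self.mean_

# Scheme 1 (no retraining): fit once on the training window
fixed = MeanPredictor().fit(train)
preds_fixed = [fixed.predict() for _ in test]

# Scheme 2 (retraining): refit on a growing history before each test step
preds_retrain = []
history = list(train)
for y in test:
    preds_retrain.append(MeanPredictor().fit(history).predict())
    history.append(y)  # the realized value becomes training data
```

Without retraining every prediction is identical for this dummy model; with retraining the predictions drift as new observations enter the training window.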
Ahead 1 Without retraining Warning: The dates that appear on the samples are the initial dates (there are 112 days ahead after the marked date).
from predictor.linear_predictor import LinearPredictor import utils.misc as misc import predictor.evaluation as ev ahead_days = 1 # Get some parameters train_days = int(best_params_df.loc[ahead_days, 'train_days']) GOOD_DATA_RATIO, \ train_val_time, \ base_days, \ step_days, \ ahead_days, \ SAMPLES_GOOD_DATA_RATIO, \ x_filename, \ y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:]) pid = 'base{}_ahead{}'.format(base_days, ahead_days) # Get the datasets x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid)) y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index() y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index() # Let's cut the training set to use only the required number of samples end_date = x_train.index.levels[0][-1] start_date = fe.add_market_days(end_date, -train_days) x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:] y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))]) # Create the estimator and train estimator = LinearPredictor() estimator.fit(x_sub_df, y_sub_df) # Get the training and test predictions y_train_pred = estimator.predict(x_sub_df) y_test_pred = estimator.predict(x_test) # Get the training and test metrics for each symbol metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred) metrics_test = ev.get_metrics_df(y_test, y_test_pred) # Show the mean metrics metrics_df = pd.DataFrame(columns=['train', 'test']) metrics_df['train'] = metrics_train.mean() metrics_df['test'] = metrics_test.mean() print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70)) # Plot the metrics in time metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days) metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days) plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.') plt.plot(metrics_test_time[2], 
metrics_test_time[0], label='test', marker='.') plt.title('$r^2$ metrics') plt.legend() plt.figure() plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.') plt.title('MRE metrics') plt.legend()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Ahead 7
from predictor.linear_predictor import LinearPredictor import utils.misc as misc import predictor.evaluation as ev ahead_days = 7 # Get some parameters train_days = int(best_params_df.loc[ahead_days, 'train_days']) GOOD_DATA_RATIO, \ train_val_time, \ base_days, \ step_days, \ ahead_days, \ SAMPLES_GOOD_DATA_RATIO, \ x_filename, \ y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:]) pid = 'base{}_ahead{}'.format(base_days, ahead_days) # Get the datasets x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid)) y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index() y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index() # Let's cut the training set to use only the required number of samples end_date = x_train.index.levels[0][-1] start_date = fe.add_market_days(end_date, -train_days) x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:] y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))]) # Create the estimator and train estimator = LinearPredictor() estimator.fit(x_sub_df, y_sub_df) # Get the training and test predictions y_train_pred = estimator.predict(x_sub_df) y_test_pred = estimator.predict(x_test) # Get the training and test metrics for each symbol metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred) metrics_test = ev.get_metrics_df(y_test, y_test_pred) # Show the mean metrics metrics_df = pd.DataFrame(columns=['train', 'test']) metrics_df['train'] = metrics_train.mean() metrics_df['test'] = metrics_test.mean() print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70)) # Plot the metrics in time metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days) metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days) plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.') plt.plot(metrics_test_time[2], 
metrics_test_time[0], label='test', marker='.') plt.title('$r^2$ metrics') plt.legend() plt.figure() plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.') plt.title('MRE metrics') plt.legend()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Ahead 14
from predictor.linear_predictor import LinearPredictor import utils.misc as misc import predictor.evaluation as ev ahead_days = 14 # Get some parameters train_days = int(best_params_df.loc[ahead_days, 'train_days']) GOOD_DATA_RATIO, \ train_val_time, \ base_days, \ step_days, \ ahead_days, \ SAMPLES_GOOD_DATA_RATIO, \ x_filename, \ y_filename = misc.unpack_params(best_params_df.loc[ahead_days,:]) pid = 'base{}_ahead{}'.format(base_days, ahead_days) # Get the datasets x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid)) y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid)) x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index() y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index() # Let's cut the training set to use only the required number of samples end_date = x_train.index.levels[0][-1] start_date = fe.add_market_days(end_date, -train_days) x_sub_df = x_train.loc[(slice(start_date,None),slice(None)),:] y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date,None),slice(None))]) # Create the estimator and train estimator = LinearPredictor() estimator.fit(x_sub_df, y_sub_df) # Get the training and test predictions y_train_pred = estimator.predict(x_sub_df) y_test_pred = estimator.predict(x_test) # Get the training and test metrics for each symbol metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred) metrics_test = ev.get_metrics_df(y_test, y_test_pred) # Show the mean metrics metrics_df = pd.DataFrame(columns=['train', 'test']) metrics_df['train'] = metrics_train.mean() metrics_df['test'] = metrics_test.mean() print('Mean metrics: \n{}\n{}'.format(metrics_df,'-'*70)) # Plot the metrics in time metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days) metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days) plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.') plt.plot(metrics_test_time[2], 
metrics_test_time[0], label='test', marker='.') plt.title('$r^2$ metrics') plt.legend() plt.figure() plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.') plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.') plt.title('MRE metrics') plt.legend()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Ahead 28
from predictor.linear_predictor import LinearPredictor
import utils.misc as misc
import predictor.evaluation as ev

ahead_days = 28

# Get some parameters
train_days = int(best_params_df.loc[ahead_days, 'train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = misc.unpack_params(best_params_df.loc[ahead_days, :])

pid = 'base{}_ahead{}'.format(base_days, ahead_days)

# Get the datasets
x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid))
y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid))
x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index()
y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index()

# Let's cut the training set to use only the required number of samples
end_date = x_train.index.levels[0][-1]
start_date = fe.add_market_days(end_date, -train_days)
x_sub_df = x_train.loc[(slice(start_date, None), slice(None)), :]
y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date, None), slice(None))])

# Create the estimator and train
estimator = LinearPredictor()
estimator.fit(x_sub_df, y_sub_df)

# Get the training and test predictions
y_train_pred = estimator.predict(x_sub_df)
y_test_pred = estimator.predict(x_test)

# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)

# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df, '-' * 70))

# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()

plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
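The training-window cut in the cell above relies on pandas MultiIndex slicing over a (date, symbol) index. A minimal, self-contained illustration of that slicing pattern — the toy dates, symbols, and values below are made up for the example and are not from the notebook's data:

```python
import pandas as pd

# Toy (date, symbol) MultiIndex frame to illustrate the slicing used above
dates = pd.to_datetime(['2014-01-02', '2014-01-03', '2014-01-06'])
symbols = ['AAPL', 'MSFT']
index = pd.MultiIndex.from_product([dates, symbols], names=['date', 'symbol'])
df = pd.DataFrame({'feature': range(len(index))}, index=index)

# Keep only rows from start_date onward, for all symbols
start_date = pd.Timestamp('2014-01-03')
sub = df.loc[(slice(start_date, None), slice(None)), :]

assert sub.index.get_level_values('date').min() == start_date
assert len(sub) == 4  # two remaining dates x two symbols
```

The `(slice(start_date, None), slice(None))` tuple slices the first index level by date while keeping every symbol; it requires the MultiIndex to be lexsorted, which `sort_index()` (as used on the test sets above) guarantees.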
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
Ahead 56
from predictor.linear_predictor import LinearPredictor
import utils.misc as misc
import predictor.evaluation as ev

ahead_days = 56

# Get some parameters
train_days = int(best_params_df.loc[ahead_days, 'train_days'])
GOOD_DATA_RATIO, \
train_val_time, \
base_days, \
step_days, \
ahead_days, \
SAMPLES_GOOD_DATA_RATIO, \
x_filename, \
y_filename = misc.unpack_params(best_params_df.loc[ahead_days, :])

pid = 'base{}_ahead{}'.format(base_days, ahead_days)

# Get the datasets
x_train = pd.read_pickle('../../data/x_{}.pkl'.format(pid))
y_train = pd.read_pickle('../../data/y_{}.pkl'.format(pid))
x_test = pd.read_pickle('../../data/x_{}_test.pkl'.format(pid)).sort_index()
y_test = pd.DataFrame(pd.read_pickle('../../data/y_{}_test.pkl'.format(pid))).sort_index()

# Let's cut the training set to use only the required number of samples
end_date = x_train.index.levels[0][-1]
start_date = fe.add_market_days(end_date, -train_days)
x_sub_df = x_train.loc[(slice(start_date, None), slice(None)), :]
y_sub_df = pd.DataFrame(y_train.loc[(slice(start_date, None), slice(None))])

# Create the estimator and train
estimator = LinearPredictor()
estimator.fit(x_sub_df, y_sub_df)

# Get the training and test predictions
y_train_pred = estimator.predict(x_sub_df)
y_test_pred = estimator.predict(x_test)

# Get the training and test metrics for each symbol
metrics_train = ev.get_metrics_df(y_sub_df, y_train_pred)
metrics_test = ev.get_metrics_df(y_test, y_test_pred)

# Show the mean metrics
metrics_df = pd.DataFrame(columns=['train', 'test'])
metrics_df['train'] = metrics_train.mean()
metrics_df['test'] = metrics_test.mean()
print('Mean metrics: \n{}\n{}'.format(metrics_df, '-' * 70))

# Plot the metrics in time
metrics_train_time = ev.get_metrics_in_time(y_sub_df, y_train_pred, base_days + ahead_days)
metrics_test_time = ev.get_metrics_in_time(y_test, y_test_pred, base_days + ahead_days)
plt.plot(metrics_train_time[2], metrics_train_time[0], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[0], label='test', marker='.')
plt.title('$r^2$ metrics')
plt.legend()

plt.figure()
plt.plot(metrics_train_time[2], metrics_train_time[1], label='train', marker='.')
plt.plot(metrics_test_time[2], metrics_test_time[1], label='test', marker='.')
plt.title('MRE metrics')
plt.legend()
notebooks/prod/n07_testing_and_time_validity.ipynb
mtasende/Machine-Learning-Nanodegree-Capstone
mit
<h2> Non-linear supply functions </h2> When thinking about supply it helps to start with the following considerations... <ol> <li> ...when prices are low, the quantity supplied increases slowly because of fixed costs of production (think startup costs, etc). <li> ...when prices are high, supply also increases slowly because of capacity constraints. </ol> These considerations motivate our focus on "S-shaped" supply functions... $$ S_{\gamma}(p_t^e) = -\tan^{-1}(-\gamma \bar{p}) + \tan^{-1}(\gamma (p_t^e - \bar{p})). \tag{10}$$ The parameter $0 < \gamma < \infty$ controls the "steepness" of the supply function.
def quantity_supply(expected_price, gamma, p_bar, **params):
    """The quantity of goods supplied in period t given the expected price."""
    return -np.arctan(-gamma * p_bar) + np.arctan(gamma * (expected_price - p_bar))
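A quick numeric check that equation (10) has the S-shape properties described above: strictly increasing in the expected price, and bounded (the capacity constraint caps supply no matter how high prices go). The parameter values below are illustrative choices, not values from the notebook:

```python
import numpy as np

def quantity_supply(expected_price, gamma, p_bar, **params):
    """The quantity of goods supplied in period t given the expected price."""
    return -np.arctan(-gamma * p_bar) + np.arctan(gamma * (expected_price - p_bar))

gamma, p_bar = 2.0, 1.0  # illustrative values
prices = np.linspace(-5.0, 5.0, 101)
supply = quantity_supply(prices, gamma, p_bar)

# Strictly increasing in the expected price
assert np.all(np.diff(supply) > 0)
# Zero supply when the expected price is zero: arctan is odd, so the terms cancel
assert abs(quantity_supply(0.0, gamma, p_bar)) < 1e-12
# Bounded: supply stays within pi/2 of its value at p_bar (capacity constraint)
assert np.all(np.abs(supply - np.arctan(gamma * p_bar)) < np.pi / 2)
```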
notebooks/cobweb-models.ipynb
davidrpugh/sfi-complexity-mooc
mit
<h3> Exploring supply shocks </h3> Interactively change the value of $\gamma$ to see the impact on the shape of the supply function.
ipywidgets.interact?

interactive_quantity_supply_plot = ipywidgets.interact(cobweb.quantity_supply_plot,
                                                       S=ipywidgets.fixed(quantity_supply),
                                                       gamma=cobweb.gamma_float_slider,
                                                       p_bar=cobweb.p_bar_float_slider)
notebooks/cobweb-models.ipynb
davidrpugh/sfi-complexity-mooc
mit
<h2> Special case: Linear demand functions </h2> Suppose that the quantity demanded of goods is a simple, decreasing linear function of the observed price. $$ q_t^d = D(p_t) = a - b p_t \implies p_t = D^{-1}(q_t^d) = \frac{a}{b} - \frac{1}{b}q_t^d \tag{11} $$ ...where $-\infty < a < \infty$ and $0 < b < \infty$.
def quantity_demand(observed_price, a, b):
    """The quantity demanded of goods in period t given the price."""
    quantity = a - b * observed_price
    return quantity

def inverse_demand(quantity_demand, a, b, **params):
    """The price of goods in period t given the quantity demanded."""
    price = (a / b) - (1 / b) * quantity_demand
    return price
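Since the inverse demand function in equation (11) is just the demand function solved for price, composing the two should recover the original price exactly. A quick round-trip check, with illustrative values for $a$ and $b$:

```python
import numpy as np

def quantity_demand(observed_price, a, b):
    """The quantity demanded of goods in period t given the price."""
    return a - b * observed_price

def inverse_demand(quantity_demand, a, b, **params):
    """The price of goods in period t given the quantity demanded."""
    return (a / b) - (1 / b) * quantity_demand

a, b = 10.0, 2.0  # illustrative demand parameters
prices = np.linspace(0.0, 5.0, 11)
recovered = inverse_demand(quantity_demand(prices, a, b), a, b)

# D^{-1}(D(p)) == p for every price
assert np.allclose(recovered, prices)
```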
notebooks/cobweb-models.ipynb
davidrpugh/sfi-complexity-mooc
mit
<h3> Exploring demand shocks </h3> Interactively change the values of $a$ and $b$ to get a feel for how they impact demand. Shocks to $a$ shift the entire demand curve; shocks to $b$ change the slope of the demand curve (higher $b$ implies greater sensitivity to price; lower $b$ implies less sensitivity to price).
interactive_quantity_demand_plot = ipywidgets.interact(cobweb.quantity_demand_plot,
                                                       D=ipywidgets.fixed(quantity_demand),
                                                       a=cobweb.a_float_slider,
                                                       b=cobweb.b_float_slider)
notebooks/cobweb-models.ipynb
davidrpugh/sfi-complexity-mooc
mit