One-hot encode

Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.

Hint: Don't reinvent the wheel.
```python
def one_hot_encode(x):
    """
    One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
    : x: List of sample labels
    : return: Numpy array of one-hot encoded labels
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
```
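One possible sketch (not the only valid solution) uses NumPy's identity matrix as the encoding map, kept outside the function so every call returns the same encoding for a given label:

```python
import numpy as np

# Encoding map built once, outside the function, so repeated calls
# return the same encoding for a given label value (labels are 0 to 9).
N_CLASSES = 10
ENCODINGS = np.eye(N_CLASSES)

def one_hot_encode(x):
    """Return a one-hot encoded NumPy array for a list of labels in 0..9."""
    return ENCODINGS[np.asarray(x)]
```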
image-classification/dlnd_image_classification.ipynb
khalido/deep-learning
mit
Build the network

For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.

Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.

However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.

Let's begin!

Input

The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:

* Implement neural_net_image_input
  * Return a TF Placeholder
  * Set the shape using image_shape with batch size set to None.
  * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
  * Return a TF Placeholder
  * Set the shape using n_classes with batch size set to None.
  * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
  * Return a TF Placeholder for dropout keep probability.
  * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.

These names will be used at the end of the project to load your saved model.

Note: None for shapes in TensorFlow allows for a dynamic size.
```python
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return None

def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    return None

def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
```
Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:

* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
  * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
  * We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
```python
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
```
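The spatial bookkeeping behind this layer can be checked without TensorFlow: with same padding, a convolution or pooling op only divides each spatial dimension by its stride, rounding up. A small sketch (the helper names are illustrative, not part of the exercise):

```python
import math

def same_pad_out_size(in_size, stride):
    """Output spatial size of a SAME-padded conv or pool: ceil(in / stride)."""
    return math.ceil(in_size / stride)

def conv2d_maxpool_shape(in_shape, conv_num_outputs, conv_strides, pool_strides):
    """Shape after conv (SAME) then max pool (SAME) for an input (H, W, C)."""
    h, w, _ = in_shape
    # Convolution with SAME padding only divides by the stride
    h, w = same_pad_out_size(h, conv_strides[0]), same_pad_out_size(w, conv_strides[1])
    # Max pooling with SAME padding does the same
    h, w = same_pad_out_size(h, pool_strides[0]), same_pad_out_size(w, pool_strides[1])
    return (h, w, conv_num_outputs)
```

For a 32x32x3 CIFAR-10 image, a stride-1 convolution followed by a stride-2 pool yields 16x16 feature maps.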
Flatten Layer

Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```python
def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
```
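The reshape itself can be sketched in NumPy (a conceptual stand-in for the TensorFlow version; the flattened image size is simply the product of the non-batch dimensions):

```python
import numpy as np

def flatten(x_tensor):
    """Reshape (Batch, H, W, C) -> (Batch, H*W*C), keeping the batch axis."""
    batch_size = x_tensor.shape[0]
    flat_size = int(np.prod(x_tensor.shape[1:]))
    return x_tensor.reshape(batch_size, flat_size)
```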
Fully-Connected Layer

Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```python
def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
```
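What the layer computes can be sketched in NumPy as x·W + b followed by a nonlinearity (the random weight initialisation and the ReLU choice here are assumptions for illustration, not the required solution):

```python
import numpy as np

def fully_conn(x_tensor, num_outputs, seed=0):
    """Dense layer sketch: (Batch, In) -> (Batch, num_outputs) with ReLU."""
    rng = np.random.default_rng(seed)
    n_inputs = x_tensor.shape[1]
    weights = rng.standard_normal((n_inputs, num_outputs)) * 0.1
    bias = np.zeros(num_outputs)
    return np.maximum(0.0, x_tensor @ weights + bias)  # ReLU activation
```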
Output Layer

Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs).

Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.

Note: Activation, softmax, or cross entropy should not be applied to this.
```python
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
```
Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:

* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
```python
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #    flatten(x_tensor)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #    fully_conn(x_tensor, num_outputs)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #    output(x_tensor, num_outputs)

    # TODO: return output
    return None


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
```
Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:

* x for image input
* y for labels
* keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.
```python
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    pass


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
```
Show Stats

Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
```python
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # TODO: Implement Function
    pass
```
Hyperparameters

Tune the following parameters:

* Set epochs to the number of iterations until the network stops learning or starts overfitting.
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: 64, 128, 256, ...
* Set keep_probability to the probability of keeping a node using dropout.
```python
# TODO: Tune Parameters
epochs = None
batch_size = None
keep_probability = None
```
A univariate example
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import statsmodels.api as sm
from statsmodels.distributions.mixture_rvs import mixture_rvs

np.random.seed(12345)  # Seed the random number generator for reproducible results
```
examples/notebooks/kernel_density.ipynb
jseabold/statsmodels
bsd-3-clause
We create a bimodal distribution: a mixture of two normal distributions with locations at -1 and 1.
```python
# Location, scale and weight for the two distributions
dist1_loc, dist1_scale, weight1 = -1, .5, .25
dist2_loc, dist2_scale, weight2 = 1, .5, .75

# Sample from a mixture of distributions
obs_dist = mixture_rvs(prob=[weight1, weight2], size=250,
                       dist=[stats.norm, stats.norm],
                       kwargs=(dict(loc=dist1_loc, scale=dist1_scale),
                               dict(loc=dist2_loc, scale=dist2_scale)))
```
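The same mixture can be drawn with plain NumPy, which makes the mechanics explicit: a component index is drawn for every sample, then each sample comes from its component's normal. The function name is an illustration, not the statsmodels API:

```python
import numpy as np

def sample_normal_mixture(weights, locs, scales, size, seed=12345):
    """Draw `size` samples from a 1-D mixture of normal distributions."""
    rng = np.random.default_rng(seed)
    # First choose which component each sample belongs to ...
    components = rng.choice(len(weights), size=size, p=weights)
    # ... then sample from that component's normal distribution.
    return rng.normal(loc=np.take(locs, components),
                      scale=np.take(scales, components))
```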
The simplest non-parametric technique for density estimation is the histogram.
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)

# Scatter plot of data samples and histogram
ax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size)), zorder=15,
           color='red', marker='x', alpha=0.5, label='Samples')
lines = ax.hist(obs_dist, bins=20, edgecolor='k', label='Histogram')

ax.legend(loc='best')
ax.grid(True, zorder=-5)
```
Fitting with the default arguments

The histogram above is discontinuous. To compute a continuous probability density function, we can use kernel density estimation. We initialize a univariate kernel density estimator using KDEUnivariate.
```python
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit()  # Estimate the densities
```
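For comparison, SciPy's `gaussian_kde` gives a similar Gaussian-kernel estimate with a rule-of-thumb bandwidth. A minimal sketch on synthetic bimodal data (the sample here is generated locally; it stands in for obs_dist):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Bimodal sample analogous to obs_dist above
sample = np.concatenate([rng.normal(-1, 0.5, 60), rng.normal(1, 0.5, 190)])

kde = stats.gaussian_kde(sample)  # Scott's rule bandwidth by default
grid = np.linspace(-3, 3, 200)
density = kde(grid)               # Evaluate the estimated pdf on a grid
```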
We present a figure of the fit, as well as the true distribution.
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)

# Plot the histogram
ax.hist(obs_dist, bins=20, density=True, label='Histogram from samples',
        zorder=5, edgecolor='k', alpha=0.5)

# Plot the KDE as fitted using the default arguments
ax.plot(kde.support, kde.density, lw=3, label='KDE from samples', zorder=10)

# Plot the true distribution
true_values = (stats.norm.pdf(loc=dist1_loc, scale=dist1_scale, x=kde.support) * weight1
               + stats.norm.pdf(loc=dist2_loc, scale=dist2_scale, x=kde.support) * weight2)
ax.plot(kde.support, true_values, lw=3, label='True distribution', zorder=15)

# Plot the samples
ax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size)) / 40,
           marker='x', color='red', zorder=20, label='Samples', alpha=0.5)

ax.legend(loc='best')
ax.grid(True, zorder=-5)
```
In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.

Varying the bandwidth using the bw argument

The bandwidth of the kernel can be adjusted using the bw argument. In the following example, a bandwidth of bw=0.2 seems to fit the data well.
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)

# Plot the histogram
ax.hist(obs_dist, bins=25, label='Histogram from samples',
        zorder=5, edgecolor='k', density=True, alpha=0.5)

# Plot the KDE for various bandwidths
for bandwidth in [0.1, 0.2, 0.4]:
    kde.fit(bw=bandwidth)  # Estimate the densities
    ax.plot(kde.support, kde.density, '--', lw=2, color='k', zorder=10,
            label='KDE from samples, bw = {}'.format(round(bandwidth, 2)))

# Plot the true distribution
ax.plot(kde.support, true_values, lw=3, label='True distribution', zorder=15)

# Plot the samples
ax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size)) / 50,
           marker='x', color='red', zorder=20, label='Data samples', alpha=0.5)

ax.legend(loc='best')
ax.set_xlim([-3, 3])
ax.grid(True, zorder=-5)
```
Comparing kernel functions

In the example above, a Gaussian kernel was used. Several other kernels are also available.
```python
from statsmodels.nonparametric.kde import kernel_switch
list(kernel_switch.keys())
```
The available kernel functions
```python
# Create a figure
fig = plt.figure(figsize=(12, 5))

# Enumerate every option for the kernel
for i, (ker_name, ker_class) in enumerate(kernel_switch.items()):

    # Initialize the kernel object
    kernel = ker_class()

    # Sample from the domain
    domain = kernel.domain or [-3, 3]
    x_vals = np.linspace(*domain, num=2**10)
    y_vals = kernel(x_vals)

    # Create a subplot, set the title
    ax = fig.add_subplot(2, 4, i + 1)
    ax.set_title('Kernel function "{}"'.format(ker_name))
    ax.plot(x_vals, y_vals, lw=3, label='{}'.format(ker_name))
    ax.scatter([0], [0], marker='x', color='red')
    plt.grid(True, zorder=-5)
    ax.set_xlim(domain)

plt.tight_layout()
```
The available kernel functions on three data points

We now examine how the kernel density estimate will fit to three equally spaced data points.
```python
# Create three equidistant points
data = np.linspace(-1, 1, 3)
kde = sm.nonparametric.KDEUnivariate(data)

# Create a figure
fig = plt.figure(figsize=(12, 5))

# Enumerate every option for the kernel
for i, kernel in enumerate(kernel_switch.keys()):

    # Create a subplot, set the title
    ax = fig.add_subplot(2, 4, i + 1)
    ax.set_title('Kernel function "{}"'.format(kernel))

    # Fit the model (estimate densities)
    kde.fit(kernel=kernel, fft=False, gridsize=2**10)

    # Create the plot
    ax.plot(kde.support, kde.density, lw=3, label='KDE from samples', zorder=10)
    ax.scatter(data, np.zeros_like(data), marker='x', color='red')
    plt.grid(True, zorder=-5)
    ax.set_xlim([-3, 3])

plt.tight_layout()
```
A more difficult case

The fit is not always perfect. See the example below for a harder case.
```python
obs_dist = mixture_rvs([.25, .75], size=250,
                       dist=[stats.norm, stats.beta],
                       kwargs=(dict(loc=-1, scale=.5),
                               dict(loc=1, scale=1, args=(1, .5))))

kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit()

fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.hist(obs_dist, bins=20, density=True, edgecolor='k', zorder=4, alpha=0.5)
ax.plot(kde.support, kde.density, lw=3, zorder=7)

# Plot the samples
ax.scatter(obs_dist, np.abs(np.random.randn(obs_dist.size)) / 50,
           marker='x', color='red', zorder=20, label='Data samples', alpha=0.5)
ax.grid(True, zorder=-5)
```
The KDE is a distribution

Since the KDE is a distribution, we can access attributes and methods such as:

* entropy
* evaluate
* cdf
* icdf
* sf
* cumhazard
```python
obs_dist = mixture_rvs([.25, .75], size=1000,
                       dist=[stats.norm, stats.norm],
                       kwargs=(dict(loc=-1, scale=.5), dict(loc=1, scale=.5)))
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit(gridsize=2**10)

kde.entropy
kde.evaluate(-1)
```
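The distribution-like behaviour can be mirrored with SciPy: calling a `gaussian_kde` object evaluates the density, and `integrate_box_1d` plays the role of the CDF (a sketch on locally generated data, not the statsmodels API):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Same mixture as above: 25% mass near -1, 75% near 1
sample = np.concatenate([rng.normal(-1, 0.5, 250), rng.normal(1, 0.5, 750)])
kde = stats.gaussian_kde(sample)

density_at_minus1 = kde(-1)[0]               # analogous to kde.evaluate(-1)
cdf_at_0 = kde.integrate_box_1d(-np.inf, 0)  # analogous to the CDF at 0
```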
Cumulative distribution, its inverse, and the survival function
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)

ax.plot(kde.support, kde.cdf, lw=3, label='CDF')
ax.plot(np.linspace(0, 1, num=kde.icdf.size), kde.icdf, lw=3, label='Inverse CDF')
ax.plot(kde.support, kde.sf, lw=3, label='Survival function')

ax.legend(loc='best')
ax.grid(True, zorder=-5)
```
The Cumulative Hazard Function
```python
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)

ax.plot(kde.support, kde.cumhazard, lw=3, label='Cumulative Hazard Function')

ax.legend(loc='best')
ax.grid(True, zorder=-5)
```
Introduction

The Corpus Callosum (CC) is the largest white matter structure in the central nervous system; it connects both brain hemispheres and allows communication between them. The CC has great importance in research studies due to the correlation of its shape and volume with some subject characteristics, such as gender, age, numeric and mathematical skills, and handedness. In addition, some neurodegenerative diseases like Alzheimer's, autism, schizophrenia and dyslexia can cause CC shape deformation.

CC segmentation is a necessary step for morphological and physiological feature extraction in order to analyze the structure in image-based clinical and research applications. Magnetic Resonance Imaging (MRI) is the most suitable imaging technique for CC segmentation due to its ability to provide contrast between brain tissues. However, CC segmentation is challenging because of the shape and intensity variability between subjects, the partial volume effect in diffusion MRI, the proximity of the fornix, and narrow areas of the CC. Among the known MRI modalities, Diffusion-MRI arouses special interest for studying the CC, despite its low resolution and high complexity, since it provides useful information related to the organization of brain tissues and the magnetic field does not interfere with the diffusion process itself.

Some CC segmentation approaches using Diffusion-MRI have been reported in the literature. Niogi et al. proposed a method based on thresholding; Freitas et al. and Rittner et al. proposed region methods based on the Watershed transform; Nazem-Zadeh et al. implemented a method based on level surfaces; Kong et al. presented a clustering algorithm for segmentation; Herrera et al. segmented the CC directly in diffusion weighted imaging (DWI) using a model based on pixel classification; and Garcia et al. proposed a hybrid segmentation method based on active geodesic regions and level surfaces.
With the growth of data and the proliferation of automatic algorithms, segmentation over large databases is affordable. Therefore, automatic error detection is important in order to facilitate and speed up the filtering of CC segmentation databases. Earlier work presented proposals for content-based image retrieval (CBIR) using the shape signature of a planar object representation.

In this work, a method for automatic detection of segmentation errors in large datasets is proposed, based on the CC shape signature. The signature offers a shape characterization of the CC, and therefore it is expected that a "typical correct signature" represents well any correct segmentation. The signature is extracted by measuring curvature along the segmentation contour.

The method was implemented in three main stages: mean correct signature generation, signature configuration and method testing. The first stage takes 20 correct segmentations and generates one correct signature of reference (the typical correct signature), per resolution, using mean values at each point. The second stage takes 10 correct segmentations and 10 erroneous segmentations and adjusts the optimal resolution and threshold, based on the mean correct signature, that allow detection of erroneous segmentations. The third stage labels a new segmentation as correct or erroneous by comparing it with the mean signature using the optimal resolution and threshold.

<img src="../figures/workflow.png">

The comparison between signatures is done using root mean square error (RMSE). The true label for each segmentation was assigned visually: a correct segmentation corresponds to a segmentation with at least 50% agreement with the structure. It is expected that the RMSE for correct segmentations is lower than the RMSE associated with erroneous segmentations when compared with a typical correct signature.
```python
import numpy as np
import matplotlib.pyplot as plt
from numpy import genfromtxt

# Loading labeled segmentations
seg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv',
                       delimiter=',').astype('uint8')

list_mask = seg_label[seg_label[:, 1] == 0, 0][:20]           # Correct segmentations for the mean signature
list_normal_mask = seg_label[seg_label[:, 1] == 0, 0][20:30]  # Correct segmentations for configuration
list_error_mask = seg_label[seg_label[:, 1] == 1, 0][:10]     # Erroneous segmentations for configuration

mask_correct = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[0]))
mask_error = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_error_mask[0]))

plt.figure()
plt.axis('off')
plt.imshow(mask_correct, 'gray', interpolation='none')
plt.title("Correct segmentation example")
plt.show()

plt.figure()
plt.axis('off')
plt.imshow(mask_error, 'gray', interpolation='none')
plt.title("Erroneous segmentation example")
plt.show()
```
dev/mean-WJGH.ipynb
wilomaku/IA369Z
gpl-3.0
Shape signature for comparison

The signature is a shape descriptor that measures the rate of variation along the segmentation contour. As shown in the figure, the curvature $k$ at the pivot point $p$, with coordinates ($x_p$,$y_p$), is calculated using the equation below. This curvature depicts the angle between the segments $\overline{(x_{p-ls},y_{p-ls})(x_p,y_p)}$ and $\overline{(x_p,y_p)(x_{p+ls},y_{p+ls})}$. These segments span a distance $ls>0$, starting at the pivot point and finishing at the anterior and posterior points, respectively. The signature is obtained by calculating the curvature along the whole segmentation contour.

\begin{equation} \label{eq:per1} k(x_p,y_p) = \arctan\left(\frac{y_{p+ls}-y_p}{x_{p+ls}-x_p}\right)-\arctan\left(\frac{y_p-y_{p-ls}}{x_p-x_{p-ls}}\right) \end{equation}

<img src="../figures/curvature.png">

Signature construction starts from the segmentation contour of the CC. From the contour, a spline is obtained. The spline's purpose is twofold: to get a smooth representation of the contour and to facilitate calculation of the curvature using its parametric representation. The signature is obtained by measuring curvature along the spline. $ls$ is the parametric distance between the pivot point and both the posterior and anterior points, and it determines the signature resolution. For simplicity, $ls$ is measured as a percentage of the reconstructed spline points.

In order to achieve a quantitative comparison between two signatures, the root mean square error (RMSE) is introduced. RMSE measures the distance, point to point, between signatures $a$ and $b$ along all points $p$ of the signatures.

\begin{equation} \label{eq:per4} RMSE = \sqrt{\frac{1}{P}\sum_{p=1}^{P}(k_{ap}-k_{bp})^2} \end{equation}

Frequently, signatures of different segmentations are not aligned along the 'x' axis, because the initial point of the spline calculation starts at different relative positions. This makes it impossible to compare two signatures directly, and therefore a prior fitting process must be accomplished.
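The curvature equation above can be sketched directly in NumPy for a closed contour. The sketch uses np.roll to reach the anterior and posterior points and np.arctan2 instead of the literal ratio (an implementation choice that avoids division by zero); the function name is illustrative:

```python
import numpy as np

def signature(xs, ys, ls):
    """Curvature k at every point of a closed contour (xs, ys),
    using anterior/posterior points `ls` samples away."""
    x_post, y_post = np.roll(xs, -ls), np.roll(ys, -ls)
    x_ant, y_ant = np.roll(xs, ls), np.roll(ys, ls)
    k = (np.arctan2(y_post - ys, x_post - xs)
         - np.arctan2(ys - y_ant, xs - x_ant))
    # Wrap angle differences back into (-pi, pi)
    return (k + np.pi) % (2 * np.pi) - np.pi
```

On a uniformly sampled circle this yields a constant curvature, as expected for a shape with constant rate of variation.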
The fitting process is done by shifting one of the signatures while the other is kept fixed. For each shift, the RMSE between the two signatures is measured. The shift giving the smallest error is the fitting point. Fitting was done at resolution $ls = 0.35$; this resolution represents the CC's shape globally and eases the fitting. After fitting, the RMSE between signatures can be measured in order to achieve the final quantitative comparison.

Signature for segmentation error detection

For segmentation error detection, a typical correct signature is obtained by calculating the mean over a group of signatures from correct segmentations. Because this signature could be used at any resolution, $ls$ must be chosen to achieve segmentation error detection. The optimal resolution must be able to return the greatest RMSE difference between correct and erroneous segmentations when compared with a typical correct signature. At the optimal resolution, a threshold must be chosen to separate erroneous and correct segmentations. This threshold lies between the RMSE associated with correct ($RMSE_C$) and erroneous ($RMSE_E$) signatures and is given by the next equation, where N (in percentage) represents proximity to the correct or erroneous RMSE. If RMSE is calculated over a group of signatures, the mean value is used.

\begin{equation} \label{eq:eq3} th = N*(\overline{RMSE_E}-\overline{RMSE_C})+\overline{RMSE_C} \end{equation}

Experiments and results

In this work, comparison of signatures through RMSE is used for segmentation error detection in large datasets. For this, a mean correct signature is calculated from 20 correct segmentation signatures. This mean correct signature represents a typical correct segmentation. For a new segmentation, the signature is extracted and compared with the mean signature.
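The RMSE comparison, the shift-based fitting and the threshold rule described above can be sketched as follows (the helper names are illustrative):

```python
import numpy as np

def rmse(sig_a, sig_b):
    """Root mean square error between two signatures, point to point."""
    return np.sqrt(np.mean((sig_a - sig_b) ** 2))

def fit_shift(sig_ref, sig):
    """Circularly shift `sig` to the offset that minimises RMSE against `sig_ref`."""
    errors = [rmse(sig_ref, np.roll(sig, s)) for s in range(len(sig))]
    return np.roll(sig, int(np.argmin(errors)))

def threshold(rmse_correct, rmse_error, n=0.3):
    """th = N*(mean(RMSE_E) - mean(RMSE_C)) + mean(RMSE_C)."""
    return n * (np.mean(rmse_error) - np.mean(rmse_correct)) + np.mean(rmse_correct)
```

A signature shifted by an arbitrary offset is recovered exactly by `fit_shift`, since the minimising shift undoes the offset.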
For the experiments, DWI from 152 subjects at the University of Campinas were acquired on a Philips Achieva 3T scanner in the axial plane with a $1$x$1mm$ spatial resolution and $2mm$ slice thickness, along $32$ directions ($b-value=1000s/mm^2$, $TR=8.5s$, and $TE=61ms$). All data used in this experiment were acquired through a project approved by the research ethics committee of the School of Medicine at UNICAMP. From each acquired DWI volume, only the midsagittal slice was used.

Three segmentation methods were implemented to obtain binary masks over the 152-subject dataset: Watershed, ROQS and pixel-based. 40 Watershed segmentations were chosen as follows: 20 correct segmentations for mean correct signature generation, and 10 correct and 10 erroneous segmentations for the signature configuration stage. Watershed was chosen to generate and adjust the mean signature because of its higher error rate and the variability of its erroneous segmentation shapes; these characteristics help improve generalization. The method was tested on the remaining Watershed segmentations (108 masks) and two additional segmentation methods: ROQS (152 masks) and pixel-based (152 masks).

Mean correct signature generation

In this work, segmentations based on the Watershed method were used for the implementation of the first and second stages. From the Watershed dataset, 20 correct segmentations were chosen, and a spline for each one was obtained from the segmentation contour. The contour was obtained using mathematical morphology, applying a pixel-wise xor logical operation between the original segmentation and the eroded version of itself by a structuring element b:

\begin{equation} \label{eq:per2} G_E = XOR(S,S \ominus b) \end{equation}

From the contour, the spline is calculated. The implementation is a B-spline (de Boor's basis spline).
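The contour-extraction step $G_E = XOR(S, S \ominus b)$ can be sketched with scipy.ndimage (the default 3x3 cross structuring element stands in for b):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def contour(mask):
    """Inner contour of a binary mask: the pixels removed by one erosion."""
    eroded = binary_erosion(mask)  # default structuring element: 3x3 cross
    return np.logical_xor(mask, eroded)

# Tiny example: a 3x3 block whose border pixels form the contour
mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True
edge = contour(mask)
```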
This formulation has two parameters: degree, the polynomial degree of the spline, and smoothness, the trade-off between proximity and smoothness in the fit of the spline. The degree was fixed at 5, allowing an adequate representation of the contour. The smoothness was fixed at 700; this value is based on the mean number of contour pixels passed to the spline calculation. The curvature was measured at 500 points over the spline to generate the signature for each of the 20 segmentations. Signatures were fitted to make comparison possible (Fig. signatures). The fitting resolution was fixed at 0.35.
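The spline step can be sketched with SciPy's parametric B-spline routines, using degree 5 and smoothness 700 as stated in the text. The contour here is a synthetic pixel-scale ellipse standing in for a real CC contour, so the smoothness value is only illustrative:

```python
import numpy as np
from scipy import interpolate

# Synthetic closed contour (pixel-scale ellipse) standing in for the CC contour
t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
xs, ys = 50 * np.cos(t), 20 * np.sin(t)

# Periodic parametric B-spline: degree 5, smoothing factor 700
tck, u = interpolate.splprep([xs, ys], k=5, s=700, per=True)

# Reconstruct 500 points along the spline, as done for the signature
u_new = np.linspace(0, 1, 500)
x_spl, y_spl = interpolate.splev(u_new, tck)
```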
```python
n_list = len(list_mask)

smoothness = 700  # Smoothness
degree = 5        # Spline degree
fit_res = 0.35

resols = np.arange(0.01, 0.5, 0.01)     # Signature resolutions
resols = np.insert(resols, 0, fit_res)  # Insert resolution for signature fitting
points = 500                            # Points of spline reconstruction

refer_wat = np.empty((n_list, resols.shape[0], points))  # Initializing signature vector
for mask in range(n_list):
    mask_p = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(list_mask[mask]))
    refer_temp = sign_extract(mask_p, resols)  # Function for shape signature extraction
    refer_wat[mask] = refer_temp
    if mask > 0:  # Fitting curves using the first one as basis
        prof_ref = refer_wat[0]
        refer_wat[mask] = sign_fit(prof_ref[0], refer_temp)  # Function for signature fitting

print("Signatures' vector size: ", refer_wat.shape)

res_ex = 10
plt.figure()
plt.plot(refer_wat[:, res_ex, :].T)
plt.title("Signatures for res: %f" % (resols[res_ex]))
plt.show()
```
dev/mean-WJGH.ipynb
wilomaku/IA369Z
gpl-3.0
In order to obtain a representative correct signature, a mean signature per resolution is generated from the 20 correct signatures. The mean is calculated at each point.
refer_wat_mean = np.mean(refer_wat,axis=0)  #Finding mean signature per resolution
print "Mean signature size: ", refer_wat_mean.shape

plt.figure()  #Plotting mean signature
plt.plot(refer_wat_mean[res_ex,:])
plt.title("Mean signature for res: %f"%(resols[res_ex]))
plt.show()
The RMSE over the 10 correct segmentations was compared with the RMSE over the 10 erroneous segmentations. As expected, the RMSE for erroneous segmentations was greater than the RMSE for correct segmentations across all resolutions. In general this holds, but the optimal resolution guarantees the maximum difference between the two RMSE results, correct and erroneous. So, to find the optimal resolution, the difference between erroneous and correct RMSE was calculated over all resolutions.
rmse_nacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_n)**2,axis=2)/(refer_wat_mean.shape[1]))
rmse_eacum = np.sqrt(np.sum((refer_wat_mean - refer_wat_e)**2,axis=2)/(refer_wat_mean.shape[1]))

dif_dis = rmse_eacum - rmse_nacum  #Difference between erroneous and correct signatures
in_max_res = np.argmax(np.mean(dif_dis,axis=0))  #Finding optimal resolution at maximum difference
opt_res = resols[in_max_res]
print "Optimal resolution for error detection: ", opt_res

correct_max = np.mean(rmse_nacum[:,in_max_res])  #Finding threshold to separate segmentations
error_min = np.mean(rmse_eacum[:,in_max_res])
th_res = 0.3*(error_min-correct_max)+correct_max
print "Threshold for separating segmentations: ", th_res

#### Plotting erroneous and correct segmentation signatures
ticksx_resols = ["%.2f" % el for el in np.arange(0.01,0.5,0.01)]  #Labels for plot xticks
ticksx_resols = ticksx_resols[::6]
ticksx_index = np.arange(1,50,6)

figpr = plt.figure()  #Plotting mean RMSE for correct segmentations
plt.boxplot(rmse_nacum[:,1:], showmeans=True)  #Element 0 was introduced only for fitting;
                                               #it is not used in the comparison
plt.axhline(y=0, color='g', linestyle='--')
plt.axhline(y=th_res, color='r', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('RMSE correct signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()

figpr = plt.figure()  #Plotting mean RMSE for erroneous segmentations
plt.boxplot(rmse_eacum[:,1:], showmeans=True)
plt.axhline(y=0, color='g', linestyle='--')
plt.axhline(y=th_res, color='r', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('RMSE error signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()

figpr = plt.figure()  #Plotting difference of mean RMSE over all resolutions
plt.boxplot(dif_dis[:,1:], showmeans=True)
plt.axhline(y=0, color='g', linestyle='--')
plt.axvline(x=in_max_res, color='r', linestyle='--')
plt.xlabel('Resolutions', fontsize = 12, labelpad=-2)
plt.ylabel('Difference RMSE signatures', fontsize = 12)
plt.xticks(ticksx_index, ticksx_resols)
plt.show()
The greatest difference occurred at resolution 0.1. At this resolution, the threshold for separating erroneous and correct segmentations is set at 30% of the distance between the mean RMSE of the correct masks and the mean RMSE of the erroneous masks. Method testing Finally, the method was tested on the 152-subject dataset: the Watershed dataset with 112 segmentations, the ROQS dataset with 152 segmentations, and the pixel-based dataset with 152 segmentations.
n_resols = [fit_res, opt_res]  #Resolutions for fitting and comparison

#### Test dataset (Watershed)
seg_label = genfromtxt('../dataset/Seg_Watershed/watershed_label.csv', delimiter=',').astype('uint8')  #Loading labels
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0][30:],
                     seg_label[seg_label[:,1] == 1, 0][10:]))  #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1][30:],
                     seg_label[seg_label[:,1] == 1, 1][10:]))  #Extracting erroneous and correct labels

refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res]))  #Mean signature with fitting
                                                                                #and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points))  #Initializing signature vector
in_mask = 0
for mask in all_seg:
    mask_ = np.load('../dataset/Seg_Watershed/mask_wate_{}.npy'.format(mask))
    refer_temp = sign_extract(mask_, n_resols)  #Function for shape signature extraction
    refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp)  #Function for signature fitting

    ###### Uncomment this block to see each segmentation with true and predicted labels
    #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
    #plt.figure()
    #plt.axis('off')
    #plt.imshow(mask_,'gray',interpolation='none')
    #plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
    #plt.show()

    in_mask += 1

#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res  #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg))  #Comparing method result with true labels
print "Final accuracy on Watershed {} segmentations: {}".format(len(comp_seg), np.sum(comp_seg)/(1.0*len(comp_seg)))

#### Test dataset (ROQS)
seg_label = genfromtxt('../dataset/Seg_ROQS/roqs_label.csv', delimiter=',').astype('uint8')  #Loading labels
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],
                     seg_label[seg_label[:,1] == 1, 0]))  #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],
                     seg_label[seg_label[:,1] == 1, 1]))  #Extracting erroneous and correct labels

refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res]))  #Mean signature with fitting
                                                                                #and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points))  #Initializing signature vector
in_mask = 0
for mask in all_seg:
    mask_ = np.load('../dataset/Seg_ROQS/mask_roqs_{}.npy'.format(mask))
    refer_temp = sign_extract(mask_, n_resols)  #Function for shape signature extraction
    refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp)  #Function for signature fitting

    ###### Uncomment this block to see each segmentation with true and predicted labels
    #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
    #plt.figure()
    #plt.axis('off')
    #plt.imshow(mask_,'gray',interpolation='none')
    #plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
    #plt.show()

    in_mask += 1

#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res  #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg))  #Comparing method result with true labels
print "Final accuracy on ROQS {} segmentations: {}".format(len(comp_seg), np.sum(comp_seg)/(1.0*len(comp_seg)))

#### Test dataset (pixel-based)
seg_label = genfromtxt('../dataset/Seg_pixel/pixel_label.csv', delimiter=',').astype('uint8')  #Loading labels
all_seg = np.hstack((seg_label[seg_label[:,1] == 0, 0],
                     seg_label[seg_label[:,1] == 1, 0]))  #Extracting erroneous and correct names
lab_seg = np.hstack((seg_label[seg_label[:,1] == 0, 1],
                     seg_label[seg_label[:,1] == 1, 1]))  #Extracting erroneous and correct labels

refer_wat_mean_opt = np.vstack((refer_wat_mean[0],refer_wat_mean[in_max_res]))  #Mean signature with fitting
                                                                                #and optimal resolution
refer_seg = np.empty((all_seg.shape[0],len(n_resols),points))  #Initializing signature vector
in_mask = 0
for mask in all_seg:
    mask_ = np.load('../dataset/Seg_pixel/mask_pixe_{}.npy'.format(mask))
    refer_temp = sign_extract(mask_, n_resols)  #Function for shape signature extraction
    refer_seg[in_mask] = sign_fit(refer_wat_mean_opt[0], refer_temp)  #Function for signature fitting

    ###### Uncomment this block to see each segmentation with true and predicted labels
    #RMSE_ = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[in_mask,1])**2)/(refer_wat_mean_opt.shape[1]))
    #plt.figure()
    #plt.axis('off')
    #plt.imshow(mask_,'gray',interpolation='none')
    #plt.title("True label: {}, Predic. label: {}".format(lab_seg[in_mask],(RMSE_>th_res).astype('uint8')))
    #plt.show()

    in_mask += 1

#### Segmentation evaluation result over all segmentations
RMSE = np.sqrt(np.sum((refer_wat_mean_opt[1] - refer_seg[:,1])**2,axis=1)/(refer_wat_mean_opt.shape[1]))
pred_seg = RMSE > th_res  #Apply threshold
comp_seg = np.logical_not(np.logical_xor(pred_seg,lab_seg))  #Comparing method result with true labels
print "Final accuracy on pixel-based {} segmentations: {}".format(len(comp_seg), np.sum(comp_seg)/(1.0*len(comp_seg)))
CCBB Library Imports
import sys
sys.path.append(g_code_location)
notebooks/crispr/Dual CRISPR 2-Constuct Filter.ipynb
ucsd-ccbb/jupyter-genomics
mit
Automated Set-Up
# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py
def describe_var_list(input_var_name_list):
    description_list = ["{0}: {1}\n".format(name, eval(name)) for name in input_var_name_list]
    return "".join(description_list)

from ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp
g_filtered_fastas_dir = check_or_set(g_filtered_fastas_dir, g_trimmed_fastqs_dir)
print(describe_var_list(['g_filtered_fastas_dir']))

from ccbbucsd.utilities.files_and_paths import verify_or_make_dir
verify_or_make_dir(g_filtered_fastas_dir)
Info Logging Pass-Through
from ccbbucsd.utilities.notebook_logging import set_stdout_info_logger
set_stdout_info_logger()
Construct Filtering Functions
import enum

# %load -s TrimType,get_trimmed_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/scaffold_trim.py
class TrimType(enum.Enum):
    FIVE = "5"
    THREE = "3"
    FIVE_THREE = "53"

def get_trimmed_suffix(trimtype):
    return "_trimmed{0}.fastq".format(trimtype.value)

# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_filterer.py
# standard libraries
import logging

# ccbb libraries
from ccbbucsd.utilities.bio_seq_utilities import trim_seq
from ccbbucsd.utilities.basic_fastq import FastqHandler, paired_fastq_generator
from ccbbucsd.utilities.files_and_paths import transform_path

__author__ = "Amanda Birmingham"
__maintainer__ = "Amanda Birmingham"
__email__ = "abirmingham@ucsd.edu"
__status__ = "development"

def get_filtered_file_suffix():
    return "_len_filtered.fastq"

def filter_pair_by_len(min_len, max_len, retain_len, output_dir, fw_fastq_fp, rv_fastq_fp):
    fw_fastq_handler = FastqHandler(fw_fastq_fp)
    rv_fastq_handler = FastqHandler(rv_fastq_fp)
    fw_out_handle, rv_out_handle = _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir)
    counters = {"num_pairs": 0, "num_pairs_passing": 0}
    filtered_fastq_records = _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler,
                                                      min_len, max_len, retain_len, counters)
    for fw_record, rv_record in filtered_fastq_records:
        fw_out_handle.writelines(fw_record.lines)
        rv_out_handle.writelines(rv_record.lines)
    fw_out_handle.close()
    rv_out_handle.close()
    return _summarize_counts(counters)

def _filtered_fastq_generator(fw_fastq_handler, rv_fastq_handler, min_len, max_len, retain_len, counters):
    paired_fastq_records = paired_fastq_generator(fw_fastq_handler, rv_fastq_handler, True)
    for curr_pair_fastq_records in paired_fastq_records:
        counters["num_pairs"] += 1
        _report_progress(counters["num_pairs"])
        fw_record = curr_pair_fastq_records[0]
        fw_passing_seq = _check_and_trim_seq(_get_upper_seq(fw_record), min_len, max_len, retain_len, False)
        if fw_passing_seq is not None:
            rv_record = curr_pair_fastq_records[1]
            rv_passing_seq = _check_and_trim_seq(_get_upper_seq(rv_record), min_len, max_len, retain_len, True)
            if rv_passing_seq is not None:
                counters["num_pairs_passing"] += 1
                fw_record.sequence = fw_passing_seq
                fw_record.quality = trim_seq(fw_record.quality, retain_len, False)
                rv_record.sequence = rv_passing_seq
                rv_record.quality = trim_seq(rv_record.quality, retain_len, True)
                yield fw_record, rv_record

def _open_output_file_pair(fw_fastq_fp, rv_fastq_fp, output_dir):
    fw_fp = transform_path(fw_fastq_fp, output_dir, get_filtered_file_suffix())
    rv_fp = transform_path(rv_fastq_fp, output_dir, get_filtered_file_suffix())
    fw_handle = open(fw_fp, 'w')
    rv_handle = open(rv_fp, 'w')
    return fw_handle, rv_handle

def _report_progress(num_fastq_pairs):
    if num_fastq_pairs % 100000 == 0:
        logging.debug("On fastq pair number {0}".format(num_fastq_pairs))

def _get_upper_seq(fastq_record):
    return fastq_record.sequence.upper()

def _check_and_trim_seq(input_seq, min_len, max_len, retain_len, retain_5p_end):
    result = None
    seq_len = len(input_seq)
    if seq_len >= min_len and seq_len <= max_len:
        result = trim_seq(input_seq, retain_len, retain_5p_end)
    return result

def _summarize_counts(counts_by_type):
    summary_pieces = []
    sorted_keys = sorted(counts_by_type.keys())  # sort to ensure deterministic output ordering
    for curr_key in sorted_keys:
        curr_value = counts_by_type[curr_key]
        summary_pieces.append("{0}:{1}".format(curr_key, curr_value))
    result = ",".join(summary_pieces)
    return result

from ccbbucsd.utilities.parallel_process_fastqs import parallel_process_paired_reads, concatenate_parallel_results

g_parallel_results = parallel_process_paired_reads(g_trimmed_fastqs_dir,
                                                   get_trimmed_suffix(TrimType.FIVE_THREE),
                                                   g_num_processors,
                                                   filter_pair_by_len,
                                                   [g_min_trimmed_grna_len, g_max_trimmed_grna_len,
                                                    g_len_of_seq_to_match, g_filtered_fastas_dir])
print(concatenate_parallel_results(g_parallel_results))
Let's go over the columns: - asof_date: the timeframe to which this data applies - timestamp: the simulated date upon which this data point is available to a backtest - open: opening price for the day indicated on asof_date - high: high price for the day indicated on asof_date - low: lowest price for the day indicated by asof_date - close: closing price for asof_date We've done much of the data processing for you. Fields like timestamp and sid are standardized across all our Store Datasets, so the datasets are easy to combine. We have standardized the sid across all our equity databases. We can select columns and rows with ease. Let's go plot it for fun below. 6500 rows is small enough to just convert right over to Pandas.
# Convert it over to a Pandas dataframe for easy charting
vix_df = odo(dataset, pd.DataFrame)
vix_df.plot(x='asof_date', y='close')
plt.xlabel("As of Date (asof_date)")
plt.ylabel("Close Price")
plt.axis([None, None, 0, 100])
plt.title("VIX")
plt.legend().set_visible(False)
notebooks/data/quandl.yahoo_index_vix/notebook.ipynb
quantopian/research_public
apache-2.0
<a id='pipeline'></a> Pipeline Overview Accessing the data in your algorithms & research The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows: Import the data set here from quantopian.pipeline.data.quandl import yahoo_index_vix Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline: pipe.add(yahoo_index_vix.close, 'close')
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume

# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import yahoo_index_vix
Now that we've imported the data, let's take a look at which fields are available for each dataset. You'll find the dataset, the available fields, and the datatypes for each of those fields.
print "Here is the list of available fields per dataset:"
print "---------------------------------------------------\n"

def _print_fields(dataset):
    print "Dataset: %s\n" % dataset.__name__
    print "Fields:"
    for field in list(dataset.columns):
        print "%s - %s" % (field.name, field.dtype)
    print "\n"

for data in (yahoo_index_vix,):
    _print_fields(data)
    print "---------------------------------------------------\n"
Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline. This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
# Let's see what this data looks like when we run it through Pipeline
# This is constructed the same way as you would in the backtester. For more information
# on using Pipeline in Research view this thread:
# https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
pipe = Pipeline()

pipe.add(yahoo_index_vix.open_.latest, 'open')
pipe.add(yahoo_index_vix.close.latest, 'close')
pipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')
pipe.add(yahoo_index_vix.high.latest, 'high')
pipe.add(yahoo_index_vix.low.latest, 'low')
pipe.add(yahoo_index_vix.volume.latest, 'volume')

# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')

# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Taking what we've seen from above, let's see how we'd move that into the backtester.
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output

# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume

# Import the datasets available
# For use in your algorithms
# Using the full dataset in your pipeline algo
from quantopian.pipeline.data.quandl import yahoo_index_vix

def make_pipeline():
    # Create our pipeline
    pipe = Pipeline()

    # Add pipeline factors
    pipe.add(yahoo_index_vix.open_.latest, 'open')
    pipe.add(yahoo_index_vix.close.latest, 'close')
    pipe.add(yahoo_index_vix.adjusted_close.latest, 'adjusted_close')
    pipe.add(yahoo_index_vix.high.latest, 'high')
    pipe.add(yahoo_index_vix.low.latest, 'low')
    pipe.add(yahoo_index_vix.volume.latest, 'volume')

    return pipe

def initialize(context):
    attach_pipeline(make_pipeline(), "pipeline")

def before_trading_start(context, data):
    results = pipeline_output('pipeline')
Basic Concepts

What is "learning from data"? In general, learning from data is a scientific discipline concerned with the design and development of algorithms that allow computers to infer, from data, a model that allows compact representation (unsupervised learning) and/or good generalization (supervised learning). This is an important technology because it enables computational systems to adaptively improve their performance with experience accumulated from the observed data. Most of these algorithms are based on the iterative solution of a mathematical problem that involves data and model. If there were an analytical solution to the problem, that would be the one to adopt, but this is not the case for most problems. So, the most common strategy for learning from data is based on solving a system of equations to find the parameters of the model that minimize a mathematical objective. This is called optimization. The most important technique for solving optimization problems is gradient descent.

Preliminary: Nelder-Mead method for function minimization.

The simplest thing we could try to minimize a function $f(x)$ would be to sample two points relatively near each other, and just repeatedly take a step down away from the largest value. This simple algorithm has a severe limitation: it can't get closer to the true minimum than the step size. The Nelder-Mead method dynamically adjusts the step size based on the loss at the new point. If the new point is better than any previously seen value, it expands the step size to accelerate towards the bottom. Likewise, if the new point is worse, it contracts the step size to converge around the minimum. The usual settings are to halve the step size when contracting and double it when expanding. This method can easily be extended to higher-dimensional problems: all that's required is taking one more point than there are dimensions.
Then, the simplest approach is to replace the worst point with a point reflected through the centroid of the remaining $n$ points. If this point is better than the best current point, we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, we are stepping across a valley, so we shrink the step towards a better point. See "An Interactive Tutorial on Numerical Optimization": http://www.benfrederickson.com/numerical-optimization/

Gradient descent (for hackers): 1-D

Let's suppose that we have a function $f: \Re \rightarrow \Re$. For example: $$f(x) = x^2$$ Our objective is to find the argument $x$ that minimizes this function (for maximization, consider $-f(x)$). To this end, the critical concept is the derivative. The derivative of $f$ with respect to a variable $x$, written $f'(x)$ or $\frac{\mathrm{d}f}{\mathrm{d}x}$, is a measure of the rate at which the value of the function changes with respect to a change of the variable. It is defined as the following limit: $$ f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$ The derivative specifies how to scale a small change in the input in order to obtain the corresponding change in the output: $$ f(x + h) \approx f(x) + h f'(x)$$
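Before moving on to derivatives, the Nelder-Mead procedure described above can be tried off the shelf; a quick sketch using SciPy (assumed available here) on a simple convex bowl with minimum at $(1, -2)$:

```python
import numpy as np
from scipy.optimize import minimize

# a convex bowl whose minimum is at (1, -2)
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

# Nelder-Mead needs no derivatives: it only evaluates f at simplex vertices
res = minimize(f, x0=np.array([5.0, 5.0]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
```

Starting far from the minimum, the reflect/expand/contract steps shrink the simplex around $(1, -2)$ without ever computing a gradient.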
# numerical derivative at a point x
def f(x):
    return x**2

def fin_dif(x, f, h = 0.00001):
    '''
    This method returns the derivative of f at x
    by using the finite difference method
    '''
    return (f(x+h) - f(x))/h

x = 2.0
print "{:2.4f}".format(fin_dif(x,f))
It can be shown that the "centered difference formula" is better when computing numerical derivatives: $$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h} $$ The error in the "finite difference" approximation can be derived from Taylor's theorem and, assuming that $f$ is differentiable, is $O(h)$. In the case of the "centered difference" the error is $O(h^2)$. The derivative tells us how to change $x$ in order to make a small improvement in $f$.

Minimization

Then, we can follow these steps to decrease the value of the function:

Start from a random $x$ value. Compute the derivative $f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x - h)}{2h}$. Walk a small step (possibly weighted by the derivative modulus) in the opposite direction of the derivative, because we know that $f(x - h \, \mbox{sign}(f'(x)))$ is less than $f(x)$ for small enough $h$.

The search for the minimum ends when the derivative is zero, because we have no more information about which direction to move. $x$ is a critical or stationary point if $f'(x)=0$. A minimum (maximum) is a critical point where $f(x)$ is lower (higher) than at all neighboring points. There is a third class of critical points: saddle points. If $f$ is a convex function, a critical point is the minimum (maximum) of our function. In other cases it could be a local minimum (maximum) or a saddle point.

There are two problems with numerical derivatives: + They are approximate. + They are slow to evaluate (two function evaluations: $f(x + h)$, $f(x - h)$).

Step size

Usually, we multiply the derivative by a step size. This step size (often called alpha) has to be chosen carefully, as a value too small will result in a long computation time, while a value too large will not give the right result (by overshooting) or even fail to converge.

Analytical derivative

Let's suppose now that we know the analytical derivative. Then we need only one function evaluation!
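The $O(h)$ vs. $O(h^2)$ claim is easy to check numerically; a quick sketch comparing the forward and centered formulas on $f(x)=x^3$ at $x=2$, where the exact derivative is $3x^2 = 12$:

```python
def forward_diff(f, x, h=1e-4):
    # one-sided formula: error grows like O(h)
    return (f(x + h) - f(x)) / h

def centered_diff(f, x, h=1e-4):
    # symmetric formula: the O(h) terms cancel, leaving O(h^2) error
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x ** 3  # exact derivative: 3*x**2
x = 2.0
err_fwd = abs(forward_diff(f, x) - 12.0)   # ~ 3*x*h = 6e-4
err_cen = abs(centered_diff(f, x) - 12.0)  # ~ h**2 = 1e-8
```

For $x^3$ the forward error is approximately $3xh$ while the centered error is exactly $h^2$, so shrinking $h$ by 10 cuts the centered error by 100.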
old_min = 0
temp_min = 15
step_size = 0.01
precision = 0.0001

def f(x):
    return x**2 - 6*x + 5

def f_derivative(x):
    return 2*x - 6

mins = []
cost = []
while abs(temp_min - old_min) > precision:
    old_min = temp_min
    move = f_derivative(old_min) * step_size
    temp_min = old_min - move
    cost.append((3-temp_min)**2)
    mins.append(temp_min)

# rounding the result to 2 digits because of the step size
print "Local minimum occurs at {:3.6f}.".format(round(temp_min,2))
Exercise What happens if step_size=1.0?
# your solution
An important feature of gradient descent is that there should be a visible improvement over time. In the following example, we simply plotted the change in the value of the minimum against the iteration during which it was calculated. As we can see, the distance gets smaller over time, but barely changes in later iterations.
x, y = (zip(*enumerate(cost)))
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-', alpha=0.7)
plt.ylim([-10,150])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.show()

x = np.linspace(-10,20,100)
y = x**2 - 6*x + 5
fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x,y, 'r-')
plt.ylim([-10,250])
plt.gcf().set_size_inches((10,3))
plt.grid(True)
plt.plot(mins,cost,'o', alpha=0.3)
ax.text(mins[-1], cost[-1]+20, 'End (%s steps)' % len(mins), ha='center',
        color=sns.xkcd_rgb['blue'])
plt.show()
From derivatives to gradient: $n$-dimensional function minimization. Let's consider a $n$-dimensional function $f: \Re^n \rightarrow \Re$. For example: $$f(\mathbf{x}) = \sum_{n} x_n^2$$ Our objective is to find the argument $\mathbf{x}$ that minimizes this function. The gradient of $f$ is the vector whose components are the $n$ partial derivatives of $f$. It is thus a vector-valued function. The gradient points in the direction of the greatest rate of increase of the function. $$\nabla {f} = (\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n})$$
def f(x):
    return sum(x_i**2 for x_i in x)

def fin_dif_partial_centered(x, f, i, h=1e-6):
    w1 = [x_j + (h if j==i else 0) for j, x_j in enumerate(x)]
    w2 = [x_j - (h if j==i else 0) for j, x_j in enumerate(x)]
    return (f(w1) - f(w2))/(2*h)

def gradient_centered(x, f, h=1e-6):
    return [round(fin_dif_partial_centered(x,f,i,h), 10) for i,_ in enumerate(x)]

x = [1.0,1.0,1.0]
print '{:.6f}'.format(f(x)), gradient_centered(x,f)
The function we have evaluated, $f({\mathbf x}) = x_1^2+x_2^2+x_3^2$, is $3$ at $(1,1,1)$ and the gradient vector at this point is $(2,2,2)$. Then, we can follow these steps to maximize (or minimize) the function: Start from a random $\mathbf{x}$ vector. Compute the gradient vector. Walk a small step in the opposite direction of the gradient vector. It is important to be aware that this gradient computation is very expensive: if $\mathbf{x}$ has dimension $n$, we have to evaluate $f$ at $2n$ points. How to use the gradient. $f(\mathbf x) = \sum_i x_i^2$ takes its minimum value when all $x_i$ are 0. Let's check it for $n=3$:
import math
import numpy as np

def euc_dist(v1, v2):
    v = np.array(v1) - np.array(v2)
    return math.sqrt(sum(v_i ** 2 for v_i in v))
Let's start by choosing a random vector and then walking a step in the opposite direction of the gradient vector. We will stop when the difference (in $\mathbf x$) between the new solution and the old solution is less than a tolerance value.
# choosing a random vector
import random
import numpy as np

x = [random.randint(-10,10) for i in range(3)]
x

def step(x, grad, alpha):
    return [x_i - alpha * grad_i for x_i, grad_i in zip(x,grad)]

tol = 1e-15
alpha = 0.01
while True:
    grad = gradient_centered(x,f)
    next_x = step(x,grad,alpha)
    if euc_dist(next_x,x) < tol:
        break
    x = next_x

print [round(i,10) for i in x]
Choosing Alpha

The step size, alpha, is a slippery concept: if it is too small we will converge slowly to the solution; if it is too large we may diverge from it. There are several policies for selecting the step size: Constant step size. In this case, the step size determines the precision of the solution. Decreasing step sizes. At each step, select the optimal step (the one that yields the lowest $f(\mathbf x)$). The last policy is good, but too expensive. In this case we consider a fixed set of values:
step_size = [100, 10, 1, 0.1, 0.01, 0.001, 0.0001, 0.00001]
Learning from data

In general, we have: A dataset ${(\mathbf{x},y)}$ of $n$ examples. A target function $f_\mathbf{w}$, that we want to minimize, representing the discrepancy between our data and the model we want to fit. The model is represented by a set of parameters $\mathbf{w}$. The gradient of the target function, $g_f$. In the most common case $f$ represents the errors from a data representation model $M$. For example, fitting the model could mean finding the optimal parameters $\mathbf{w}$ that minimize the following expression: $$ f_\mathbf{w} = \frac{1}{n} \sum_{i} (y_i - M(\mathbf{x}_i,\mathbf{w}))^2 $$ For example, $(\mathbf{x},y)$ can represent: $\mathbf{x}$: the behavior of a "Candy Crush" player; $y$: monthly payments. $\mathbf{x}$: sensor data about your car engine; $y$: probability of engine failure. $\mathbf{x}$: financial data of a bank customer; $y$: customer rating. If $y$ is a real value, it is called a regression problem. If $y$ is binary/categorical, it is called a classification problem. Let's suppose that our model is a one-dimensional linear model $M(\mathbf{x},\mathbf{w}) = w \cdot x$.

Batch gradient descent

We can implement gradient descent in the following way (batch gradient descent):
import numpy as np
import random

# f = 2x
x = np.arange(10)
y = np.array([2*i for i in x])

# f_target = 1/n Sum (y - wx)**2
def target_f(x, y, w):
    return np.sum((y - x * w)**2.0) / x.size

# gradient_f = 2/n Sum 2wx**2 - 2xy
def gradient_f(x, y, w):
    return 2 * np.sum(2*w*(x**2) - 2*x*y) / x.size

def step(w, grad, alpha):
    return w - alpha * grad

def BGD_multi_step(target_f, gradient_f, x, y, toler = 1e-6):
    alphas = [100, 10, 1, 0.1, 0.001, 0.00001]
    w = random.random()
    val = target_f(x,y,w)
    i = 0
    while True:
        i += 1
        gradient = gradient_f(x,y,w)
        next_ws = [step(w, gradient, alpha) for alpha in alphas]
        next_vals = [target_f(x,y,w) for w in next_ws]
        min_val = min(next_vals)
        next_w = next_ws[next_vals.index(min_val)]
        next_val = target_f(x,y,next_w)
        if (abs(val - next_val) < toler):
            return w
        else:
            w, val = next_w, next_val

print '{:.6f}'.format(BGD_multi_step(target_f, gradient_f, x, y))

%%timeit
BGD_multi_step(target_f, gradient_f, x, y)

def BGD(target_f, gradient_f, x, y, toler = 1e-6, alpha=0.01):
    w = random.random()
    val = target_f(x,y,w)
    i = 0
    while True:
        i += 1
        gradient = gradient_f(x,y,w)
        next_w = step(w, gradient, alpha)
        next_val = target_f(x,y,next_w)
        if (abs(val - next_val) < toler):
            return w
        else:
            w, val = next_w, next_val

print '{:.6f}'.format(BGD(target_f, gradient_f, x, y))

%%timeit
BGD(target_f, gradient_f, x, y)
1. Learning from data and optimization.ipynb
DeepLearningUB/EBISS2017
mit
Stochastic Gradient Descent

The last function evaluates the whole dataset $(\mathbf{x}_i,y_i)$ at every step. If the dataset is large, this strategy is too costly. In that case we use a strategy called SGD (Stochastic Gradient Descent).

When learning from data, the cost function is additive: it is computed by adding per-sample reconstruction errors. Then, we can estimate the gradient (and move towards the minimum) by using only one data sample (or a small subsample of the data). Thus, we will find the minimum by iterating this gradient estimation over the dataset. A full iteration over the dataset is called an epoch. During an epoch, data must be used in a random order.

If we apply this method we have some theoretical guarantees of finding a good minimum:

+ SGD essentially uses an inaccurate gradient at each iteration. Since there is no free lunch, what is the cost of using an approximate gradient? The answer is that the convergence rate is slower than that of the batch gradient descent algorithm.
+ The convergence of SGD has been analyzed using the theories of convex minimization and of stochastic approximation: it converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum.
def in_random_order(data):
    import random
    indexes = [i for i, _ in enumerate(data)]
    random.shuffle(indexes)
    for i in indexes:
        yield data[i]

import numpy as np
import random

def SGD(target_f, gradient_f, x, y, toler=1e-6, epochs=100, alpha_0=0.01):
    data = list(zip(x, y))
    w = random.random()
    alpha = alpha_0
    min_w, min_val = float('inf'), float('inf')
    epoch = 0
    iteration_no_increase = 0
    while epoch < epochs and iteration_no_increase < 100:
        val = target_f(x, y, w)
        if min_val - val > toler:
            min_w, min_val = w, val
            alpha = alpha_0
            iteration_no_increase = 0
        else:
            iteration_no_increase += 1
            alpha *= 0.9
        for x_i, y_i in in_random_order(data):
            gradient_i = gradient_f(x_i, y_i, w)
            w = w - alpha * gradient_i
        epoch += 1
    return min_w

print('w: {:.6f}'.format(SGD(target_f, gradient_f, x, y)))
1. Learning from data and optimization.ipynb
DeepLearningUB/EBISS2017
mit
Example: Stochastic Gradient Descent and Linear Regression

The linear regression model assumes a linear relationship between data:

$$ y_i = w_1 x_i + w_0 $$

Let's generate a more realistic dataset (with noise), where $w_1 = 2$ and $w_0 = 0$.

The bias trick. It is a little cumbersome to keep track separately of $w_i$, the feature weights, and $w_0$, the bias. A commonly used trick is to combine these parameters into a single structure that holds both of them, by extending the vector $x$ with one additional dimension that always holds the constant $1$. With this extra dimension the model simplifies to a single multiply, $f(\mathbf{x},\mathbf{w}) = \mathbf{w} \cdot \mathbf{x}$.
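As a minimal sketch of the bias trick (toy values chosen for illustration), appending a constant-1 column to the inputs folds $w_0$ into the weight vector:

```python
import numpy as np

# Hypothetical toy inputs; the trailing column of ones absorbs the bias w0.
x = np.array([0.2, 0.5, 0.9])
X = np.column_stack([x, np.ones_like(x)])   # each row is (x_i, 1)

w = np.array([2.0, 0.5])                    # packed (w1, w0)
y_hat = X @ w                               # equals w1*x + w0 for every sample
```

With this representation, the model and its gradient only ever deal with a single weight vector.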
%reset

import warnings
warnings.filterwarnings('ignore')

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import random
%matplotlib inline

# x: input data
# y: noisy output data
x = np.random.uniform(0, 1, 20)

# f = 2x + 0
def f(x):
    return 2*x + 0

noise_variance = 0.1
noise = np.random.randn(x.shape[0]) * noise_variance
y = f(x) + noise

fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$f(x)$', fontsize=15)
plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.ylim([0, 2])
plt.gcf().set_size_inches((10, 3))
plt.grid(True)
plt.show()

# f_target = 1/n Sum (y - wx)**2
def target_f(x, y, w):
    return np.sum((y - x * w)**2.0) / x.size

# gradient_f = -2/n Sum x(y - wx)
def gradient_f(x, y, w):
    return -2 * np.sum(x * (y - x * w)) / x.size

def in_random_order(data):
    indexes = [i for i, _ in enumerate(data)]
    random.shuffle(indexes)
    for i in indexes:
        yield data[i]

def SGD(target_f, gradient_f, x, y, toler=1e-6, epochs=100, alpha_0=0.01):
    data = list(zip(x, y))
    w = random.random()
    alpha = alpha_0
    min_w, min_val = float('inf'), float('inf')
    iteration_no_increase = 0
    w_cost = []
    epoch = 0
    while epoch < epochs and iteration_no_increase < 100:
        val = target_f(x, y, w)
        if min_val - val > toler:
            min_w, min_val = w, val
            alpha = alpha_0
            iteration_no_increase = 0
        else:
            iteration_no_increase += 1
            alpha *= 0.9
        for x_i, y_i in in_random_order(data):
            gradient_i = gradient_f(x_i, y_i, w)
            w = w - alpha * gradient_i
            w_cost.append(target_f(x, y, w))
        epoch += 1
    return min_w, np.array(w_cost)

w, target_value = SGD(target_f, gradient_f, x, y)
print('w: {:.6f}'.format(w))

fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.title('input vs. target')
plt.ylim([0, 2])
plt.gcf().set_size_inches((10, 3))
plt.grid(True)
plt.show()

fig, ax = plt.subplots(1, 1)
fig.set_facecolor('#EAEAF2')
plt.plot(np.arange(target_value.size), target_value, 'o', alpha=0.2)
plt.xlabel('Iteration')
plt.ylabel('Cost')
plt.gcf().set_size_inches((10, 3))
plt.grid(True)
plt.show()
1. Learning from data and optimization.ipynb
DeepLearningUB/EBISS2017
mit
Mini-batch Gradient Descent

In code, batch gradient descent looks something like this:

```python
nb_epochs = 100
for i in range(nb_epochs):
    grad = evaluate_gradient(target_f, data, w)
    w = w - learning_rate * grad
```

For a pre-defined number of epochs, we first compute the gradient vector of the target function over the whole dataset w.r.t. our parameter vector.

Stochastic gradient descent (SGD), in contrast, performs a parameter update for each training example and label:

```python
nb_epochs = 100
for i in range(nb_epochs):
    np.random.shuffle(data)
    for sample in data:
        grad = evaluate_gradient(target_f, sample, w)
        w = w - learning_rate * grad
```

Mini-batch gradient descent finally takes the best of both worlds and performs an update for every mini-batch of $n$ training examples:

```python
nb_epochs = 100
for i in range(nb_epochs):
    np.random.shuffle(data)
    for batch in get_batches(data, batch_size=50):
        grad = evaluate_gradient(target_f, batch, w)
        w = w - learning_rate * grad
```

Mini-batch SGD has the advantage that it works with a slightly less noisy estimate of the gradient. However, as the mini-batch size increases, the number of parameter updates per unit of computation decreases (eventually it becomes very inefficient, like batch gradient descent). There is an optimal trade-off (in terms of computational efficiency) that may vary depending on the data distribution and the particulars of the class of function considered, as well as how computations are implemented.
def get_batches(iterable, n=1):
    current_batch = []
    for item in iterable:
        current_batch.append(item)
        if len(current_batch) == n:
            yield current_batch
            current_batch = []
    if current_batch:
        yield current_batch

%reset

import warnings
warnings.filterwarnings('ignore')

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import random
%matplotlib inline

# x: input data
# y: noisy output data
x = np.random.uniform(0, 1, 2000)

# f = 2x + 0
def f(x):
    return 2*x + 0

noise_variance = 0.1
noise = np.random.randn(x.shape[0]) * noise_variance
y = f(x) + noise

plt.plot(x, y, 'o', label='y')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)')
plt.xlabel('$x$', fontsize=15)
plt.ylabel('$t$', fontsize=15)
plt.ylim([0, 2])
plt.title('inputs (x) vs targets (y)')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10, 3))
plt.show()

# f_target = 1/n Sum (y - wx)**2
def target_f(x, y, w):
    return np.sum((y - x * w)**2.0) / x.size

# gradient_f = -2/n Sum x(y - wx)
def gradient_f(x, y, w):
    return -2 * np.sum(x * (y - x * w)) / x.size

def in_random_order(data):
    indexes = [i for i, _ in enumerate(data)]
    random.shuffle(indexes)
    for i in indexes:
        yield data[i]

def get_batches(iterable, n=1):
    current_batch = []
    for item in iterable:
        current_batch.append(item)
        if len(current_batch) == n:
            yield current_batch
            current_batch = []
    if current_batch:
        yield current_batch

def SGD_MB(target_f, gradient_f, x, y, epochs=100, alpha_0=0.01):
    data = list(zip(x, y))
    w = random.random()
    alpha = alpha_0
    min_w, min_val = float('inf'), float('inf')
    epoch = 0
    while epoch < epochs:
        val = target_f(x, y, w)
        if val < min_val:
            min_w, min_val = w, val
            alpha = alpha_0
        else:
            alpha *= 0.9
        np.random.shuffle(data)
        for batch in get_batches(data, n=100):
            x_batch = np.array([pair[0] for pair in batch])
            y_batch = np.array([pair[1] for pair in batch])
            gradient = gradient_f(x_batch, y_batch, w)
            w = w - alpha * gradient
        epoch += 1
    return min_w

w = SGD_MB(target_f, gradient_f, x, y)
print('w: {:.6f}'.format(w))

plt.plot(x, y, 'o', label='t')
plt.plot([0, 1], [f(0), f(1)], 'b-', label='f(x)', alpha=0.5)
plt.plot([0, 1], [0*w, 1*w], 'r-', label='fitted line', alpha=0.5, linestyle='--')
plt.xlabel('input x')
plt.ylabel('target t')
plt.ylim([0, 2])
plt.title('input vs. target')
plt.grid()
plt.legend(loc=2)
plt.gcf().set_size_inches((10, 3))
plt.show()
1. Learning from data and optimization.ipynb
DeepLearningUB/EBISS2017
mit
The data goes all the way back to 1947 and is updated monthly. Blaze provides us with the first 10 rows of the data for display. Just to confirm, let's count the number of rows in the Blaze expression:
fred_unrate.count()
notebooks/data/quandl.fred_unrate/notebook.ipynb
quantopian/research_public
apache-2.0
Let's go plot it for fun. This data set is definitely small enough to put right into a pandas DataFrame.
unrate_df = odo(fred_unrate, pd.DataFrame)
unrate_df.plot(x='asof_date', y='value')
plt.xlabel("As Of Date (asof_date)")
plt.ylabel("Unemployment Rate")
plt.title("United States Unemployment Rate")
plt.legend().set_visible(False)
notebooks/data/quandl.fred_unrate/notebook.ipynb
quantopian/research_public
apache-2.0
Next, we define a function that produces a rotor from a set of Euler angles.
def R_euler(phi, theta, psi):
    Rphi = e**(-phi/2.*e12)
    Rtheta = e**(-theta/2.*e23)
    Rpsi = e**(-psi/2.*e12)
    return Rphi*Rtheta*Rpsi
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
For example, using this to create a rotation similar to that shown in the animation above,
R = R_euler(pi/4, pi/4, pi/4)
R
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
Convert to Quaternions

A rotor in 3D space is a unit quaternion, and so we have essentially created a function that converts Euler angles to quaternions. All you need to do is interpret the bivectors as $i,j,$ and $k$'s. See Interfacing Other Mathematical Systems for more on quaternions.

Convert to Rotation Matrix

The matrix representation of a rotation can be defined as the result of rotating an orthonormal frame. Rotating an orthonormal frame can be done easily,
A = [e1, e2, e3]          # initial orthonormal frame
B = [R*a*~R for a in A]   # resultant frame after rotation
B
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
The components of this frame are the rotation matrix, so we just enter the frame components into a matrix.
from numpy import array

M = [float(b|a) for b in B for a in A]  # you need float() due to bug in clifford
M = array(M).reshape(3, 3)
M
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
That's a rotation matrix.

Convert a Rotation Matrix to a Rotor

In 3 dimensions, there is a simple formula which can be used to directly transform a rotation matrix into a rotor. For arbitrary dimensions you have to use a different algorithm (see clifford.tools.orthoMat2Versor() (docs)). Anyway, in 3 dimensions there is a closed-form solution, as described in Sec. 4.3.3 of "Geometric Algebra for Physicists". Given a rotor $R$ which transforms an orthonormal frame $A=\{a_k\}$ into $B=\{b_k\}$ as such,

$$b_k = Ra_k\tilde{R}$$

$R$ is given by

$$R= \frac{1+a_kb_k}{|1+a_kb_k|}$$

So, if you want to convert from a rotation matrix into a rotor, start by converting the matrix $M$ into a frame $B$. (You could do this with a loop if you want.)
B = [M[0,0]*e1 + M[1,0]*e2 + M[2,0]*e3,
     M[0,1]*e1 + M[1,1]*e2 + M[2,1]*e3,
     M[0,2]*e1 + M[1,2]*e2 + M[2,2]*e3]
B
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
Then implement the formula
A = [e1, e2, e3]
R = 1 + sum([A[k]*B[k] for k in range(3)])
R = R/abs(R)
R
docs/tutorials/euler-angles.ipynb
arsenovic/clifford
bsd-3-clause
Import data Creates a dataframe (called "data") and fills it with data from a URL.
data = pd.read_csv("http://web_address.com/filename.csv")
templateTable.ipynb
merryjman/astronomy
gpl-3.0
Display part of the data table
data.head(3)
templateTable.ipynb
merryjman/astronomy
gpl-3.0
Make a scatter plot of two columns' data
# Set variables for scatter plot #
x = data.OneColumnName
y = data.AnotherColumnName

# make the graph
plt.scatter(x, y)
plt.title('title')
plt.xlabel('label')
plt.ylabel('label')

# This actually shows the plot
plt.show()
templateTable.ipynb
merryjman/astronomy
gpl-3.0
Make a histogram of one column's data
plt.hist(data.ColumnName, bins=10, range=[0,100])
templateTable.ipynb
merryjman/astronomy
gpl-3.0
Import the Fang et al. 2016 data
tab1 = pd.read_fwf('../data/Fang2016/Table_1+4_online.dat',
                   na_values=['-99.00000000', '-9999.0', '99.000, 99.0'])
df = tab1.rename(columns={'# Object_name': 'Object_name'})
df.head()
df.Object_name.values[0:30]
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Ugh, the naming convention is non-standard in a way that is likely more work than it's worth to try to match to other catalogs. Whyyyyyyyyy. Try full-blown coordinate matching From astropy: http://docs.astropy.org/en/stable/coordinates/matchsep.html
ra1 = df.RAJ2000
dec1 = df.DEJ2000
ra2 = df_abc.RAdeg
dec2 = df_abc.DEdeg

from astropy.coordinates import SkyCoord
from astropy import units as u

c = SkyCoord(ra=ra1*u.degree, dec=dec1*u.degree)
catalog = SkyCoord(ra=ra2*u.degree, dec=dec2*u.degree)
idx, d2d, d3d = c.match_to_catalog_sky(catalog)

plt.figure(figsize=(10, 10))
plt.plot(ra1, dec1, '.', alpha=0.3)
plt.plot(ra2, dec2, '.', alpha=0.3)

plt.hist(d2d.to(u.arcsecond)/u.arcsecond, bins=np.arange(0, 3, 0.1));
plt.yscale('log')
plt.axvline(x=0.375, color='r', linestyle='dashed')
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Ok, we'll accept all matches with better than 0.375 arcsecond separation.
boolean_matches = d2d.to(u.arcsecond).value < 0.375
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
How many matches are there?
boolean_matches.sum()
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
120 matches--- not bad. Only keep the subset of Fang sources that also have K2 data.
df['EPIC'] = ''
matched_idx = idx[boolean_matches]
matched_idx
df.shape, df_abc.shape
idx.shape
df['EPIC'][boolean_matches] = df_abc['EPIC'].iloc[matched_idx].values

fang_K2 = pd.merge(df_abc, df, how='left', on='EPIC')
fang_K2.columns
fang_K2[['Name_adopt', 'Object_name']][fang_K2.Object_name.notnull()].tail(10)
fang_K2.to_csv('../data/Fang2016/Rebull_Fang_merge.csv', index=False)
fang_K2
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Great correspondence! Looks like there are 120 targets in both categories. Let's spot-check if they use similar temperatures:
plt.figure(figsize=(7, 7))
plt.plot(fang_K2.Teff, fang_K2.Tspec, 'o')
plt.xlabel(r'$T_{\mathrm{eff}}$ Stauffer et al. 2016')
plt.ylabel(r'$T_{\mathrm{spec}}$ Fang et al. 2016')
plt.plot([3000, 6300], [3000, 6300], 'k--')
plt.ylim(3000, 6300)
plt.xlim(3000, 6300);
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
What's the scatter?
delta_Tspec = fang_K2.Teff - fang_K2.Tspec
delta_Tspec = delta_Tspec.dropna()
RMS_Tspec = np.sqrt((delta_Tspec**2.0).sum()/len(delta_Tspec))
print('{:0.0f}'.format(RMS_Tspec))
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
The authors disagree on temperature by about $\delta T \sim$ 100 K RMS.

Let's make the figure we really want to make: K2 amplitude versus spectroscopically measured filling factor of starspots, $f_{spot}$. We expect that the plot will be a little noisy due to differences in temperature assumptions and such, but it is absolutely fundamental. First we need to convert the amplitude in magnitudes to a fraction $\in [0,1]$. The $\Delta V$ in Stauffer et al. 2016 has negative values, so I'm not sure what it is! The Ampl from Rebull et al. 2016 is: Amplitude, in mag, of the 10th to the 90th percentile
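As a quick numeric check of this magnitude-to-fraction conversion (the 0.1 mag value is illustrative, not from the catalog):

```python
# Convert a 10th-to-90th percentile amplitude in magnitudes to a flux fraction.
ampl_mag = 0.1                           # illustrative amplitude, in mag
flux_amp = 1.0 - 10**(-ampl_mag / 2.5)   # ~0.088, i.e. a ~9% flux dip
```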
fang_K2['flux_amp'] = 1.0 - 10**(fang_K2.Ampl/-2.5)

plt.hist(fang_K2.flux_amp, bins=np.arange(0, 0.15, 0.005));

plt.hist(fang_K2.fs1.dropna(), bins=np.arange(0, 0.8, 0.03));

sns.set_context('talk')

plt.figure(figsize=(7, 7))
plt.plot(fang_K2.fs1, fang_K2.flux_amp, '.')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim(0.0, 0.2)
plt.ylim(0, 0.2)
plt.xlabel('LAMOST-measured $f_{spot}$ \n Fang et al. 2016')
plt.ylabel('K2-measured spot amplitude $A \in [0,1)$ \n Rebull et al. 2016')
plt.xticks(np.arange(0, 0.21, 0.05))
plt.yticks(np.arange(0, 0.21, 0.05))
plt.savefig('K2_LAMOST_starspots_data.png', bbox_inches='tight', dpi=300);

plt.figure(figsize=(7, 7))
plt.plot(fang_K2.fs1, fang_K2.flux_amp, '.')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim(0.0, 0.5)
plt.ylim(0, 0.5)
plt.xlabel('LAMOST-measured $f_{spot}$ \n Fang et al. 2016')
plt.ylabel('K2-measured spot amplitude $A \in [0,1)$ \n Rebull et al. 2016')
plt.xticks(np.arange(0, 0.51, 0.1))
plt.yticks(np.arange(0, 0.51, 0.1))
plt.savefig('K2_LAMOST_starspots_wide.png', bbox_inches='tight', dpi=300);
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Awesome! The location of the points indicates that starspots have a large longitudinally-symmetric component that evades detection in K2 amplitudes. What effects can cause or mimic this behavior?

- Unresolved binarity could cause an errant TiO measurement, biasing the Fang et al. measurement.
- Increased rotation (Rossby number) could make stronger or weaker dipolar magnetic fields.
- EW H$\alpha$ could be correlated, from an activity sense.
fang_K2.columns
fang_K2.beat.value_counts()
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Crosstabs with discrete variables: Legend
plt.figure(figsize=(7, 7))

cross_tab = 'resc'
c1 = fang_K2[cross_tab] == 'yes'
c2 = fang_K2[cross_tab] == 'no'
plt.plot(fang_K2.fs1[c1], fang_K2.flux_amp[c1], 'r.', label='{} = yes'.format(cross_tab))
plt.plot(fang_K2.fs1[c2], fang_K2.flux_amp[c2], 'b.', label='{} = no'.format(cross_tab))
plt.legend(loc='best')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim(-0.01, 0.8)
plt.ylim(0, 0.15)
plt.xlabel('LAMOST-measured $f_{spot}$ \n Fang et al. 2016')
plt.ylabel('K2-measured spot amplitude $A \in [0,1)$ \n Rebull et al. 2016')
#plt.xticks(np.arange(0, 0.51, 0.1))
#plt.yticks(np.arange(0, 0.51, 0.1))
plt.savefig('K2_LAMOST_starspots_crosstab.png', bbox_inches='tight', dpi=300);
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
Crosstabs with continuous variables: Colorbar
plt.figure(figsize=(7, 7))

cross_tab = 'Mass'
cm = plt.cm.get_cmap('Blues')
sc = plt.scatter(fang_K2.fs1, fang_K2.flux_amp, c=fang_K2[cross_tab], cmap=cm)
cb = plt.colorbar(sc)
#cb.set_label(r'$T_{spot}$ (K)')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim(-0.01, 0.8)
plt.ylim(0, 0.15)
plt.xlabel('LAMOST-measured $f_{spot}$ \n Fang et al. 2016')
plt.ylabel('K2-measured spot amplitude $A \in [0,1)$ \n Rebull et al. 2016')
#plt.xticks(np.arange(0, 0.51, 0.1))
#plt.yticks(np.arange(0, 0.51, 0.1))
plt.savefig('K2_LAMOST_starspots_cb.png', bbox_inches='tight', dpi=300);
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
What about inclination? $$ V = \frac{d}{t} = \frac{2 \pi R}{P} $$ $$ V \sin{i} = \frac{2 \pi R}{P} \sin{i}$$ $$ V \sin{i} \cdot \frac{P}{2 \pi R} = \sin{i}$$ $$ \arcsin{\lgroup V \sin{i} \cdot \frac{P}{2 \pi R} \rgroup} = i$$
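A minimal sketch of this inclination estimate with hypothetical stellar parameters (the vsini, period, and radius values below are made up for illustration, not taken from the table):

```python
import numpy as np

# Hypothetical star: vsini = 10 km/s, P = 5 d, R = 1 solar radius.
vsini_kms = 10.0
period_days = 5.0
radius_solrad = 1.0

KM_PER_SOLRAD = 6.957e5   # nominal solar radius in km
SEC_PER_DAY = 86400.0

# Equatorial velocity V = 2*pi*R / P, in km/s.
v_eq = 2.0 * np.pi * radius_solrad * KM_PER_SOLRAD / (period_days * SEC_PER_DAY)
sini = vsini_kms / v_eq
inclination_deg = np.degrees(np.arcsin(sini))
```

Note that measurement noise can push $\sin{i}$ above 1, in which case `arcsin` returns NaN; the histogram of `vec` in the cell below shows exactly that tail.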
import astropy.units as u

sini = fang_K2.vsini * u.km/u.s * fang_K2.Per1 * u.day / (2.0*np.pi*u.solRad)
vec = sini.values.to(u.dimensionless_unscaled).value

sns.distplot(vec[vec == vec], bins=np.arange(0, 2, 0.1), kde=False)
plt.axvline(1.0, color='k', linestyle='dashed')

inclination = np.arcsin(vec)*180.0/np.pi

sns.distplot(inclination[inclination == inclination], bins=np.arange(0, 90.0, 5), kde=False);

fang_K2['sini'] = vec

plt.figure(figsize=(7, 7))

cross_tab = 'sini'
cm = plt.cm.get_cmap('hot')
sc = plt.scatter(fang_K2.fs1, fang_K2.flux_amp, c=fang_K2[cross_tab], cmap=cm)
cb = plt.colorbar(sc)
cb.set_label(r'$\sin{i}$')
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim(-0.01, 0.7)
plt.ylim(0, 0.15)
plt.xlabel('LAMOST-measured $f_{spot}$ \n Fang et al. 2016')
plt.ylabel('K2-measured spot amplitude $A \in [0,1)$ \n Rebull et al. 2016')
#plt.xticks(np.arange(0, 0.51, 0.1))
#plt.yticks(np.arange(0, 0.51, 0.1))
plt.savefig('K2_LAMOST_starspots_cb.png', bbox_inches='tight', dpi=300);
notebooks/Rebull2016_extra.ipynb
BrownDwarf/ApJdataFrames
mit
K-means clustering Example adapted from here. Load dataset
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Define and train model
num_clusters = 8
model = KMeans(n_clusters=num_clusters)
model.fit(X)
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Extract the labels and the cluster centers
labels = model.labels_
cluster_centers = model.cluster_centers_
print(cluster_centers)
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Plot the clusters
plt.scatter(X[:, 0], X[:, 1], c=labels.astype(float))
# successive calls draw on the same axes, so plt.hold() is not needed
plt.scatter(cluster_centers[:, 0], cluster_centers[:, 1],
            c=np.arange(num_clusters), marker='^', s=150)
plt.show()

plt.scatter(X[:, 0], X[:, 1], c=np.choose(y, [0, 2, 1]).astype(float))
plt.show()
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Gaussian Mixture Model Example taken from here. Define a visualization function
def make_ellipses(gmm, ax):
    """Visualize the Gaussians in a GMM as ellipses"""
    for n, color in enumerate('rgb'):
        v, w = np.linalg.eigh(gmm._get_covars()[n][:2, :2])
        u = w[0] / np.linalg.norm(w[0])
        angle = np.arctan2(u[1], u[0])
        angle = 180 * angle / np.pi  # convert to degrees
        v *= 9
        ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],
                                  180 + angle, color=color)
        ell.set_clip_box(ax.bbox)
        ell.set_alpha(0.5)
        ax.add_artist(ell)
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Load dataset and make training and test splits
iris = datasets.load_iris()

# Break up the dataset into non-overlapping training (75%) and testing
# (25%) sets.
skf = StratifiedKFold(iris.target, n_folds=4)
# Only take the first fold.
train_index, test_index = next(iter(skf))

X_train = iris.data[train_index]
y_train = iris.target[train_index]
X_test = iris.data[test_index]
y_test = iris.target[test_index]

n_classes = len(np.unique(y_train))
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Train and compare different GMMs
# Try GMMs using different types of covariances.
classifiers = dict((covar_type, GMM(n_components=n_classes,
                                    covariance_type=covar_type,
                                    init_params='wc', n_iter=20))
                   for covar_type in ['spherical', 'diag', 'tied', 'full'])

n_classifiers = len(classifiers)

plt.figure(figsize=(2*3 * n_classifiers // 2, 2*6))
plt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05,
                    left=.01, right=.99)

for index, (name, classifier) in enumerate(classifiers.items()):
    # Since we have class labels for the training data, we can
    # initialize the GMM parameters in a supervised manner.
    classifier.means_ = np.array([X_train[y_train == i].mean(axis=0)
                                  for i in range(n_classes)])

    # Train the other parameters using the EM algorithm.
    classifier.fit(X_train)

    h = plt.subplot(2, n_classifiers // 2, index + 1)
    make_ellipses(classifier, h)

    for n, color in enumerate('rgb'):
        data = iris.data[iris.target == n]
        plt.scatter(data[:, 0], data[:, 1], 0.8, color=color,
                    label=iris.target_names[n])
    # Plot the test data with crosses
    for n, color in enumerate('rgb'):
        data = X_test[y_test == n]
        plt.plot(data[:, 0], data[:, 1], 'x', color=color)

    y_train_pred = classifier.predict(X_train)
    train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100
    plt.text(0.05, 0.9, 'Train accuracy: %.1f' % train_accuracy,
             transform=h.transAxes)

    y_test_pred = classifier.predict(X_test)
    test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100
    plt.text(0.05, 0.8, 'Test accuracy: %.1f' % test_accuracy,
             transform=h.transAxes)

    plt.xticks(())
    plt.yticks(())
    plt.title(name)

plt.legend(loc='lower right', prop=dict(size=12))

plt.show()
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Hierarchical Agglomerative Clustering Example taken from here. Load and pre-process dataset
digits = datasets.load_digits(n_class=10)
X = digits.data
y = digits.target
n_samples, n_features = X.shape

np.random.seed(0)

def nudge_images(X, y):
    # Having a larger dataset shows more clearly the behavior of the
    # methods, but we multiply the size of the dataset only by 2, as the
    # cost of the hierarchical clustering methods are strongly
    # super-linear in n_samples
    shift = lambda x: ndimage.shift(x.reshape((8, 8)),
                                    .3 * np.random.normal(size=2),
                                    mode='constant',
                                    ).ravel()
    X = np.concatenate([X, np.apply_along_axis(shift, 1, X)])
    Y = np.concatenate([y, y], axis=0)
    return X, Y

X, y = nudge_images(X, y)
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Visualize the clustering
def plot_clustering(X_red, X, labels, title=None):
    x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
    X_red = (X_red - x_min) / (x_max - x_min)

    plt.figure(figsize=(2*6, 2*4))
    for i in range(X_red.shape[0]):
        plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),
                 color=plt.cm.spectral(labels[i] / 10.),
                 fontdict={'weight': 'bold', 'size': 9})

    plt.xticks([])
    plt.yticks([])
    if title is not None:
        plt.title(title, size=17)
    plt.axis('off')
    plt.tight_layout()
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Create a 2D embedding of the digits dataset
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
Train and visualize the clusters

- Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function, but tackled with an agglomerative hierarchical approach.
- Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters.
- Average linkage minimizes the average of the distances between all observations of pairs of clusters.
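As a tiny sketch of the three linkage criteria on toy 2-D points (separate from the digits example below; on two well-separated blobs all three criteria recover the same partition):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Two well-separated blobs of three points each.
X_toy = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])

toy_labels = {}
for linkage in ('ward', 'average', 'complete'):
    model = AgglomerativeClustering(linkage=linkage, n_clusters=2)
    toy_labels[linkage] = model.fit_predict(X_toy)
```

The criteria differ only in how the distance between two candidate clusters is scored, so on clearly separated data they agree; the differences show up on elongated or overlapping clusters like the digits embedding.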
from time import time
from sklearn.cluster import AgglomerativeClustering

for linkage in ('ward', 'average', 'complete'):
    clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)
    t0 = time()
    clustering.fit(X_red)
    print("%s : %.2fs" % (linkage, time() - t0))

    plot_clustering(X_red, X, clustering.labels_, "%s linkage" % linkage)

plt.show()
lecture6/ML-Anirban_Tutorial6.ipynb
Santara/ML-MOOC-NPTEL
gpl-3.0
These features are:

- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm

Numerical features such as these are pretty straightforward: each sample contains a list of floating-point numbers corresponding to the features.

Categorical Features

What if you have categorical features? For example, imagine there is data on the color of each iris:

color in [red, blue, purple]

You might be tempted to assign numbers to these features, i.e. red=1, blue=2, purple=3, but in general this is a bad idea. Estimators tend to operate under the assumption that numerical features lie on some continuous scale, so, for example, 1 and 2 are more alike than 1 and 3, and this is often not the case for categorical features.

In fact, the example above is a subcategory of "categorical" features, namely, "nominal" features. Nominal features don't imply an order, whereas "ordinal" features are categorical features that do imply an order. An example of ordinal features would be T-shirt sizes, e.g., XL > L > M > S.

One work-around for parsing nominal features into a format that prevents the classification algorithm from asserting an order is the so-called one-hot encoding representation. Here, we give each category its own dimension. The enriched iris feature set would hence be:

- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- color=purple (1.0 or 0.0)
- color=blue (1.0 or 0.0)
- color=red (1.0 or 0.0)

Note that using many of these categorical features may result in data which is better represented as a sparse matrix, as we'll see with the text classification example below.

Using the DictVectorizer to encode categorical features

When the source data is encoded as a list of dicts where the values are either string names for categories or numerical values, you can use the DictVectorizer class to compute the boolean expansion of the categorical features while leaving the numerical features unaffected:
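For the hypothetical iris color feature above, a one-hot expansion can be sketched with pandas (`get_dummies` is one common way; `DictVectorizer`, shown below, is another):

```python
import pandas as pd

# Hypothetical color column for four iris samples.
colors = pd.Series(['red', 'blue', 'purple', 'red'])
onehot = pd.get_dummies(colors, prefix='color')
# Columns: color_blue, color_purple, color_red -- one dimension per category.
```

Each row has exactly one 1, so no spurious ordering between the colors is introduced.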
measurements = [
    {'city': 'Dubai', 'temperature': 33.},
    {'city': 'London', 'temperature': 12.},
    {'city': 'San Francisco', 'temperature': 18.},
]

from sklearn.feature_extraction import DictVectorizer

vec = DictVectorizer()
vec
vec.fit_transform(measurements).toarray()
vec.get_feature_names()
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Derived Features

Another common feature type are derived features, where some pre-processing step is applied to the data to generate features that are somehow more informative. Derived features may be based on feature extraction and dimensionality reduction (such as PCA or manifold learning), may be linear or nonlinear combinations of features (such as in polynomial regression), or may be some more sophisticated transform of the features.

Combining Numerical and Categorical Features

As an example of how to work with both categorical and numerical data, we will perform survival prediction for the passengers of the RMS Titanic. We will use a version of the Titanic data (titanic3.xls) from here. We converted the .xls to .csv for easier manipulation but left the data otherwise unchanged.

We need to read in all the lines from the (titanic3.csv) file, set aside the keys from the first line, and find our labels (who survived or died) and data (attributes of that person). Let's look at the keys and some corresponding example lines.
import os
import pandas as pd

titanic = pd.read_csv(os.path.join('datasets', 'titanic3.csv'))
print(titanic.columns)
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
Here is a broad description of the keys and what they mean:

- pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
- survival: Survival (0 = No; 1 = Yes)
- name: Name
- sex: Sex
- age: Age
- sibsp: Number of Siblings/Spouses Aboard
- parch: Number of Parents/Children Aboard
- ticket: Ticket Number
- fare: Passenger Fare
- cabin: Cabin
- embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
- boat: Lifeboat
- body: Body Identification Number
- home.dest: Home/Destination

In general, it looks like name, sex, cabin, embarked, boat, body, and home.dest may be candidates for categorical features, while the rest appear to be numerical features. We can also look at the first couple of rows in the dataset to get a better understanding:
titanic.head()
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
We clearly want to discard the "boat" and "body" columns for any classification into survived vs not survived as they already contain this information. The name is unique to each person (probably) and also non-informative. For a first try, we will use "pclass", "sibsp", "parch", "fare" and "embarked" as our features:
labels = titanic.survived.values
features = titanic[['pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]
features.head()
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
The data now contains only useful features, but they are not in a format that the machine learning algorithms can understand. We need to transform the strings "male" and "female" into binary variables that indicate the gender, and similarly for "embarked". We can do that using the pandas get_dummies function:
pd.get_dummies(features).head()
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
This transformation successfully encoded the string columns. However, one might argue that the class is also a categorical variable. We can explicitly list the columns to encode using the columns parameter, and include pclass:
features_dummies = pd.get_dummies(features, columns=['pclass', 'sex', 'embarked'])
features_dummies.head(n=16)

data = features_dummies.values

import numpy as np
np.isnan(data).any()
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
With all of the hard data loading work out of the way, evaluating a classifier on this data becomes straightforward. As a baseline, we set up the simplest possible model and see what score we can get with DummyClassifier.
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import Imputer  # in newer scikit-learn versions: sklearn.impute.SimpleImputer

train_data, test_data, train_labels, test_labels = train_test_split(
    data, labels, random_state=0)

# replace missing values (e.g. unknown ages) with the column mean
imp = Imputer()
imp.fit(train_data)
train_data_finite = imp.transform(train_data)
test_data_finite = imp.transform(test_data)
np.isnan(train_data_finite).any()

from sklearn.dummy import DummyClassifier

clf = DummyClassifier(strategy='most_frequent')
clf.fit(train_data_finite, train_labels)
print("Prediction accuracy: %f" % clf.score(test_data_finite, test_labels))
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
<div class="alert alert-success"> <b>EXERCISE</b>: <ul> <li> Try executing the above classification, using LogisticRegression and RandomForestClassifier instead of DummyClassifier </li> <li> Does selecting a different subset of features help? </li> </ul> </div>
# %load solutions/10_titanic.py
notebooks/10.Case_Study-Titanic_Survival.ipynb
amueller/scipy-2017-sklearn
cc0-1.0
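One possible approach to the exercise is sketched below. It is a hedged sketch, not the notebook's official solution: since the datasets/titanic3.csv file may not be available here, a synthetic stand-in with injected missing values mimics the imputation step, and SimpleImputer (the replacement for Imputer in recent scikit-learn versions) is used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer  # newer home of the old Imputer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the one-hot encoded Titanic features,
# with ~5% missing values injected to mimic the 'age' column.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
rng = np.random.RandomState(0)
X[rng.rand(*X.shape) < 0.05] = np.nan

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the imputer on the training split only, then apply it to both splits.
imp = SimpleImputer()
X_train = imp.fit_transform(X_train)
X_test = imp.transform(X_test)

# Compare the two classifiers suggested in the exercise.
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(random_state=0)):
    score = clf.fit(X_train, y_train).score(X_test, y_test)
    print(type(clf).__name__, score)
```

Both classifiers should comfortably beat the DummyClassifier baseline; on the real Titanic features, dropping or adding columns (e.g. including "age" or not) is worth comparing the same way.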
Set all graphics from matplotlib to display inline
#!pip install matplotlib
import matplotlib.pyplot as plt
%matplotlib inline  # this makes matplotlib plots display inline in the notebook
df
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Display the names of the columns in the csv
df.columns
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Display the first 3 animals.
df.head(3)
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Sort the animals to see the 3 longest animals.
df.sort_values('length', ascending=False).head(3)
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs.
df['animal'].value_counts()
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Only select the dogs.
is_dog = df['animal'] == 'dog'  # boolean mask, True for each dog row
df[is_dog]
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
Display all of the animals that are greater than 40 cm.
long_animals = df['length'] > 40
df[long_animals]
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit
'length' is the animal's length in cm. Create a new column called inches that is the length in inches. 1 inch = 2.54 cm
df['length_inches'] = df['length'] / 2.54
df
07/pandas-homework-hon-june13.ipynb
honjy/foundations-homework
mit