Dataset columns: markdown (string, 0–37k chars) · code (string, 1–33.3k chars) · path (string, 8–215 chars) · repo_name (string, 6–77 chars) · license (15 classes)
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized NumPy array. The values should be in the range of 0 to 1, inclusive. The returned object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ return x/255.0 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normali...
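As a quick sanity check of the division-by-255 approach, the sketch below (using NumPy on a synthetic batch, not actual CIFAR-10 data) confirms the range and shape guarantees stated above:

```python
import numpy as np

def normalize(x):
    # Scale uint8 pixel values from [0, 255] down to [0.0, 1.0]
    return x / 255.0

# Hypothetical batch: ten 32x32 RGB images with random uint8 pixel values
batch = np.random.randint(0, 256, size=(10, 32, 32, 3))
out = normalize(batch)

assert out.shape == batch.shape                 # same shape as the input
assert out.min() >= 0.0 and out.max() <= 1.0    # values in [0, 1]
```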
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded NumPy array. The possible values for labels are 0 t...
from sklearn import preprocessing def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ # TODO: Implement Function lb = preprocessing.LabelBinarizer() ...
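The LabelBinarizer call above does the job; for illustration, here is a plain-NumPy equivalent (the name one_hot_encode_np and the 10-class assumption are ours, not part of the notebook):

```python
import numpy as np

def one_hot_encode_np(x, n_classes=10):
    # Row i of the identity matrix is the one-hot vector for label i
    return np.eye(n_classes)[np.asarray(x)]

labels = [0, 3, 9, 3]
encoded = one_hot_encode_np(labels)

assert encoded.shape == (4, 10)
assert (encoded.sum(axis=1) == 1).all()   # exactly one hot entry per row
assert encoded[1, 3] == 1.0
```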
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Randomize Data As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it Running the code cell below wi...
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittest...
import tensorflow as tf # These are tensorflow-gpu settings, but the GPU cannot be used because the network is too big. from keras.backend.tensorflow_backend import set_session config = tf.ConfigProto() config.gpu_options.per_process_gpu_memory_fraction = 0.3 set_session(tf.Session(config=config)) def neural_net_image_...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Convolution and Max Pooling Layer Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling: * Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor. * Apply a convolution to x_tensor...
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernel size 2-D Tuple fo...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a ch...
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # TODO: Implement Function return tf.contrib.layers.flatten...
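The flatten step is pure shape arithmetic; a NumPy sketch of the same reshape (the name flatten_np is ours, not the TensorFlow API):

```python
import numpy as np

def flatten_np(x):
    # Collapse (batch, height, width, channels) into (batch, height*width*channels)
    batch = x.shape[0]
    return x.reshape(batch, -1)

x = np.zeros((5, 10, 30, 6))
assert flatten_np(x).shape == (5, 1800)   # 10 * 30 * 6 = 1800
```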
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packag...
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of outputs that the new tensor should have. : return: A 2-D tensor where the second dimension is num_out...
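A fully connected layer is just a matrix multiply plus a bias; a minimal NumPy sketch of that computation (fully_conn_np is a hypothetical stand-in, not the TensorFlow implementation):

```python
import numpy as np

def fully_conn_np(x, num_outputs, seed=0):
    # y = x @ W + b, with small random weights (sketch only, no training)
    rng = np.random.default_rng(seed)
    n_inputs = x.shape[1]
    W = rng.normal(0.0, 0.1, size=(n_inputs, num_outputs))
    b = np.zeros(num_outputs)
    return x @ W + b

x = np.ones((128, 2048))
out = fully_conn_np(x, num_outputs=100)
assert out.shape == (128, 100)   # (Batch Size, num_outputs)
```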
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Act...
def output(x_tensor, num_outputs): """ Apply an output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of outputs that the new tensor should have. : return: A 2-D tensor where the second dimension is num_outputs. """...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Create Convolutional Model Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model: * Apply 1, 2, or 3 Convolution and Max Pool layers * Apply a Flatten Layer * Apply 1, 2, or 3 Full...
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that holds the dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers ...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Train the Neural Network Single Optimization Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following: * x for image input * y for labels * keep_prob for keep probability for dropout This function will be cal...
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Num...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Hyperparameters Tune the following parameters: * Set epochs to the number of iterations until the network stops learning or starts overfitting * Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ... * Set keep_probability to the...
# TODO: Tune Parameters epochs = 20 batch_size = 256 keep_probability = 0.5
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Train on a Single CIFAR-10 Batch Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section. ...
""" DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_label...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Fully Train the Model Now that you've gotten a good accuracy with a single CIFAR-10 batch, try it with all five batches.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batc...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Checkpoint The model has been saved to disk. Test Model Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
""" DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_clas...
MachineLearning(Advanced)/p5_image_classification/image_classification_ZH-CN.ipynb
StudyExchange/Udacity
mit
Elements Are Lists
root.tag len(root) for child in root: print(child)
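The same list-like behaviour can be demonstrated with the standard library's xml.etree.ElementTree on a tiny synthetic document (a stand-in for the Atom feed used in the text):

```python
import xml.etree.ElementTree as ET

doc = "<feed><title>t</title><entry/><entry/></feed>"
root = ET.fromstring(doc)

assert root.tag == "feed"
assert len(root) == 3   # an Element supports len() like a list
assert [child.tag for child in root] == ["title", "entry", "entry"]
```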
learn_stem/python/dive-into-python-xml.ipynb
wgong/open_source_learning
apache-2.0
Attributes Are Dictionaries
root.attrib c4_att = root[4].attrib c4_att c4_att['rel'],c4_att['href']
learn_stem/python/dive-into-python-xml.ipynb
wgong/open_source_learning
apache-2.0
Searching
# find 1st matching entry tree.find('//{http://www.w3.org/2005/Atom}entry') # find all entry elements tree.findall('//{http://www.w3.org/2005/Atom}entry') # find all category elements tree.findall('//{http://www.w3.org/2005/Atom}category') # find all category element with attribute term="mp4" tree.findall('//{http:/...
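For a self-contained illustration of namespaced searching, this sketch uses the standard library's ElementTree on a minimal synthetic feed (note that ElementTree paths start with './/' rather than the lxml-style '//'):

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
doc = (
    '<feed xmlns="http://www.w3.org/2005/Atom">'
    '<entry><category term="mp4"/></entry>'
    '<entry><category term="txt"/></entry>'
    "</feed>"
)
root = ET.fromstring(doc)

entries = root.findall(f".//{ATOM}entry")
assert len(entries) == 2

# An attribute predicate in the path selects only the mp4 category
mp4 = root.findall(f".//{ATOM}category[@term='mp4']")
assert len(mp4) == 1
```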
learn_stem/python/dive-into-python-xml.ipynb
wgong/open_source_learning
apache-2.0
Generating XML
new_feed = etree.Element('{http://www.w3.org/2005/Atom}feed', attrib={'{http://www.w3.org/XML/1998/namespace}lang': 'en'}) print(etree.tostring(new_feed)) # add more element/text title = etree.SubElement(new_feed, 'title', attrib={'type':'html'}) print(etree.tounicode(new_feed)) title.text = 'Dive into Pyth...
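A similar feed can be generated with the standard library's ElementTree (a sketch only; the cell above uses lxml's etree, which is what supports the xml:lang namespace attribute shown there):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

# Element and SubElement take Clark-notation names: {namespace}localname
new_feed = ET.Element(f"{{{ATOM}}}feed")
title = ET.SubElement(new_feed, f"{{{ATOM}}}title", attrib={"type": "html"})
title.text = "Dive into Python"

xml_bytes = ET.tostring(new_feed)
assert b"Dive into Python" in xml_bytes
assert b"type=\"html\"" in xml_bytes
```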
learn_stem/python/dive-into-python-xml.ipynb
wgong/open_source_learning
apache-2.0
Note that the more global package <i>docplex</i> contains another subpackage <i>docplex.mp</i> that is dedicated to Mathematical Programming, another branch of optimization. Step 2: Model the data The next section defines the data of the problem.
from docplex.cp.model import * # List of possible truck configurations. Each tuple is (load, cost) with: # load: max truck load for this configuration, # cost: cost for loading the truck in this configuration TRUCK_CONFIGURATIONS = ((11, 2), (11, 2), (11, 2), (11, 3), (10, 3), (10, 3), (10, 4)) # List of custom...
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 3: Set up the prescriptive model Prepare data for modeling The next section extracts from the problem data the parts that are frequently used in the modeling section.
nbTruckConfigs = len(TRUCK_CONFIGURATIONS) maxTruckConfigLoad = [tc[0] for tc in TRUCK_CONFIGURATIONS] truckCost = [tc[1] for tc in TRUCK_CONFIGURATIONS] maxLoad = max(maxTruckConfigLoad) nbOrders = len(CUSTOMER_ORDERS) nbCustomers = 1 + max(co[0] for co in CUSTOMER_ORDERS) volumes = [co[1] for co in CUSTOMER_ORDERS] ...
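The derived parameters can be checked in plain Python against the TRUCK_CONFIGURATIONS tuple defined in the data cell above:

```python
# Each tuple is (load, cost), as defined in the data section
TRUCK_CONFIGURATIONS = ((11, 2), (11, 2), (11, 2), (11, 3), (10, 3), (10, 3), (10, 4))

nbTruckConfigs = len(TRUCK_CONFIGURATIONS)
maxTruckConfigLoad = [load for load, cost in TRUCK_CONFIGURATIONS]
truckCost = [cost for load, cost in TRUCK_CONFIGURATIONS]
maxLoad = max(maxTruckConfigLoad)

assert nbTruckConfigs == 7
assert maxLoad == 11
assert truckCost == [2, 2, 2, 3, 3, 3, 4]
```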
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Create CPO model
mdl = CpoModel(name="trucks")
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Define the decision variables
# Configuration of the truck for each delivery truckConfigs = integer_var_list(maxDeliveries, 0, nbTruckConfigs - 1, "truckConfigs") # In which delivery is an order where = integer_var_list(nbOrders, 0, maxDeliveries - 1, "where") # Load of a truck load = integer_var_list(maxDeliveries, 0, maxLoad, "load") # Number of ...
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the business constraints
# transitionCost[i] = transition cost between configurations i and i+1 for i in range(1, maxDeliveries): auxVars = (truckConfigs[i - 1], truckConfigs[i], transitionCost[i - 1]) mdl.add(allowed_assignments(auxVars, CONFIGURATION_TRANSITION_COST)) # Constrain the volume of the orders in each truck mdl.add(pack(l...
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Express the objective
# Objective: first criterion for minimizing the cost for configuring and loading trucks # second criterion for minimizing the number of deliveries cost = sum(transitionCost) + sum(element(truckConfigs[i], truckCost) * (load[i] != 0) for i in range(maxDeliveries)) mdl.add(minimize_static_lex([cost, nbDeliver...
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Solve with Decision Optimization solve service
# Search strategy: first assign order to truck mdl.set_search_phases([search_phase(where)]) # Solve model print("\nSolving model....") msol = mdl.solve(TimeLimit=20)
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
Step 4: Investigate the solution and then run an example analysis
if msol.is_solution(): print("Solution: ") ovals = msol.get_objective_values() print(" Configuration cost: {}, number of deliveries: {}".format(ovals[0], ovals[1])) for i in range(maxDeliveries): ld = msol.get_value(load[i]) if ld > 0: stdout.write(" Delivery {:2d}: confi...
examples/cp/jupyter/truck_fleet.ipynb
IBMDecisionOptimization/docplex-examples
apache-2.0
This time we are not going to generate the data but rather use real world annotated training examples.
# Dataset creation import numpy as np import math import random import csv from neon.datasets.dataset import Dataset class WorkoutDS(Dataset): # Number of features per example feature_count = None # Number of examples num_train_examples = None num_test_examples = None # Number of classes ...
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
First, we want to inspect the class distribution of the training and test examples.
from ipy_table import * from operator import itemgetter import numpy as np train_dist = np.reshape(np.transpose(np.sum(dataset.y_train, axis=0)), (dataset.num_labels,1)) test_dist = np.reshape(np.transpose(np.sum(dataset.y_test, axis=0)), (dataset.num_labels,1)) train_ratio = train_dist / dataset.num_train_examples t...
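The distribution computation boils down to column sums over the one-hot label matrix; a small NumPy sketch with synthetic labels (standing in for dataset.y_train):

```python
import numpy as np

# Synthetic one-hot label matrix: six examples over three classes
y_train = np.eye(3)[np.array([0, 0, 1, 2, 2, 2])]

counts = y_train.sum(axis=0)          # examples per class
ratios = counts / y_train.shape[0]    # class distribution

assert counts.tolist() == [2.0, 1.0, 3.0]
assert abs(ratios.sum() - 1.0) < 1e-9   # ratios form a distribution
```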
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
Let's have a look at the generated data. We will plot some of the examples of the different classes.
from matplotlib import pyplot, cm from pylab import * # Choose some random examples to plot from the training data number_of_examples_to_plot = 3 plot_ids = np.random.random_integers(0, dataset.num_train_examples - 1, number_of_examples_to_plot) print "Ids of plotted examples:",plot_ids # Retrieve a human readable l...
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
Now we are going to create a neon model. We will start with a really simple one-layer perceptron with 500 hidden units.
from neon.backends import gen_backend from neon.layers import * from neon.models import MLP from neon.transforms import RectLin, Tanh, Logistic, CrossEntropy from neon.experiments import FitPredictErrorExperiment from neon.params import val_init from neon.util.persist import serialize # General settings max_epochs = 7...
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
To check whether the network is learning something, we will plot the weight matrices of the different training epochs.
import numpy as np import math from matplotlib import pyplot, cm from pylab import * from IPython.html import widgets from IPython.html.widgets import interact def closestSqrt(i): N = int(math.sqrt(i)) while True: M = int(i / N) if N * M == i: return N, M N -= 1 def...
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
Let's also have a look at the confusion matrix for the test dataset.
from sklearn.metrics import confusion_matrix from ipy_table import * # confusion_matrix(y_true, y_pred) predicted, actual = model.predict_fullset(dataset, "test") y_pred = np.argmax(predicted.asnumpyarray(), axis = 0) y_true = np.argmax(actual.asnumpyarray(), axis = 0) confusion_mat = confusion_matrix(y_true, y_pr...
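For reference, the counting behind confusion_matrix can be sketched in a few lines of NumPy (confusion_np is our illustrative name, not the sklearn API):

```python
import numpy as np

def confusion_np(y_true, y_pred, n_classes):
    # m[i, j] = number of examples of true class i predicted as class j
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

m = confusion_np([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2)
assert m.tolist() == [[1, 1], [0, 2]]
assert m.trace() == 3   # correctly classified examples sit on the diagonal
```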
ipython-analysis/exercise-cnn.ipynb
datastax-demos/Muvr-Analytics
bsd-3-clause
ARIMA Example 1: Arima As can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so tha...
# Dataset wpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content data = pd.read_stata(BytesIO(wpi1)) data.index = data.t # Fit the model mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1)) res = mod.fit() print(res.summary())
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
Thus the maximum likelihood estimates imply that for the process above, we have: $$ \Delta y_t = 0.1050 + 0.8740 \Delta y_{t-1} - 0.4206 \epsilon_{t-1} + \epsilon_{t} $$ where $\epsilon_{t} \sim N(0, 0.5226)$. Finally, recall that $c = (1 - \phi_1) \beta_0$, and here $c = 0.1050$ and $\phi_1 = 0.8740$. To compare with ...
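The implied mean of the differenced series follows directly from $c = (1 - \phi_1) \beta_0$; the arithmetic below uses the estimates quoted above (the resulting $\beta_0 \approx 0.83$ is our computation, not a value stated in the source):

```python
# Solve c = (1 - phi_1) * beta_0 for beta_0
c, phi_1 = 0.1050, 0.8740
beta_0 = c / (1 - phi_1)
print(round(beta_0, 4))  # → 0.8333
```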
# Dataset data = pd.read_stata(BytesIO(wpi1)) data.index = data.t data['ln_wpi'] = np.log(data['wpi']) data['D.ln_wpi'] = data['ln_wpi'].diff() # Graph data fig, axes = plt.subplots(1, 2, figsize=(15,4)) # Levels axes[0].plot(data.index._mpl_repr(), data['wpi'], '-') axes[0].set(title='US Wholesale Price Index') # L...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model: python mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1)) The order argument is a tuple of the form (AR specification, Integration order, MA specific...
# Fit the model mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1)) res = mod.fit() print(res.summary())
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
ARIMA Example 3: Airline Model In the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. It may be instead that we want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \...
# Dataset air2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content data = pd.read_stata(BytesIO(air2)) data.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS') data['lnair'] = np.log(data['air']) # Fit the model mod = sm.tsa.statespace.SARIMAX(data['lnair'], order...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a nu...
# Dataset friedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content data = pd.read_stata(BytesIO(friedman2)) data.index = data.time # Variables endog = data.ix['1959':'1981', 'consump'] exog = sm.add_constant(data.ix['1959':'1981', 'm2']) # Fit the model mod = sm.tsa.statespace.SARIMAX(end...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
ARIMA Postestimation: Example 1 - Dynamic Forecasting Here we describe some of the post-estimation capabilities of Statsmodels' SARIMAX. First, using the model from example, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considerin...
# Dataset raw = pd.read_stata(BytesIO(friedman2)) raw.index = raw.time data = raw.ix[:'1981'] # Variables endog = data.ix['1959':, 'consump'] exog = sm.add_constant(data.ix['1959':, 'm2']) nobs = endog.shape[0] # Fit the model mod = sm.tsa.statespace.SARIMAX(endog.ix[:'1978-01-01'], exog=exog.ix[:'1978-01-01'], order...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.
# Graph fig, ax = plt.subplots(figsize=(9,4)) npre = 4 ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars') # Plot data points data.ix['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed') # Plot predictions predict.predicted_mean.ix['1977-07-01':].plot(ax=ax, style='r--', la...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.
# Prediction error # Graph fig, ax = plt.subplots(figsize=(9,4)) npre = 4 ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual') # In-sample one-step-ahead predictions and 95% confidence intervals predict_error = predict.predicted_mean - endog predict_error.ix['1977-10-01':].plot(ax=ax, label='One-...
examples/notebooks/statespace_sarimax_stata.ipynb
wzbozon/statsmodels
bsd-3-clause
The following function will not work with the original VM for the Short course. To use it, install the s3fs package (via conda or pip). It provides a filesystem-style interface to AWS S3 buckets, similar to unix/ftp command-line tools.
def open_nexrad_file(filename, io='radx'): """ Open file using pyart nexrad archive method. Parameters ---------- filename: str Radar filename to open. io: str Py-ART open method. If radx then file is opened via Radx otherwise via native Py-ART function. Using Ra...
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
Py-ART Colormaps Retrieve the names of colormaps and the colormap list dictionary. The colormaps are registered with matplotlib and can be accessed by inserting 'pyart_' in front of any name.
cm_names = pyart.graph.cm._cmapnames cms = pyart.graph.cm.cmap_d nrows = len(cm_names) gradient = np.linspace(0, 1, 256) gradient = np.vstack((gradient, gradient)) # Create a figure and axes instance fig, axes = plt.subplots(nrows=nrows, figsize=(5,10)) fig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)...
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
The RadarDisplay This is the most commonly used class, designed for surface-based scanning radar. Plot a NEXRAD file
nexf = "data/KILN20140429_231254_V06" nexr = pyart.io.read(nexf) nexd = pyart.graph.RadarDisplay(nexr) nexr.fields.keys() fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(16, 12)) nexd.plot('reflectivity', sweep=1, cmap='pyart_NWSRef', vmin=0., vmax=55., mask_outside=False, ax=ax[0, 0]) nexd.plot_range_rings([50, 10...
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
There are many keyword values we can employ to refine the plot. Keywords exist for the title, labels, and colorbar, along with others. In addition, there are many methods that can be employed. For example, pull out a constructed RHI at a given azimuth.
nexd.plot_azimuth_to_rhi('reflectivity', 305., cmap='pyart_NWSRef', vmin=0., vmax=55.) nexd.set_limits((0., 150.), (0., 15.))
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
Py-ART RHI Not only can we construct an RHI from a PPI volume, but RHI scans may be plotted as well.
rhif = "data/noxp_rhi_140610232635.RAWHJFH" rhir = pyart.io.read(rhif) rhid = pyart.graph.RadarDisplay(rhir) rhid.plot_rhi('reflectivity', 0, vmin=-5.0, vmax=70,) rhid.set_limits(xlim=(0, 50), ylim=(0, 15))
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
Py-ART RadarMapDisplay or RadarMapDisplayCartopy This converts the x-y coordinates to latitude and longitude, overplotting on a map. Let us see which version we have. The first method works on Py-ART, which uses a standard version definition. For other packages that may not, the second method should work.
pyart_ver = pyart.__version__ import pkg_resources pyart_ver2 = pkg_resources.get_distribution("arm_pyart").version if int(pyart_ver.split('.')[1]) == 8: print("8") nexmap = pyart.graph.RadarMapDisplayCartopy(nexr) else: print("7") nexmap = pyart.graph.RadarMapDisplay(nexr) limits = [-87., -82., 37.,...
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
Use what you have learned! Using all that you have learned, make a two-panel plot of reflectivity and Doppler velocity using the data file from an RHI of NOXP data/noxp_rhi_140610232635.RAWHJFH. Use Cartopy to overlay the plots on a map of Australia and play around with differing colormaps and axes limits! Solution T...
# %load solution.py
5b_PyART_visualization.ipynb
gamaanderson/2017-AMS-Short-Course-on-Open-Source-Radar-Software
bsd-2-clause
Author: Thomas Cokelaer Jan 2018 Local execution time: about 10 minutes In this notebook, we will simulate fastq reads and inject CNVs. We will then look at the sensitivity (the proportion of true positives among all actual positives) of sequana_coverage. We use the data and strategy described in section 3.2 of "CNOGpro: det...
!sequana_coverage --download-reference FN433596
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
Simulated FastQ data Installation: conda install art Simulation of data coverage 100X -l: length of the reads -f: coverage -m: mean size of fragments -s: standard deviation of fragment size -ss: type of HiSeq This takes a few minutes to run
! art_illumina -sam -i FN433596.fa -p -l 100 -ss HS20 -m 500 -s 40 -o paired_dat -f 100
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
Creating the BAM (mapping) and BED files
# no need for the *aln and *sam, let us remove them to save space !rm -f paired*.aln paired_dat.sam !sequana_mapping --reference FN433596.fa --file1 paired_dat1.fq --file2 paired_dat2.fq 1>out 2>err
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
This uses bwa and samtools behind the scenes. Then, we will convert the resulting BAM file (FN433596.fa.sorted.bam) into a BED file once and for all. To do so, we use bioconvert (http://bioconvert.readthedocs.io), which uses bedtools behind the scenes:
# bioconvert FN433596.fa.sorted.bam simulated.bed -f # or use e.g. bedtools: !bedtools genomecov -d -ibam FN433596.fa.sorted.bam > simulated.bed
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
sequana_coverage We execute sequana_coverage to find the ROIs (regions of interest). We should see a few detections (this depends on the threshold and the length of the genome, of course). Later, we will inject events as long as 8000 bases. So, we should use at least 16000 bases for the window parameter length. As shown in the window...
!sequana_coverage --input simulated.bed --reference FN433596.fa -w 20001 -o --level WARNING -C .5 !cp report/*/*/rois.csv rois_noise_20001.csv # An instance of coverage signal (yours may be slightly different) from IPython.display import Image Image("coverage.png")
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
The false positives
%pylab inline # Here is a convenient function to plot the ROIs in terms of sizes # and max zscore def plot_results(file_roi, choice="max"): import pandas as pd roi = pd.read_csv(file_roi) #"rois_cnv_deletion.csv") roi = roi.query("start>100 and end<3043210") plot(roi["size"], roi["{}_zscore".format(ch...
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
Most of the detected events have a zscore close to the chosen thresholds (-4 and 4). Moreover, most events have a size below 50. So for the detection of CNVs with size above, let us say, 2000, there are no false positives (FP = 0). More simulations would be required to get a more precise idea of the FP for short CNVs but the...
import random import pandas as pd def create_deletion(): df = pd.read_csv("simulated.bed", sep="\t", header=None) positions = [] sizes = [] for i in range(80): # the + and -4000 shift are there to guarantee the next # CNV does not overlap with the previous one since # CNV length...
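The idea behind injecting a deletion can be sketched independently of the notebook's create_deletion helper: halve the read depth over a chosen window of a coverage track (the flat 100X track and the window coordinates below are purely illustrative):

```python
import numpy as np

# Synthetic flat 100X coverage track standing in for the simulated.bed depths
coverage = np.full(10_000, 100.0)

# Halve the depth over a window to mimic a copy-number loss
start, size = 2_000, 1_000
coverage[start:start + size] /= 2

assert coverage[start] == 50.0                       # inside the deletion
assert coverage[start - 1] == 100.0                  # just before it
assert coverage[start + size] == 100.0               # just after it
```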
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
Deleted regions are all detected
# call this only once !!!! positions_deletion, sizes_deletion = create_deletion() !sequana_coverage --input cnv_deletion.bed -o -w 20001 --level WARNING !cp report/*/*/rois.csv rois_cnv_deleted.csv rois_deletion = plot_results("rois_cnv_deleted.csv") # as precise as 2 base positions but for safety, we put precision ...
coverage/05-sensitivity/sensitivity.ipynb
sequana/resources
bsd-3-clause
duplicated regions
positions_duplicated, sizes_duplicated = create_duplicated() !sequana_coverage --input cnv_duplicated.bed -o -w 40001 --level ERROR -C .3 --no-html --no-multiqc !cp report/*/*/rois.csv rois_cnv_duplicated_40001.csv rois_duplicated = plot_results("rois_cnv_duplicated_40001.csv", choice="max")
The same results are obtained with W=20000, 40000, 60000 and 100000, but the recovered copy number (CN) is more accurate with larger W.
rois_duplicated = plot_results("rois_cnv_duplicated_20000.csv", choice="max")
Note that you may see events with negative zscore. Those are false detections due to the presence of two CNVs close to each other. This can be avoided by increasing the window size, e.g. to 40000.
check_found(positions_duplicated, sizes_duplicated, rois_duplicated, precision=5)
Mixes of duplicated and deleted regions
positions_mix, sizes_mix = create_cnvs_mixed() !sequana_coverage --input cnv_mixed.bed -o -w 40001 --level ERROR --no-multiqc --no-html --cnv-clustering 1000 !cp report/*/*/rois.csv rois_cnv_mixed.csv Image("coverage_with_cnvs.png") rois_mixed = plot_results("rois_cnv_mixed.csv", choice="max") # note that here we ...
Some events (about 1%) may be labelled as not found, but visual inspection shows that they are actually detected. This is due to a starting position being offset by the noise in the data set, which interferes with the injected CNVs. Conclusions: with simulated data and no CNV injections, sequana coverage detects some events t...
roi = plot_results("rois_noise_20001.csv") What is happening here is that we detect many events close to the threshold. So, for instance, all short events on the left-hand side have a z-score close to 4, which is our threshold. By pure chance, we get longer events of 40 or 50 bp. This is quite surprising and wanted to...
With 50 simulations, we get 826 events (100 are removed because they lie on the edge of the origin of replication), which means about 16 events per simulation. The max length is 90. None of the long events (above 50) appear at the same position (they are distant by at least 500 bases), so long events are genuine false positi...
roi = plot_results("100_simulated_rois.csv", choice="mean") roi = plot_results("100_simulated_rois.csv", choice="max")
Check for missing data
#Checking for missing data NAs = pd.concat([train.isnull().sum(), test.isnull().sum()], axis=1, keys=['Train', 'Test']) NAs[NAs.sum(axis=1) > 0]
Script/SKlearn models.ipynb
maviator/Kaggle_home_price_prediction
mit
Helper functions
# Prints R2 and RMSE scores def get_score(prediction, labels): print('R2: {}'.format(r2_score(prediction, labels))) print('RMSE: {}'.format(np.sqrt(mean_squared_error(prediction, labels)))) # Shows scores for train and validation sets def train_test(estimator, x_trn, x_tst, y_trn, y_tst): predictio...
Removing outliers
sns.lmplot(x='GrLivArea', y='SalePrice', data=train) train = train[train.GrLivArea < 4500] sns.lmplot(x='GrLivArea', y='SalePrice', data=train)
Splitting to features and labels and deleting variables I don't need
# Splitting into features and labels train_labels = train.pop('SalePrice') features = pd.concat([train, test], keys=['train', 'test']) # Deleting features that are more than 50% missing features.drop(['PoolQC', 'MiscFeature', 'FireplaceQu', 'Fence', 'Alley'], axis=1, inplace=True) features.shape
Filling missing values
# MSZoning NA in pred.; filling with the most popular value features['MSZoning'] = features['MSZoning'].fillna(features['MSZoning'].mode()[0]) # LotFrontage NA in all; filling with the mean features['LotFrontage'] = features['LotFrontage'].fillna(features['LotFrontage'].mean()) # MasVnrType NA in all. filling with most p...
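The fill strategies used above (mode for categoricals, mean for numeric columns) can be illustrated on a tiny frame; the column names are borrowed from the cell above and the values are made up:

```python
import pandas as pd

toy = pd.DataFrame({
    "MSZoning": ["RL", "RL", None],     # categorical: fill with the mode
    "LotFrontage": [60.0, None, 80.0],  # numeric: fill with the mean
})
toy["MSZoning"] = toy["MSZoning"].fillna(toy["MSZoning"].mode()[0])
toy["LotFrontage"] = toy["LotFrontage"].fillna(toy["LotFrontage"].mean())
print(toy.isnull().sum().sum())
```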
Log transformation
# Our SalePrice is skewed right (check the plot below), so I'm log-transforming it. ax = sns.distplot(train_labels) ## Log transformation of labels train_labels = np.log(train_labels) ## Now it looks much better ax = sns.distplot(train_labels)
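Since the labels are log-transformed, any model predictions must be mapped back with `np.exp` before building a submission; a minimal round-trip check:

```python
import numpy as np

prices = np.array([100000.0, 250000.0, 755000.0])
log_prices = np.log(prices)      # what the models are trained on
recovered = np.exp(log_prices)   # back-transform for the submission file
print(np.allclose(recovered, prices))
```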
Converting ordered categorical features to numerical Converting categorical variables with choices: Ex, Gd, TA, Fa and Po def cat2numCondition(x): if x == 'Ex': return 5 if x == 'Gd': return 4 if x == 'TA': return 3 if x == 'Fa': return 2 if x == 'Po': retu...
def num2cat(x): return str(x) features['MSSubClass_str'] = features['MSSubClass'].apply(num2cat) features.pop('MSSubClass') features.shape
Converting categorical features to binary
# Getting Dummies from all other categorical vars for col in features.dtypes[features.dtypes == 'object'].index: for_dummy = features.pop(col) features = pd.concat([features, pd.get_dummies(for_dummy, prefix=col)], axis=1) features.shape features.head()
Overfitting columns
#features.drop('MSZoning_C (all)',axis=1)
Splitting train and test features
### Splitting features train_features = features.loc['train'].drop('Id', axis=1).select_dtypes(include=[np.number]).values test_features = features.loc['test'].drop('Id', axis=1).select_dtypes(include=[np.number]).values
Splitting to train and validation sets
### Splitting x_train, x_test, y_train, y_test = train_test_split(train_features, train_labels, test_size=0.1, random_state=200)
Modeling 1. Gradient Boosting Regressor
GBR = GradientBoostingRegressor(n_estimators=12000, learning_rate=0.05, max_depth=3, max_features='sqrt', min_samples_leaf=15, min_samples_split=10, loss='huber') GBR.fit(x_train, y_train) train_test(GBR, x_train, x_test, y_train, y_test) # Average R2 score and standard deviation of 5-fold cr...
2. LASSO regression
lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.0005, random_state=1)) lasso.fit(x_train, y_train) train_test(lasso, x_train, x_test, y_train, y_test) # Average R2 score and standard deviation of 5-fold cross-validation scores = cross_val_score(lasso, train_features, train_labels, cv=5) print("Accuracy: %0.2f (...
3. Elastic Net Regression
ENet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=.9, random_state=3)) ENet.fit(x_train, y_train) train_test(ENet, x_train, x_test, y_train, y_test) # Average R2 score and standard deviation of 5-fold cross-validation scores = cross_val_score(ENet, train_features, train_labels, cv=5) print("Accu...
Averaging models
# Retraining models on all train data GBR.fit(train_features, train_labels) lasso.fit(train_features, train_labels) ENet.fit(train_features, train_labels) def averaginModels(X, train, labels, models=[]): for model in models: model.fit(train, labels) predictions = np.column_stack([ model.predict...
Submission
test_id = test.Id test_submit = pd.DataFrame({'Id': test_id, 'SalePrice': test_y}) test_submit.shape test_submit.head() test_submit.to_csv('house_price_pred_avg_gbr_lasso_enet.csv', index=False)
Testing the median filter with a fixed window length of 15.
median = plt.figure(figsize=(30,20)) for x in range(1, 5): for y in range(1, 6): plt.subplot(5, 5, x + (y-1)*4) wavenum = (x-1) + (y-1)*4 functions.medianSinPlot(wavenum, 15) plt.suptitle('Median filtered Sine Waves with window length 15', fontsize = 60) plt.xlabel(("Wave num...
MedianFilter/Python/01. Basic Tests Median Filter/basic median filter with window length 16.ipynb
ktakagaki/kt-2015-DSPHandsOn
gpl-2.0
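The `functions.medianSinPlot` helper is not shown here; a minimal pure-numpy sliding median (edge-padded, odd window) conveys the same operation on the sine wave:

```python
import numpy as np

def median_filter(x, window):
    """Sliding-window median with edge padding; window should be odd."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

t = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(10 * t)               # wave number 10, where the filter fails
filtered = median_filter(signal, 15)  # window length 15, as in the plots
print(filtered.shape)
```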
Summary: with higher wave numbers (n=10), the filter makes the signal even worse, with a phase (amplitude) reversal! There is a bit of aliasing; the result would benefit from more sample points. Graphic Export
pp=PdfPages( 'median sin window length 15.pdf' ) pp.savefig( median ) pp.close()
Compressed ACL Now, assume that we want to compress this ACL to make it more manageable. We do the following operations: Merge the two BFD permit statements on lines 20-30 into one statement using the range directive. Remove the BGP session on line 80 because it has been decommissioned. Remove lines 180 and 250 because...
compressed_acl = """ ip access-list acl 10 deny icmp any any redirect 20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785 ! 30 MERGED WITH LINE ABOVE 40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp 50 permit tcp 11.36.216.176/32 11.36.216.179/32 eq bgp 60 permit tcp 204.1...
jupyter_notebooks/Safely refactoring ACLs and firewall rules.ipynb
batfish/pybatfish
apache-2.0
The challenge for us is to find out if and how this compressed ACL differs from the original. That is, is there traffic that is treated differently by the two ACLs, and if so, which lines are responsible for the difference? This task is difficult to get right through manual reasoning alone, which is why we developed...
# Import packages %run startup.py bf = Session(host="localhost") # Initialize a snapshot with the original ACL original_snapshot = bf.init_snapshot_from_text( original_acl, platform="cisco-nx", snapshot_name="original", overwrite=True) # Initialize a snapshot with the compressed ACL compressed_sna...
The compareFilters question compares two filters and returns pairs of lines, one from each filter, that match the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical. The analysis is exhaustive and considers all possible flows. As we can see from the output above...
smaller_acls = """ ip access-list deny-icmp-redirect 10 deny icmp any any redirect ip access-list permit-bfd 20 permit udp 117.186.185.0/24 range 49152 65535 117.186.185.0/24 range 3784 3785 ip access-list permit-bgp-session 40 permit tcp 11.36.216.170/32 11.36.216.169/32 eq bgp 50 permit tcp 11.36.216.176/32...
Given the split ACLs above, one analysis is to figure out whether each untrusted source subnet is covered by one of the smaller ACLs. Otherwise, we have lost protection that was present in the original ACL. We can accomplish this analysis via the findMatchingFilterLines question, as shown below. Once we are satisfied with anal...
# Initialize a snapshot with the smaller ACLs smaller_snapshot = bf.init_snapshot_from_text( smaller_acls, platform="cisco-nx", snapshot_name="smaller", overwrite=True) # All untrusted subnets untrusted_source_subnets = ["54.0.0.0/8", "163.157.0.0/16", ...
<h2> Create ML dataset by sampling using BigQuery </h2> <p> Sample the BigQuery table publicdata.samples.natality to create a smaller dataset of approximately 10,000 training and 3,000 evaluation records. Restrict your samples to data after the year 2000. </p>
# TODO
quests/endtoendml/labs/2_sample.ipynb
turbomanage/training-data-analyst
apache-2.0
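One possible sketch of this TODO (the lab leaves it as an exercise): build repeatable train/eval queries by hashing year and month with `FARM_FINGERPRINT`, then thin each split with `RAND()`. Only the query strings are constructed here; running them against BigQuery (e.g. with `google.cloud.bigquery`) is omitted, and the `RAND()` fraction is a guess to be tuned toward roughly 10,000/3,000 rows:

```python
# Sketch only: builds the query strings, does not run them against BigQuery.
# FARM_FINGERPRINT of year+month gives a repeatable 3:1 train/eval split.
base = """
SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks,
  FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING))) AS hashmonth
FROM publicdata.samples.natality
WHERE year > 2000
"""
train_query = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3 AND RAND() < 0.0005".format(base)
eval_query = "SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3 AND RAND() < 0.0005".format(base)
print("year > 2000" in train_query, "year > 2000" in eval_query)
```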
Preprocess data using Pandas Carry out the following preprocessing operations: Add extra rows to simulate the lack of ultrasound. Change the plurality column to be one of the following strings: <pre> ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'] </pre> Remove rows where any of the imp...
## TODO
<h2> Write out </h2> <p> In the final versions, we want to read from files, not Pandas dataframes. So, write the Pandas dataframes out as CSV files. Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shufflin...
traindf.to_csv('train.csv', index=False, header=False) evaldf.to_csv('eval.csv', index=False, header=False) %%bash wc -l *.csv head *.csv tail *.csv
If we call the function once...
funcion()
More/DefaultParametersInPython_ES.ipynb
aaossa/Dear-Notebooks
gpl-3.0
... everything works as we expect, but what if we try again...
funcion() funcion()
... ok? It doesn't work as we would expect. We can also extend this to classes, where it is common to use default parameters:
class Clase: def __init__(self, lista=[]): self.lista = lista self.lista.append(1) print("Class list: {}".format(self.lista)) # We instantiate two objects A = Clase() B = Clase() # We modify the parameter in one of them A.lista.append(5) # What?? print(A.lista) print(B.lista)
Investigating our code Let's take a look at what is happening in our code:
# Let's instantiate some objects A = Clase() B = Clase() C = Clase(lista=["GG"]) # We will use this instance as a control print("\nThe objects are different!") print("id(A): {} \nid(B): {} \nid(C): {}".format(id(A), id(B), id(C))) print("\nBut the list is the same for A and for B :O") print("id(A.lista): {} \nid(B.lista...
What is going on? D: In Python, functions are callable objects, that is, they can be called and execute an operation.
# In fact, they have attributes... def funcion(lista=[]): lista.append(5) # In the function "funcion"... print("{}".format(funcion.__defaults__)) # ... if we invoke it... funcion() # now we have... print("{}".format(funcion.__defaults__)) # If we look at how the "__init__" method of class Clase ended up... print(...
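A self-contained version of the experiment above, showing that the default object stored in `__defaults__` is itself mutated by each call:

```python
def funcion(lista=[]):
    lista.append(5)
    return lista

print(funcion.__defaults__)   # ([],) before any call
funcion()
funcion()
print(funcion.__defaults__)   # the shared default list has grown
```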
The code that defines the function is evaluated once, and that evaluated value is the one used on every subsequent call. Therefore, modifying the value of a default parameter that is mutable (list, dict, etc.) modifies the default value for the next call. How do we avoid this? A simple solution is to us...
class Clase: def __init__(self, lista=None): # "One-liner" version: self.lista = lista if lista is not None else list() # In its extended version: if lista is not None: self.lista = lista else: self.lista = list()
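We can check that the `None`-default version actually fixes the sharing problem (the class is redefined here so the snippet stands alone):

```python
class Clase:
    def __init__(self, lista=None):
        # Each instance now gets its own fresh list
        self.lista = lista if lista is not None else list()

A = Clase()
B = Clase()
A.lista.append(5)   # only A's list changes
print(A.lista, B.lista)
```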
Pick one of these feature sets to explore with the models below
# Look only at train IDs features = df.columns.values X = train_id_dummies y = df['ord_del'] # Non Delay Specific features = df.columns.values target_cols = ['temp','precipiation', 'visability','windspeed','humidity','cloudcover', 'is_bullet','is_limited','t_northbound', 'd_monday','d_tuesday','...
06initial_analysis.ipynb
readywater/caltrain-predict
mit
Run Decision Trees, Prune, and consider False Positives
from sklearn.tree import DecisionTreeClassifier TreeClass = DecisionTreeClassifier( max_depth = 2, min_samples_leaf = 5) TreeClass.fit(X,y) from sklearn.cross_validation import cross_val_score scores = cross_val_score(TreeClass, X, y, cv=10) print(scores.mean()) # Score = More is better...
As a check, consider Feature selection
from sklearn import feature_selection pvals = feature_selection.f_regression(X,y)[1] sorted(zip(X.columns.values,np.round(pvals,4)),key=lambda x:x[1],reverse=True) X_lr=df[['windspeed','t_northbound','precipiation','d_friday']] # localize your search around the maximum value you found c_list = np.logspace(-1,1,21) c...
Find the Principal Components
X = only_delay[['temp','precipiation', 'visability','windspeed','humidity','cloudcover', 'is_bullet','is_limited','t_northbound', 'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']] from sklearn.decomposition import PCA clf = PCA(.99) X_trans = clf.fit_transform(X) X_tran...
Seeing if I can get anything interesting out of KNN given the above. Following Lecture 10, look at the confusion matrix and ROC curve; fiddle with the thresholds and AUC.
print df['windspeed'].max() print df['windspeed'].min() df['windspeed_st'] = df['windspeed'].apply(lambda x:x/15.0) # Ballparking X_reg = df[['precipiation','d_friday','t_northbound','windspeed_st']] y_reg = df['is_delay'] from sklearn import cross_validation from sklearn import neighbors, metrics kf = cross_valid...
Cross Validation and Random Forest
from sklearn.ensemble import RandomForestClassifier from sklearn.cross_validation import cross_val_score RFClass = RandomForestClassifier(n_estimators = 10000, max_features = 4, # You can set it to a number or 'sqrt', 'log2', etc min_samples_leaf = 5, ...