timothydmorton/isochrones
notebooks/batch-demo.ipynb
mit
import numpy as np from isochrones import get_ichrone bands = ['J', 'H', 'K', 'G', 'BP', 'RP'] mist = get_ichrone('mist', bands=bands) from itertools import product primary_masses = [0.8, 1.0] mass_ratios = [0.5, 0.9] feh_grid = [-0.25, 0.0] age = 9.7 distance = 500 AV = 0. m1, m2, feh, name = zip(*[(m, q*m, f, f'{m:.2f}_{q*m:0.2f}_{f:0.2f}') for m, q, f in product(primary_masses, mass_ratios, feh_grid)]) df = mist.generate_binary(m1, m2, age, feh, distance=distance, AV=AV, accurate=True) # add uncertainties for each band uncs = {'J': 0.02, 'H': 0.02, 'K':0.02, 'G': 0.002, 'BP': 0.002, 'RP':0.002} for b in bands: df[f'{b}_mag_unc'] = uncs[b] # Add parallax & uncertainty df['parallax'] = 1000/distance df['parallax_unc'] = 0.02 df.index = name """ Explanation: Simulate/fit/analyze in batch mode Generate observed binary properties End of explanation """ from isochrones.catalog import StarCatalog from isochrones.priors import FlatPrior catalog = StarCatalog(df, bands=bands, props=['parallax']) catalog.set_prior(AV=FlatPrior((0, 0.0001)), age=FlatPrior((8.5, 10))) """ Explanation: Use a StarCatalog to organize data End of explanation """ from multiprocessing import Pool def fit_model(mod): print(mod.mnest_basename) mod.fit(verbose=True) return mod.derived_samples pool = Pool(processes=8) # e.g. samples = pool.map(fit_model, catalog.iter_models(N=2)) """ Explanation: Fit models Here is a snippet to fit all the models (using the convenient .iter_models() API); this could easily be adapted into a cluster script (e.g., using schwimmbad) to run thousands of fits. 
End of explanation """ cols = ['mass_0', 'mass_1', 'age', 'feh', 'AV'] qs = [0.05, 0.16, 0.5, 0.84, 0.95] for name, samps in zip(catalog.df.index, samples): print(name) print(samps[cols].quantile(qs)) from corner import corner corner(samples[-1][['mass_0', 'mass_1', 'age', 'feh', 'distance']]); corner(samples[-1][['J_mag', 'K_mag', 'G_mag', 'BP_mag', 'RP_mag']]); """ Explanation: Analyze samples A StarCatalog could probably have some convenience methods for some of this stuff, but this works for now. End of explanation """
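The per-star quantile printout above can also be collected into a single tidy summary table with pandas. A minimal sketch, using synthetic posterior samples in place of the real fit results (the star names and values below are illustrative stand-ins):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the fit results: one DataFrame of posterior
# samples per star, with the same columns summarized above.
rng = np.random.default_rng(0)
cols = ['mass_0', 'mass_1', 'age', 'feh', 'AV']
samples = [pd.DataFrame(rng.normal(size=(1000, len(cols))), columns=cols)
           for _ in range(2)]
names = ['0.80_0.40_-0.25', '0.80_0.72_-0.25']  # illustrative index values

qs = [0.05, 0.16, 0.5, 0.84, 0.95]

# Stack the per-star quantile tables into one frame indexed by
# (star, quantile), which is easier to inspect or save than printouts.
summary = pd.concat(
    {name: samps[cols].quantile(qs) for name, samps in zip(names, samples)},
    names=['star', 'quantile'])
print(summary.loc['0.80_0.40_-0.25'])
```

With the real fit, `names` would be `catalog.df.index` and `samples` the list of `derived_samples` returned by the pool.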
phoebe-project/phoebe2-docs
development/tutorials/distributions.ipynb
gpl-3.0
#!pip install -I "phoebe>=2.4,<2.5" import phoebe logger = phoebe.logger() b = phoebe.default_binary() b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101)) """ Explanation: Distributions Distributions are mostly useful when using samplers (which we'll see in the next tutorial on solving the inverse problem) - but can also be useful to propagate any set of distributions (whether those be uncertainties in the literature, etc) through the forward model. Setup Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab). End of explanation """ b.get_adjustable_parameters() """ Explanation: Adding Distributions Distributions can be attached to most any FloatParameter in the Bundle. To see a list of these available parameters, we can call b.get_adjustable_parameters. Note the exclude_constrained option which defaults to True: we can set distributions on constrained parameters (for priors, for example), but those will not be able to be sampled from in the forward model or while fitting. We'll come back to this in the next tutorial when looking at priors. End of explanation """ b.add_distribution(qualifier='teff', component='primary', value=phoebe.gaussian(6000,100), distribution='mydist') """ Explanation: add_distribution is quite flexible and accepts several different syntaxes to add multiple distributions in one line. Here we'll just attach a distribution to a single parameter at a time. Just like when calling add_dataset or add_compute, add_distribution optionally takes a distribution tag -- and in the cases of distributions, we can attach distributions to multiple parameters with the same distribution tag. 
The values of the DistributionParameters are distl distribution objects -- the most common of which are conveniently available at the top-level of PHOEBE: phoebe.gaussian phoebe.gaussian_around phoebe.uniform phoebe.uniform_around For an overview of the different available types as they apply in PHOEBE, see Advanced: Distribution Types. Now let's attach a gaussian distribution to the temperature of the primary star. End of explanation """ print(b.get_distribution(distribution='mydist')) """ Explanation: As you can probably expect by now, we also have methods to: get_distribution rename_distribution remove_distribution End of explanation """ b.add_distribution(qualifier='incl', component='binary', value=phoebe.uniform(80,90), distribution='mydist') print(b.get_distribution(distribution='mydist')) """ Explanation: Now let's add another distribution, with the same distribution tag, to the inclination of the binary. End of explanation """ print(b.filter(context='distribution')) print(b.get_parameter(context='distribution', qualifier='incl')) print(b.get_parameter(context='distribution', qualifier='incl').tags) """ Explanation: Accessing & Plotting Distributions The parameters we've created and attached are DistributionParameters and live in context='distribution', with all other tags matching the parameter they're referencing. For example, let's filter and look at the distributions we've added. End of explanation """ b.get_value(context='distribution', qualifier='incl') """ Explanation: The "value" of the parameter is the distl distribution object itself. End of explanation """ _ = b.get_value(context='distribution', qualifier='incl').plot(show=True) """ Explanation: And because of that, we can call any method on the distl object, including plotting the distribution.
End of explanation """ _ = b.plot_distribution_collection(distribution='mydist', show=True) """ Explanation: If we want to see how multiple individual distributions interact and are correlated with each other via a corner plot, we can access the combined "distribution collection" from any number of distribution tags. This may not be terribly useful now, but is very useful when trying to create multivariate priors. b.get_distribution_collection b.plot_distribution_collection End of explanation """ b.sample_distribution_collection(distribution='mydist') """ Explanation: Sampling Distributions We can also sample from these distributions - either manually by calling sample on the distl or in bulk by respecting any covariances in the "distribution collection" via: b.sample_distribution_collection End of explanation """ changed_params = b.sample_distribution_collection(distribution='mydist', set_value=True) print(changed_params) """ Explanation: By default this just returns a dictionary with the twigs and sampled values. But if we wanted, we could have these applied immediately to the face-values by passing set_value=True, in which case a ParameterSet of changed parameters (including those via constraints) is returned instead. End of explanation """ print(b.get_parameter(qualifier='sample_from', context='compute')) """ Explanation: Propagating Distributions through Forward Model Lastly, we can have PHOEBE automatically draw from a "distribution collection" multiple times and expose the distribution of the model itself.
End of explanation """ b.set_value('sample_from', value='mydist') print(b.filter(qualifier='sample*')) """ Explanation: Once sample_from is set, sample_num and sample_mode are exposed as visible parameters. End of explanation """ b.run_compute(irrad_method='none') _ = b.plot(show=True) """ Explanation: Now when we call run_compute, 10 different instances of the forward model will be computed from 10 random draws from the "distribution collection", but only the median and 1-sigma uncertainties will be exposed in the model. End of explanation """
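The median and 1-sigma summary that run_compute exposes corresponds to taking per-time percentiles over the sampled forward models. A minimal numpy sketch of that reduction, with synthetic "light curves" standing in for the real model output (the array shapes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the sampled forward models: each row is one model light
# curve computed from one random draw of the distribution collection.
n_draws, n_times = 10, 101
fluxes = 1.0 + 0.01 * rng.normal(size=(n_draws, n_times))

# Median and 1-sigma envelope per time point (16th/50th/84th percentiles).
lower, median, upper = np.percentile(fluxes, [16, 50, 84], axis=0)

assert lower.shape == median.shape == upper.shape == (n_times,)
assert np.all((lower <= median) & (median <= upper))
```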
tensorflow/neural-structured-learning
g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2019 The TensorFlow Neural Structured Learning Authors End of explanation """ !pip install --quiet neural-structured-learning """ Explanation: Adversarial regularization for image classification <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview In this tutorial, we will explore the use of adversarial learning (Goodfellow et al., 2014) for image classification using the Neural Structured Learning (NSL) framework. 
The core idea of adversarial learning is to train a model with adversarially-perturbed data (called adversarial examples) in addition to the organic training data. To the human eye, these adversarial examples look the same as the originals, but the perturbation is constructed to intentionally mislead the model into making incorrect predictions or classifications. By training with such examples, the model learns to be robust against adversarial perturbation when making predictions. In this tutorial, we illustrate the following procedure of applying adversarial learning to obtain robust models using the Neural Structured Learning framework: Create a neural network as a base model. In this tutorial, the base model is created with the tf.keras functional API; this procedure is compatible with models created by tf.keras sequential and subclassing APIs as well. For more information on Keras models in TensorFlow, see this documentation. Wrap the base model with the AdversarialRegularization wrapper class, which is provided by the NSL framework, to create a new tf.keras.Model instance. This new model will include the adversarial loss as a regularization term in its training objective. Convert examples in the training data to feature dictionaries. Train and evaluate the new model. Recap for Beginners There is a corresponding video explanation on adversarial learning for image classification as part of the TensorFlow Neural Structured Learning YouTube series. Below, we have summarized the key concepts explained in this video, expanding on the explanation provided in the Overview section above. The NSL framework jointly optimizes both image features and structured signals to help neural networks better learn. However, what if there is no explicit structure available to train the neural network?
This tutorial explains one approach involving the creation of adversarial neighbors (modified from the original sample) to dynamically construct a structure. Firstly, adversarial neighbors are defined as modified versions of the sample image with small perturbations applied that mislead a neural net into outputting inaccurate classifications. These carefully designed perturbations are typically based on the reverse gradient direction and are meant to confuse the neural net during training. Humans may not be able to tell the difference between a sample image and its generated adversarial neighbor. However, to the neural net, the applied perturbations are effective at leading to an inaccurate conclusion. Generated adversarial neighbors are then connected to the sample, therefore dynamically constructing a structure edge by edge. Using this connection, neural nets learn to maintain the similarities between the sample and the adversarial neighbors while avoiding confusion resulting from misclassifications, thus improving the overall neural network's quality and accuracy. The code segment below is a high-level explanation of the steps involved while the rest of this tutorial goes into further depth and technicality. Read and prepare the data. Load the MNIST dataset and normalize the feature values to stay in the range [0,1] ``` import tensorflow as tf import neural_structured_learning as nsl (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 ``` Build the neural network. A Sequential Keras base model is used for this example. model = tf.keras.Sequential(...) Configure the adversarial model, including the hyperparameters: the multiplier applied to the adversarial regularization and empirically chosen values for the step size/learning rate. Invoke adversarial regularization with a wrapper class around the constructed neural network.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05) adv_model = nsl.keras.AdversarialRegularization(model, adv_config) Conclude with the standard Keras workflow: compile, fit, evaluate. adv_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) adv_model.fit({'feature': x_train, 'label': y_train}, epochs=5) adv_model.evaluate({'feature': x_test, 'label': y_test}) What you see here is adversarial learning enabled in 2 steps and 3 simple lines of code. This is the simplicity of the neural structured learning framework. In the following sections, we expand upon this procedure. Setup Install the Neural Structured Learning package. End of explanation """ import matplotlib.pyplot as plt import neural_structured_learning as nsl import numpy as np import tensorflow as tf import tensorflow_datasets as tfds """ Explanation: Import libraries. We abbreviate neural_structured_learning to nsl. End of explanation """ class HParams(object): def __init__(self): self.input_shape = [28, 28, 1] self.num_classes = 10 self.conv_filters = [32, 64, 64] self.kernel_size = (3, 3) self.pool_size = (2, 2) self.num_fc_units = [64] self.batch_size = 32 self.epochs = 5 self.adv_multiplier = 0.2 self.adv_step_size = 0.2 self.adv_grad_norm = 'infinity' HPARAMS = HParams() """ Explanation: Hyperparameters We collect and explain the hyperparameters (in an HParams object) for model training and evaluation. Input/Output: input_shape: The shape of the input tensor. Each image is 28-by-28 pixels with 1 channel. num_classes: There are a total of 10 classes, corresponding to 10 digits [0-9]. Model architecture: conv_filters: A list of numbers, each specifying the number of filters in a convolutional layer. kernel_size: The size of the 2D convolution window, shared by all convolutional layers. pool_size: Factors to downscale the image in each max-pooling layer. num_fc_units: The number of units (i.e., width) of each fully-connected layer.
Training and evaluation: batch_size: Batch size used for training and evaluation. epochs: The number of training epochs. Adversarial learning: adv_multiplier: The weight of adversarial loss in the training objective, relative to the labeled loss. adv_step_size: The magnitude of adversarial perturbation. adv_grad_norm: The norm to measure the magnitude of adversarial perturbation. End of explanation """ datasets = tfds.load('mnist') train_dataset = datasets['train'] test_dataset = datasets['test'] IMAGE_INPUT_NAME = 'image' LABEL_INPUT_NAME = 'label' """ Explanation: MNIST dataset The MNIST dataset contains grayscale images of handwritten digits (from '0' to '9'). Each image shows one digit at low resolution (28-by-28 pixels). The task involved is to classify images into 10 categories, one per digit. Here we load the MNIST dataset from TensorFlow Datasets. It handles downloading the data and constructing a tf.data.Dataset. The loaded dataset has two subsets: train with 60,000 examples, and test with 10,000 examples. Examples in both subsets are stored in feature dictionaries with the following two keys: image: Array of pixel values, ranging from 0 to 255. label: Groundtruth label, ranging from 0 to 9. End of explanation """ def normalize(features): features[IMAGE_INPUT_NAME] = tf.cast( features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0 return features def convert_to_tuples(features): return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME] def convert_to_dictionaries(image, label): return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label} train_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples) test_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples) """ Explanation: To make the model numerically stable, we normalize the pixel values to [0, 1] by mapping the dataset over the normalize function. 
After shuffling the training set and batching, we convert the examples to feature tuples (image, label) for training the base model. We also provide a function to convert from tuples to dictionaries for later use. End of explanation """ def build_base_model(hparams): """Builds a model according to the architecture defined in `hparams`.""" inputs = tf.keras.Input( shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME) x = inputs for i, num_filters in enumerate(hparams.conv_filters): x = tf.keras.layers.Conv2D( num_filters, hparams.kernel_size, activation='relu')( x) if i < len(hparams.conv_filters) - 1: # max pooling between convolutional layers x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x) x = tf.keras.layers.Flatten()(x) for num_units in hparams.num_fc_units: x = tf.keras.layers.Dense(num_units, activation='relu')(x) pred = tf.keras.layers.Dense(hparams.num_classes)(x) model = tf.keras.Model(inputs=inputs, outputs=pred) return model base_model = build_base_model(HPARAMS) base_model.summary() """ Explanation: Base model Our base model will be a neural network consisting of 3 convolutional layers followed by 2 fully-connected layers (as defined in HPARAMS). Here we define it using the Keras functional API. Feel free to try other APIs or model architectures (e.g. subclassing). Note that the NSL framework does support all three types of Keras APIs. End of explanation """ base_model.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['acc']) base_model.fit(train_dataset, epochs=HPARAMS.epochs) results = base_model.evaluate(test_dataset) named_results = dict(zip(base_model.metrics_names, results)) print('\naccuracy:', named_results['acc']) """ Explanation: Next we train and evaluate the base model.
End of explanation """ adv_config = nsl.configs.make_adv_reg_config( multiplier=HPARAMS.adv_multiplier, adv_step_size=HPARAMS.adv_step_size, adv_grad_norm=HPARAMS.adv_grad_norm ) """ Explanation: We can see that the base model achieves 99% accuracy on the test set. We will see how robust it is in Robustness Under Adversarial Perturbations below. Adversarial-regularized model Here we show how to incorporate adversarial training into a Keras model with a few lines of code, using the NSL framework. The base model is wrapped to create a new tf.keras.Model, whose training objective includes adversarial regularization. First, we create a config object with all relevant hyperparameters using the helper function nsl.configs.make_adv_reg_config. End of explanation """ base_adv_model = build_base_model(HPARAMS) adv_model = nsl.keras.AdversarialRegularization( base_adv_model, label_keys=[LABEL_INPUT_NAME], adv_config=adv_config ) train_set_for_adv_model = train_dataset.map(convert_to_dictionaries) test_set_for_adv_model = test_dataset.map(convert_to_dictionaries) """ Explanation: Now we can wrap a base model with AdversarialRegularization. Here we create a new base model (base_adv_model), so that the existing one (base_model) can be used in later comparison. The returned adv_model is a tf.keras.Model object, whose training objective includes a regularization term for the adversarial loss. To compute that loss, the model has to have access to the label information (feature label), in addition to regular input (feature image). For this reason, we convert the examples in the datasets from tuples back to dictionaries. And we tell the model which feature contains the label information via the label_keys parameter.
End of explanation """ adv_model.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['acc']) adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs) results = adv_model.evaluate(test_set_for_adv_model) named_results = dict(zip(adv_model.metrics_names, results)) print('\naccuracy:', named_results['sparse_categorical_accuracy']) """ Explanation: Next we compile, train, and evaluate the adversarial-regularized model. There might be warnings like "Output missing from loss dictionary," which is fine because the adv_model doesn't rely on the base implementation to calculate the total loss. End of explanation """ reference_model = nsl.keras.AdversarialRegularization( base_model, label_keys=[LABEL_INPUT_NAME], adv_config=adv_config) reference_model.compile( optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['acc']) """ Explanation: We can see that the adversarial-regularized model also performs very well (99% accuracy) on the test set. Robustness under Adversarial perturbations Now we compare the base model and the adversarial-regularized model for robustness under adversarial perturbation. We will use the AdversarialRegularization.perturb_on_batch function for generating adversarially perturbed examples. We would like the generation to be based on the base model. To do so, we wrap the base model with AdversarialRegularization. Note that as long as we don't invoke training (Model.fit), the learned variables in the model won't change and the model is still the same one as in section Base Model. End of explanation """ models_to_eval = { 'base': base_model, 'adv-regularized': adv_model.base_model } metrics = { name: tf.keras.metrics.SparseCategoricalAccuracy() for name in models_to_eval.keys() } """ Explanation: We collect in a dictionary the models to be evaluated, and also create a metric object for each of the models.
Note that we take adv_model.base_model in order to have the same input format (not requiring label information) as the base model. The learned variables in adv_model.base_model are the same as those in adv_model. End of explanation """ perturbed_images, labels, predictions = [], [], [] for batch in test_set_for_adv_model: perturbed_batch = reference_model.perturb_on_batch(batch) # Clipping makes perturbed examples have the same range as regular ones. perturbed_batch[IMAGE_INPUT_NAME] = tf.clip_by_value( perturbed_batch[IMAGE_INPUT_NAME], 0.0, 1.0) y_true = perturbed_batch.pop(LABEL_INPUT_NAME) perturbed_images.append(perturbed_batch[IMAGE_INPUT_NAME].numpy()) labels.append(y_true.numpy()) predictions.append({}) for name, model in models_to_eval.items(): y_pred = model(perturbed_batch) metrics[name](y_true, y_pred) predictions[-1][name] = tf.argmax(y_pred, axis=-1).numpy() for name, metric in metrics.items(): print('%s model accuracy: %f' % (name, metric.result().numpy())) """ Explanation: Here is the loop to generate perturbed examples and to evaluate models with them. We save the perturbed images, labels, and predictions for visualization in the next section. 
End of explanation """ batch_index = 0 batch_image = perturbed_images[batch_index] batch_label = labels[batch_index] batch_pred = predictions[batch_index] batch_size = HPARAMS.batch_size n_col = 4 n_row = (batch_size + n_col - 1) // n_col print('accuracy in batch %d:' % batch_index) for name, pred in batch_pred.items(): print('%s model: %d / %d' % (name, np.sum(batch_label == pred), batch_size)) plt.figure(figsize=(15, 15)) for i, (image, y) in enumerate(zip(batch_image, batch_label)): y_base = batch_pred['base'][i] y_adv = batch_pred['adv-regularized'][i] plt.subplot(n_row, n_col, i+1) plt.title('true: %d, base: %d, adv: %d' % (y, y_base, y_adv)) plt.imshow(tf.keras.utils.array_to_img(image), cmap='gray') plt.axis('off') plt.show() """ Explanation: We can see that the accuracy of the base model drops dramatically (from 99% to about 50%) when the input is perturbed adversarially. On the other hand, the accuracy of the adversarial-regularized model only degrades a little (from 99% to 95%). This demonstrates the effectiveness of adversarial learning on improving model's robustness. Examples of adversarially-perturbed images Here we take a look at the adversarially-perturbed images. We can see that the perturbed images still show digits recognizable by human, but can successfully fool the base model. End of explanation """
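The perturbations that perturb_on_batch generates are, conceptually, steps along the input gradient of the loss; with adv_grad_norm='infinity' this reduces to a fast-gradient-sign step. A self-contained numpy sketch on a toy quadratic loss (the weights w, targets y, and step size below are illustrative stand-ins, not NSL internals):

```python
import numpy as np

# Toy differentiable "model": loss(x) = 0.5 * ||w * x - y||^2, so the
# input gradient can be written analytically as w * (w * x - y).
w = np.array([1.0, -2.0, 0.5])
y = np.array([0.0, 0.0, 0.0])
x = np.array([0.2, 0.4, -0.6])

def loss(x):
    return 0.5 * np.sum((w * x - y) ** 2)

grad = w * (w * x - y)  # d(loss)/dx

# Infinity-norm perturbation: move each input element by adv_step_size
# in the direction that increases the loss.
adv_step_size = 0.2
x_adv = x + adv_step_size * np.sign(grad)

assert loss(x_adv) > loss(x)  # the perturbed input is harder for the model
```

In the tutorial, the gradient is taken with respect to the image pixels of the wrapped base model, and the result is clipped back to [0, 1] as in the loop above.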
mne-tools/mne-tools.github.io
0.24/_downloads/d12911920e4d160c9fd8c97cffdda6b7/time_frequency_erds.ipynb
bsd-3-clause
# Authors: Clemens Brunner <clemens.brunner@gmail.com> # Felix Klotzsche <klotzsche@cbs.mpg.de> # # License: BSD-3-Clause """ Explanation: Compute and visualize ERDS maps This example calculates and displays ERDS maps of event-related EEG data. ERDS (sometimes also written as ERD/ERS) is short for event-related desynchronization (ERD) and event-related synchronization (ERS) :footcite:PfurtschellerLopesdaSilva1999. Conceptually, ERD corresponds to a decrease in power in a specific frequency band relative to a baseline. Similarly, ERS corresponds to an increase in power. An ERDS map is a time/frequency representation of ERD/ERS over a range of frequencies :footcite:GraimannEtAl2002. ERDS maps are also known as ERSP (event-related spectral perturbation) :footcite:Makeig1993. In this example, we use an EEG BCI data set containing two different motor imagery tasks (imagined hand and feet movement). Our goal is to generate ERDS maps for each of the two tasks. First, we load the data and create epochs of 5s length. The data set contains multiple channels, but we will only consider C3, Cz, and C4. We compute maps containing frequencies ranging from 2 to 35Hz. We map ERD to red color and ERS to blue color, which is customary in many ERDS publications. Finally, we perform cluster-based permutation tests to estimate significant ERDS values (corrected for multiple comparisons within channels). End of explanation """ import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import TwoSlopeNorm import pandas as pd import seaborn as sns import mne from mne.datasets import eegbci from mne.io import concatenate_raws, read_raw_edf from mne.time_frequency import tfr_multitaper from mne.stats import permutation_cluster_1samp_test as pcluster_test """ Explanation: As usual, we import everything we need. 
End of explanation """ fnames = eegbci.load_data(subject=1, runs=(6, 10, 14)) raw = concatenate_raws([read_raw_edf(f, preload=True) for f in fnames]) raw.rename_channels(lambda x: x.strip('.'))  # remove dots from channel names events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3)) """ Explanation: First, we load and preprocess the data. We use runs 6, 10, and 14 from subject 1 (these runs contain hand and feet motor imagery). End of explanation """ tmin, tmax = -1, 4 event_ids = dict(hands=2, feet=3)  # map event IDs to tasks epochs = mne.Epochs(raw, events, event_ids, tmin - 0.5, tmax + 0.5, picks=('C3', 'Cz', 'C4'), baseline=None, preload=True) """ Explanation: Now we can create 5s epochs around events of interest. End of explanation """ freqs = np.arange(2, 36)  # frequencies from 2-35Hz vmin, vmax = -1, 1.5  # set min and max ERDS values in plot baseline = [-1, 0]  # baseline interval (in s) cnorm = TwoSlopeNorm(vmin=-1, vcenter=0, vmax=1.5)  # min, center, and max ERDS kwargs = dict(n_permutations=100, step_down_p=0.05, seed=1, buffer_size=None, out_type='mask')  # for cluster test """ Explanation: Here we set suitable values for computing ERDS maps.
End of explanation """ tfr = tfr_multitaper(epochs, freqs=freqs, n_cycles=freqs, use_fft=True, return_itc=False, average=False, decim=2) tfr.crop(tmin, tmax).apply_baseline(baseline, mode="percent") for event in event_ids: # select desired epochs for visualization tfr_ev = tfr[event] fig, axes = plt.subplots(1, 4, figsize=(12, 4), gridspec_kw={"width_ratios": [10, 10, 10, 1]}) for ch, ax in enumerate(axes[:-1]): # for each channel # positive clusters _, c1, p1, _ = pcluster_test(tfr_ev.data[:, ch], tail=1, **kwargs) # negative clusters _, c2, p2, _ = pcluster_test(tfr_ev.data[:, ch], tail=-1, **kwargs) # note that we keep clusters with p <= 0.05 from the combined clusters # of two independent tests; in this example, we do not correct for # these two comparisons c = np.stack(c1 + c2, axis=2) # combined clusters p = np.concatenate((p1, p2)) # combined p-values mask = c[..., p <= 0.05].any(axis=-1) # plot TFR (ERDS map with masking) tfr_ev.average().plot([ch], cmap="RdBu", cnorm=cnorm, axes=ax, colorbar=False, show=False, mask=mask, mask_style="mask") ax.set_title(epochs.ch_names[ch], fontsize=10) ax.axvline(0, linewidth=1, color="black", linestyle=":") # event if ch != 0: ax.set_ylabel("") ax.set_yticklabels("") fig.colorbar(axes[0].images[-1], cax=axes[-1]) fig.suptitle(f"ERDS ({event})") plt.show() """ Explanation: Finally, we perform time/frequency decomposition over all epochs. End of explanation """ df = tfr.to_data_frame(time_format=None) df.head() """ Explanation: Similar to ~mne.Epochs objects, we can also export data from ~mne.time_frequency.EpochsTFR and ~mne.time_frequency.AverageTFR objects to a :class:Pandas DataFrame &lt;pandas.DataFrame&gt;. By default, the time column of the exported data frame is in milliseconds. 
Here, to be consistent with the time-frequency plots, we want to keep it in seconds, which we can achieve by setting time_format=None: End of explanation """ df = tfr.to_data_frame(time_format=None, long_format=True) # Map to frequency bands: freq_bounds = {'_': 0, 'delta': 3, 'theta': 7, 'alpha': 13, 'beta': 35, 'gamma': 140} df['band'] = pd.cut(df['freq'], list(freq_bounds.values()), labels=list(freq_bounds)[1:]) # Filter to retain only relevant frequency bands: freq_bands_of_interest = ['delta', 'theta', 'alpha', 'beta'] df = df[df.band.isin(freq_bands_of_interest)] df['band'] = df['band'].cat.remove_unused_categories() # Order channels for plotting: df['channel'] = df['channel'].cat.reorder_categories(('C3', 'Cz', 'C4'), ordered=True) g = sns.FacetGrid(df, row='band', col='channel', margin_titles=True) g.map(sns.lineplot, 'time', 'value', 'condition', n_boot=10) axline_kw = dict(color='black', linestyle='dashed', linewidth=0.5, alpha=0.5) g.map(plt.axhline, y=0, **axline_kw) g.map(plt.axvline, x=0, **axline_kw) g.set(ylim=(None, 1.5)) g.set_axis_labels("Time (s)", "ERDS (%)") g.set_titles(col_template="{col_name}", row_template="{row_name}") g.add_legend(ncol=2, loc='lower center') g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.08) """ Explanation: This allows us to use additional plotting functions like :func:seaborn.lineplot to plot confidence bands: End of explanation """ df_mean = (df.query('time > 1') .groupby(['condition', 'epoch', 'band', 'channel'])[['value']] .mean() .reset_index()) g = sns.FacetGrid(df_mean, col='condition', col_order=['hands', 'feet'], margin_titles=True) g = (g.map(sns.violinplot, 'channel', 'value', 'band', n_boot=10, palette='deep', order=['C3', 'Cz', 'C4'], hue_order=freq_bands_of_interest, linewidth=0.5) .add_legend(ncol=4, loc='lower center')) g.map(plt.axhline, **axline_kw) g.set_axis_labels("", "ERDS (%)") g.set_titles(col_template="{col_name}", row_template="{row_name}") g.fig.subplots_adjust(left=0.1, 
right=0.9, top=0.9, bottom=0.3) """ Explanation: Having the data as a DataFrame also facilitates subsetting, grouping, and other transforms. Here, we use seaborn to plot the average ERDS in the motor imagery interval as a function of frequency band and imagery condition: End of explanation """
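The percent-change baseline applied above with apply_baseline(baseline, mode="percent") can be sketched in NumPy — a toy illustration of the ERDS formula (P − mean_baseline) / mean_baseline on synthetic power values, not MNE's actual implementation:

```python
import numpy as np

# Synthetic power: 2 epochs x 3 freqs x 5 time samples
rng = np.random.default_rng(0)
power = rng.uniform(1.0, 2.0, size=(2, 3, 5))

# Baseline = the first two time samples (a stand-in for the tutorial's baseline window)
ref = power[..., :2].mean(axis=-1, keepdims=True)

# Percent-change ERDS: (P - mean_baseline) / mean_baseline
erds = (power - ref) / ref

# Within the baseline window itself, the mean ERDS is 0 by construction
print(np.allclose(erds[..., :2].mean(axis=-1), 0.0))  # True
```

Negative values then correspond to event-related desynchronization (power below baseline), positive values to synchronization, which is exactly how the ERDS maps above are read.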
Kaggle/learntools
notebooks/deep_learning_intro/raw/tut5.ipynb
apache-2.0
#$HIDE_INPUT$ # Setup plotting import matplotlib.pyplot as plt plt.style.use('seaborn-whitegrid') # Set Matplotlib defaults plt.rc('figure', autolayout=True) plt.rc('axes', labelweight='bold', labelsize='large', titleweight='bold', titlesize=18, titlepad=10) import pandas as pd red_wine = pd.read_csv('../input/dl-course-data/red-wine.csv') # Create training and validation splits df_train = red_wine.sample(frac=0.7, random_state=0) df_valid = red_wine.drop(df_train.index) # Split features and target X_train = df_train.drop('quality', axis=1) X_valid = df_valid.drop('quality', axis=1) y_train = df_train['quality'] y_valid = df_valid['quality'] """ Explanation: Introduction There's more to the world of deep learning than just dense layers. There are dozens of kinds of layers you might add to a model. (Try browsing through the Keras docs for a sample!) Some are like dense layers and define connections between neurons, and others can do preprocessing or transformations of other sorts. In this lesson, we'll learn about two kinds of special layers, not containing any neurons themselves, but that add some functionality that can sometimes benefit a model in various ways. Both are commonly used in modern architectures. Dropout The first of these is the "dropout layer", which can help correct overfitting. In the last lesson we talked about how overfitting is caused by the network learning spurious patterns in the training data. To recognize these spurious patterns a network will often rely on very specific combinations of weights, a kind of "conspiracy" of weights. Being so specific, they tend to be fragile: remove one and the conspiracy falls apart. This is the idea behind dropout. To break up these conspiracies, we randomly drop out some fraction of a layer's input units every step of training, making it much harder for the network to learn those spurious patterns in the training data.
Instead, it has to search for broad, general patterns, whose weight patterns tend to be more robust. <figure style="padding: 1em;"> <img src="https://i.imgur.com/a86utxY.gif" width="600" alt="An animation of a network cycling through various random dropout configurations."> <figcaption style="text-align: center; font-style: italic"><center>Here, 50% dropout has been added between the two hidden layers.</center></figcaption> </figure> You could also think about dropout as creating a kind of ensemble of networks. The predictions will no longer be made by one big network, but instead by a committee of smaller networks. Individuals in the committee tend to make different kinds of mistakes, but be right at the same time, making the committee as a whole better than any individual. (If you're familiar with random forests as an ensemble of decision trees, it's the same idea.) Adding Dropout In Keras, the dropout rate argument rate defines what percentage of the input units to shut off. Put the Dropout layer just before the layer you want the dropout applied to: keras.Sequential([ # ... layers.Dropout(rate=0.3), # apply 30% dropout to the next layer layers.Dense(16), # ... ]) Batch Normalization The next special layer we'll look at performs "batch normalization" (or "batchnorm"), which can help correct training that is slow or unstable. With neural networks, it's generally a good idea to put all of your data on a common scale, perhaps with something like scikit-learn's StandardScaler or MinMaxScaler. The reason is that SGD will shift the network weights in proportion to how large an activation the data produces. Features that tend to produce activations of very different sizes can make for unstable training behavior. Now, if it's good to normalize the data before it goes into the network, maybe also normalizing inside the network would be better! In fact, we have a special kind of layer that can do this, the batch normalization layer.
A batch normalization layer looks at each batch as it comes in, first normalizing the batch with its own mean and standard deviation, and then also putting the data on a new scale with two trainable rescaling parameters. Batchnorm, in effect, performs a kind of coordinated rescaling of its inputs. Most often, batchnorm is added as an aid to the optimization process (though it can sometimes also help prediction performance). Models with batchnorm tend to need fewer epochs to complete training. Moreover, batchnorm can also fix various problems that can cause the training to get "stuck". Consider adding batch normalization to your models, especially if you're having trouble during training. Adding Batch Normalization It seems that batch normalization can be used at almost any point in a network. You can put it after a layer... layers.Dense(16, activation='relu'), layers.BatchNormalization(), ... or between a layer and its activation function: layers.Dense(16), layers.BatchNormalization(), layers.Activation('relu'), And if you add it as the first layer of your network it can act as a kind of adaptive preprocessor, standing in for something like scikit-learn's StandardScaler. Example - Using Dropout and Batch Normalization Let's continue developing the Red Wine model. Now we'll increase the capacity even more, but add dropout to control overfitting and batch normalization to speed up optimization. This time, we'll also leave off standardizing the data, to demonstrate how batch normalization can stabilize the training.
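The normalize-then-rescale behavior described above can be sketched in NumPy (a simplified, training-time forward pass; the real Keras layer also tracks running statistics for use at inference):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature with the batch's own mean and standard deviation...
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # ...then put the data on a new scale with the two trainable
    # rescaling parameters, gamma and beta
    return gamma * x_hat + beta

# Two features on wildly different scales, as in the unstandardized wine data
x = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
out = batchnorm_forward(x, gamma=np.ones(2), beta=np.zeros(2))
print(out.mean(axis=0), out.std(axis=0))  # per-feature mean ~0, std ~1
```

With gamma=1 and beta=0 this is pure standardization; during training the network is free to learn other values of gamma and beta if a different scale works better.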
End of explanation """ from tensorflow import keras from tensorflow.keras import layers model = keras.Sequential([ layers.Dense(1024, activation='relu', input_shape=[11]), layers.Dropout(0.3), layers.BatchNormalization(), layers.Dense(1024, activation='relu'), layers.Dropout(0.3), layers.BatchNormalization(), layers.Dense(1024, activation='relu'), layers.Dropout(0.3), layers.BatchNormalization(), layers.Dense(1), ]) """ Explanation: When adding dropout, you may need to increase the number of units in your Dense layers. End of explanation """ model.compile( optimizer='adam', loss='mae', ) history = model.fit( X_train, y_train, validation_data=(X_valid, y_valid), batch_size=256, epochs=100, verbose=0, ) # Show the learning curves history_df = pd.DataFrame(history.history) history_df.loc[:, ['loss', 'val_loss']].plot(); """ Explanation: There's nothing to change this time in how we set up the training. End of explanation """
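As a footnote to the dropout discussion above, the mechanism can be sketched in NumPy — a minimal "inverted dropout" illustration of what Keras does internally at training time (scaling the surviving units by 1/(1 − rate) keeps the expected activation unchanged):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout_forward(x, rate):
    # Randomly shut off a fraction `rate` of the input units...
    keep = rng.random(x.shape) >= rate
    # ...and scale the survivors so the expected value is unchanged
    return x * keep / (1.0 - rate)

x = np.ones((10000, 16))
out = dropout_forward(x, rate=0.3)
print(out.mean())  # close to 1.0: dropout preserves the expected activation
```

At test time Keras applies no dropout at all; because of the rescaling above, no further correction is needed when the layer is switched off.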
WenboTien/Crime_data_analysis
exploratory_data_analysis/.ipynb_checkpoints/UCIrvine_Crime_data_analysis-checkpoint.ipynb
mit
import pandas as pd import numpy as np from sklearn.preprocessing import Imputer df = pd.read_csv('../datasets/UCIrvineCrimeData.csv'); df = df.replace('?',np.NAN) features = [x for x in df.columns if x not in ['state', 'community', 'communityname', 'county' , 'ViolentCrimesPerPop']] """ Explanation: Read the CSV We use pandas read_csv(path/to/csv) method to read the csv file. Next, replace the missing values with np.NaN i.e. Not a Number. This way we can count the number of missing values per column. End of explanation """ df.isnull().sum() """ Explanation: Find the number of missing values in every column End of explanation """ df.dropna() """ Explanation: Eliminating samples or features with missing values One of the easiest ways to deal with missing values is to simply remove the corresponding features (columns) or samples (rows) from the dataset entirely. Rows with missing values can be easily dropped via the dropna method. End of explanation """ df.dropna(axis=1); """ Explanation: Similarly, we can drop columns that have at least one NaN in any row by setting the axis argument to 1: End of explanation """ #only drop rows where all columns are null df.dropna(how='all'); # drop rows that have not at least 4 non-NaN values df.dropna(thresh=4); # only drop rows where NaN appear in specific columns (here: 'community') df.dropna(subset=['community']); """ Explanation: The dropna() method supports additional parameters that can come in handy. End of explanation """ imr = Imputer(missing_values='NaN', strategy='mean', axis=0) imr = imr.fit(df[features]) imputed_data = imr.transform(df[features]); """ Explanation: Imputing missing values Often, the removal of samples or dropping of entire feature columns is simply not feasible, because we might lose too much valuable data. In this case, we can use different interpolation techniques to estimate the missing values from the other training samples in our dataset.
One of the most common interpolation techniques is mean interpolation, where we simply replace the missing value by the mean value of the entire feature column. A convenient way to achieve this is using the Imputer class from scikit-learn, as shown in the following code. End of explanation """ from sklearn.model_selection import train_test_split #df = df.drop(["communityname", "state", "county", "community"], axis=1) X, y = imputed_data, df['ViolentCrimesPerPop'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0); """ Explanation: Sklearn fundamentals A convenient way to randomly partition the dataset into separate test & training datasets is to use the train_test_split function from scikit-learn. End of explanation """ from itertools import combinations from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score class SBS(): def __init__(self, estimator, features, scoring=r2_score, test_size=0.25, random_state=1): self.scoring = scoring self.estimator = estimator self.features = features self.test_size = test_size self.random_state = random_state def fit(self, X, y): X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = self.test_size, random_state = self.random_state) dim = X_train.shape[1] self.indices_ = tuple(range(dim)) self.subsets_ = [self.indices_] score = self._calc_score(X_train, y_train, X_test, y_test, self.indices_) self.scores_ = [score] while dim > self.features: scores = [] subsets = [] for p in combinations(self.indices_, r=dim-1): score = self._calc_score(X_train, y_train, X_test, y_test, p) scores.append(score) subsets.append(p) best = np.argmax(scores) self.indices_ = subsets[best] self.subsets_.append(self.indices_) dim -= 1 self.scores_.append(scores[best]) print(self.scores_) self.k_score_ = self.scores_[-1] return self def transform(self, X): return X[:, self.indices_] def _calc_score(self, X_train, y_train, X_test, y_test, indices): self.estimator.fit(X_train[:, indices], y_train) y_pred = self.estimator.predict(X_test[:, indices]) score = self.scoring(y_test, y_pred) return score clf = 
LinearRegression() sbs = SBS(clf, features=1) sbs.fit(X_train, y_train) k_feat = [len(k) for k in sbs.subsets_] plt.plot(k_feat, sbs.scores_, marker='o') plt.ylim([-1, 1]) plt.ylabel('Accuracy') plt.xlabel('Number of Features') plt.grid() plt.show() """ Explanation: First, we assigned the NumPy array representation of the feature columns to the variable X, and we assigned the predicted variable to the variable y. Then we used the train_test_split function to randomly split X and y into separate training & test datasets. By setting test_size=0.3 we assigned 30 percent of samples to X_test and the remaining 70 percent to X_train. Sequential Feature Selection algorithm: Sequential Backward Selection (SBS) Sequential feature selection algorithms are a family of greedy search algorithms that can reduce an initial d-dimensional feature space into a k-dimensional feature subspace where k < d. The idea is to select the most relevant subset of features to improve computational efficiency and reduce generalization error. End of explanation """
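As a cross-check on the mean-interpolation step performed earlier with Imputer, the same fill can be sketched directly in NumPy on toy data (note that in modern scikit-learn, Imputer has been replaced by sklearn.impute.SimpleImputer with the same strategy='mean' behavior):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [np.nan, 6.0],
              [3.0, np.nan]])

col_means = np.nanmean(X, axis=0)   # per-column means, ignoring NaNs
rows, cols = np.where(np.isnan(X))
X[rows, cols] = col_means[cols]     # fill each NaN with its column mean
print(X)  # NaNs become the column means: 2.0 and 4.0
```

This is exactly what Imputer(strategy='mean', axis=0) computes for every feature column before transform fills in the gaps.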
nehal96/Deep-Learning-ND-Exercises
Transfer-Learning/Transfer_Learning.ipynb
mit
from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm vgg_dir = 'tensorflow_vgg/' # Make sure vgg exists if not isdir(vgg_dir): raise Exception("VGG directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(vgg_dir + "vgg16.npy"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar: urlretrieve( 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy', vgg_dir + 'vgg16.npy', pbar.hook) else: print("Parameter file already exists!") """ Explanation: Transfer Learning Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture. <img src="assets/cnnarchitecture.jpg" width=700px> VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes. You can read more about transfer learning from the CS231n course notes. Pretrained VGGNet We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. 
Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash. git clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell. End of explanation """ import tarfile dataset_folder_path = 'flower_photos' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile('flower_photos.tar.gz'): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar: urlretrieve( 'http://download.tensorflow.org/example_images/flower_photos.tgz', 'flower_photos.tar.gz', pbar.hook) if not isdir(dataset_folder_path): with tarfile.open('flower_photos.tar.gz') as tar: tar.extractall() tar.close() """ Explanation: Flower power Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial. End of explanation """ import os import numpy as np import tensorflow as tf from tensorflow_vgg import vgg16 from tensorflow_vgg import utils data_dir = 'flower_photos/' contents = os.listdir(data_dir) classes = [each for each in contents if os.path.isdir(data_dir + each)] """ Explanation: ConvNet Codes Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier. Here we're using the vgg16 module from tensorflow_vgg. 
The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code): ``` self.conv1_1 = self.conv_layer(bgr, "conv1_1") self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2") self.pool1 = self.max_pool(self.conv1_2, 'pool1') self.conv2_1 = self.conv_layer(self.pool1, "conv2_1") self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2") self.pool2 = self.max_pool(self.conv2_2, 'pool2') self.conv3_1 = self.conv_layer(self.pool2, "conv3_1") self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2") self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3") self.pool3 = self.max_pool(self.conv3_3, 'pool3') self.conv4_1 = self.conv_layer(self.pool3, "conv4_1") self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2") self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3") self.pool4 = self.max_pool(self.conv4_3, 'pool4') self.conv5_1 = self.conv_layer(self.pool4, "conv5_1") self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2") self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3") self.pool5 = self.max_pool(self.conv5_3, 'pool5') self.fc6 = self.fc_layer(self.pool5, "fc6") self.relu6 = tf.nn.relu(self.fc6) ``` So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use with tf.Session() as sess: vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) This creates the vgg object, then builds the graph with vgg.build(input_). 
Then to get the values from the layer, feed_dict = {input_: images} codes = sess.run(vgg.relu6, feed_dict=feed_dict) End of explanation """ # Set the batch size higher if you can fit in in your GPU memory batch_size = 10 codes_list = [] labels = [] batch = [] codes = None with tf.Session() as sess: # TODO: Build the vgg network here vgg = vgg16.Vgg16() input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) with tf.name_scope("content_vgg"): vgg.build(input_) for each in classes: print("Starting {} images".format(each)) class_path = data_dir + each files = os.listdir(class_path) for ii, file in enumerate(files, 1): # Add images to the current batch # utils.load_image crops the input images for us, from the center img = utils.load_image(os.path.join(class_path, file)) batch.append(img.reshape((1, 224, 224, 3))) labels.append(each) # Running the batch through the network to get the codes if ii % batch_size == 0 or ii == len(files): # Image batch to pass to VGG network images = np.concatenate(batch) # TODO: Get the values from the relu6 layer of the VGG network feed_dict = {input_: images} codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict) # Here I'm building an array of the codes if codes is None: codes = codes_batch else: codes = np.concatenate((codes, codes_batch)) # Reset to start building the next batch batch = [] print('{} images processed'.format(ii)) # write codes to file with open('codes', 'w') as f: codes.tofile(f) # write labels to file import csv with open('labels', 'w') as f: writer = csv.writer(f, delimiter='\n') writer.writerow(labels) """ Explanation: Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). 
End of explanation """ # read codes and labels from file import csv with open('labels') as f: reader = csv.reader(f, delimiter='\n') labels = np.array([each for each in reader]).squeeze() with open('codes') as f: codes = np.fromfile(f, dtype=np.float32) codes = codes.reshape((len(labels), -1)) """ Explanation: Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. End of explanation """ from sklearn.preprocessing import LabelBinarizer lb = LabelBinarizer() lb.fit(labels) labels_vecs = lb.transform(labels) """ Explanation: Data prep As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels. End of explanation """ from sklearn.model_selection import StratifiedShuffleSplit sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) train_idx, val_idx = next(sss.split(codes, labels)) half_val_len = int(len(val_idx) / 2) val_idx, test_idx = val_idx[ :half_val_len], val_idx[half_val_len: ] train_x, train_y = codes[train_idx], labels_vecs[train_idx] val_x, val_y = codes[val_idx], labels_vecs[val_idx] test_x, test_y = codes[test_idx], labels_vecs[test_idx] print("Train shapes (x, y):", train_x.shape, train_y.shape) print("Validation shapes (x, y):", val_x.shape, val_y.shape) print("Test shapes (x, y):", test_x.shape, test_y.shape) """ Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. 
Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn. You can create the splitter like so: ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) Then split the data with splitter = ss.split(x, y) ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide. Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. End of explanation """ inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]]) labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]]) # TODO: Classifier layers and operations fc = tf.contrib.layers.fully_connected(inputs_, 512) logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Operations for validation/test accuracy predicted = tf.nn.softmax(logits) correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) """ Explanation: If you did it right, you should see these sizes for the training sets: Train shapes (x, y): (2936, 4096) (2936, 5) Validation shapes (x, y): (367, 4096) (367, 5) Test shapes (x, y): (367, 4096) (367, 5) Classifier layers Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. Exercise: With the codes and labels loaded, build the classifier.
Consider the codes as your inputs; each of them is a 4096-dimensional vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. End of explanation """ def get_batches(x, y, n_batches=10): """ Return a generator that yields batches from arrays x and y. """ batch_size = len(x)//n_batches for ii in range(0, n_batches*batch_size, batch_size): # If we're not on the last batch, grab data with size batch_size if ii != (n_batches-1)*batch_size: X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] # On the last batch, grab the rest of the data else: X, Y = x[ii:], y[ii:] # I love generators yield X, Y """ Explanation: Batches! Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. End of explanation """ epochs = 10 iteration = 0 saver = tf.train.Saver() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in get_batches(train_x, train_y): feed = {inputs_: x, labels_: y} loss, _ = sess.run([cost, optimizer], feed_dict=feed) print("Epoch: {}/{}".format(e+1, epochs), "Iteration: {}".format(iteration), "Training loss: {:.5f}".format(loss)) iteration += 1 if iteration % 5 == 0: feed = {inputs_: val_x, labels_: val_y} val_acc = sess.run(accuracy, feed_dict=feed) print("Epoch: {}/{}".format(e+1, epochs), "Iteration: {}".format(iteration), "Validation Acc: {:.4f}".format(val_acc)) saver.save(sess, "checkpoints/flowers.ckpt") """ Explanation: Training Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network.
Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own! End of explanation """ with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: test_x, labels_: test_y} test_acc = sess.run(accuracy, feed_dict=feed) print("Test accuracy: {:.4f}".format(test_acc)) %matplotlib inline import matplotlib.pyplot as plt from scipy.ndimage import imread """ Explanation: Testing Below you see the test accuracy. You can also see the predictions returned for images. End of explanation """ test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg' test_img = imread(test_img_path) plt.imshow(test_img) # Run this cell if you don't have a vgg graph built if 'vgg' in globals(): print('"vgg" object already exists. Will not create again.') else: #create vgg with tf.Session() as sess: input_ = tf.placeholder(tf.float32, [None, 224, 224, 3]) vgg = vgg16.Vgg16() vgg.build(input_) with tf.Session() as sess: img = utils.load_image(test_img_path) img = img.reshape((1, 224, 224, 3)) feed_dict = {input_: img} code = sess.run(vgg.relu6, feed_dict=feed_dict) saver = tf.train.Saver() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) feed = {inputs_: code} prediction = sess.run(predicted, feed_dict=feed).squeeze() plt.imshow(test_img) plt.barh(np.arange(5), prediction) _ = plt.yticks(np.arange(5), lb.classes_) """ Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. End of explanation """
mne-tools/mne-tools.github.io
dev/_downloads/33d5dd5786fed13908838e94d55ac785/90_compute_covariance.ipynb
bsd-3-clause
import os.path as op import mne from mne.datasets import sample """ Explanation: Computing a covariance matrix Many methods in MNE, including source estimation and some classification algorithms, require covariance estimations from the recordings. In this tutorial we cover the basics of sensor covariance computations and construct a noise covariance matrix that can be used when computing the minimum-norm inverse solution. For more information, see minimum_norm_estimates. End of explanation """ data_path = sample.data_path() raw_empty_room_fname = op.join( data_path, 'MEG', 'sample', 'ernoise_raw.fif') raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname) raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(raw_fname) raw.set_eeg_reference('average', projection=True) raw.info['bads'] += ['EEG 053'] # bads + 1 more """ Explanation: Source estimation methods such as MNE require noise estimates from the recordings. In this tutorial we cover the basics of noise covariance and construct a noise covariance matrix that can be used when computing the inverse solution. For more information, see minimum_norm_estimates. End of explanation """ raw_empty_room.info['bads'] = [ bb for bb in raw.info['bads'] if 'EEG' not in bb] raw_empty_room.add_proj( [pp.copy() for pp in raw.info['projs'] if 'EEG' not in pp['desc']]) noise_cov = mne.compute_raw_covariance( raw_empty_room, tmin=0, tmax=None) """ Explanation: The definition of noise depends on the paradigm. In MEG it is quite common to use empty room measurements for the estimation of sensor noise. However if you are dealing with evoked responses, you might want to also consider resting state brain activity as noise. First we compute the noise using an empty room recording. Note that you can also use only a part of the recording with tmin and tmax arguments. That can be useful if you use resting state as a noise baseline.
Here we use the whole empty room recording to compute the noise covariance (tmax=None is the same as the end of the recording, see :func:mne.compute_raw_covariance). Keep in mind that you want to match your empty room dataset to your actual MEG data, processing-wise. Ensure that filters are all the same and if you use ICA, apply it to your empty-room and subject data equivalently. In this case we did not filter the data and we don't use ICA. However, we do have bad channels and projections in the MEG data, and, hence, we want to make sure they get stored in the covariance object. End of explanation """ events = mne.find_events(raw) epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.5, baseline=(-0.2, 0.0), decim=3, # we'll decimate for speed verbose='error') # and ignore the warning about aliasing """ Explanation: Note that this method also attenuates any activity in your source estimates that resembles the baseline, whether you like it or not. End of explanation """ noise_cov_baseline = mne.compute_covariance(epochs, tmax=0) """ Explanation: Plot the covariance matrices Try setting proj to False to see the effect. Notice that the projectors in epochs are already applied, so the proj parameter has no effect.
End of explanation """ noise_cov.plot(raw_empty_room.info, proj=True) noise_cov_baseline.plot(epochs.info, proj=True) """ Explanation: How should I regularize the covariance matrix? The estimated covariance can be numerically unstable and tends to induce correlations between estimated source amplitudes and the number of samples available. The MNE manual therefore suggests regularizing the noise covariance matrix (see cov_regularization_math), especially if only few samples are available. Unfortunately it is not easy to tell the effective number of samples, hence, to choose the appropriate regularization. In MNE-Python, regularization is done using advanced regularization methods described in :footcite:p:EngemannGramfort2015. For this the 'auto' option can be used. With this option cross-validation will be used to learn the optimal regularization: End of explanation """ noise_cov_reg = mne.compute_covariance(epochs, tmax=0., method='auto', rank=None) """ Explanation: This procedure evaluates the noise covariance quantitatively by how well it whitens the data using the negative log-likelihood of unseen data. The final result can also be visually inspected. Under the assumption that the baseline does not contain a systematic signal (time-locked to the event of interest), the whitened baseline signal should follow a multivariate Gaussian distribution, i.e., whitened baseline signals should be between -1.96 and 1.96 at a given time sample. Based on the same reasoning, the expected value for the :term:global field power (GFP) <GFP> is 1 (calculation of the GFP should take into account the true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors): End of explanation """ evoked = epochs.average() evoked.plot_white(noise_cov_reg, time_unit='s') """ Explanation: This plot displays both the whitened evoked signals for each channel and the whitened :term:GFP.
The numbers in the GFP panel represent the estimated rank of the data, which amounts to the effective degrees of freedom by which the squared sum across sensors is divided when computing the whitened :term:GFP. The whitened :term:GFP also helps detect spurious late evoked components which can be the consequence of over- or under-regularization. Note that if data have been processed using signal space separation (SSS) :footcite:TauluEtAl2005, gradiometers and magnetometers will be displayed jointly because both are reconstructed from the same SSS basis vectors with the same numerical rank. This also implies that both sensor types are no longer statistically independent. These methods for evaluation can be used to assess model violations. Additional introductory materials can be found here. For expert use cases or debugging the alternative estimators can also be compared (see ex-evoked-whitening): End of explanation """ evoked_meg = evoked.copy().pick('meg') noise_cov['method'] = 'empty_room' noise_cov_baseline['method'] = 'baseline' evoked_meg.plot_white([noise_cov_baseline, noise_cov], time_unit='s') """ Explanation: This will plot the whitened evoked data for the optimal estimator and display the :term:GFP for all estimators as separate lines in the related panel. Finally, let's have a look at the difference between empty room and event-related covariance, hacking the "method" option so that their types are shown in the legend of the plot. End of explanation """
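The whitening step that plot_white visualizes can be sketched in plain NumPy, independent of MNE (the variable names below are illustrative, not part of the MNE API): estimate a noise covariance, build a whitener from its inverse matrix square root, and verify that the whitened noise has approximately identity covariance — which is why whitened baseline values are expected to lie mostly within ±1.96.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate correlated "sensor noise": 5 channels, 10000 samples
n_channels, n_samples = 5, 10000
mixing = rng.normal(size=(n_channels, n_channels))
noise = mixing @ rng.normal(size=(n_channels, n_samples))

# Empirical noise covariance (channels x channels)
cov = noise @ noise.T / n_samples

# Whitener = inverse matrix square root of the covariance
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

whitened = whitener @ noise
cov_white = whitened @ whitened.T / n_samples  # ~ identity matrix
```

In practice MNE also handles rank deficiency and regularization before inverting, which this sketch ignores.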
AtmaMani/pyChakras
udemy_ml_bootcamp/Machine Learning Sections/Support-Vector-Machines/Support Vector Machines with Python.ipynb
mit
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline """ Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a> Support Vector Machines with Python Welcome to the Support Vector Machines with Python Lecture Notebook! Remember to refer to the video lecture for the full background information on the code here! Import Libraries End of explanation """ from sklearn.datasets import load_breast_cancer cancer = load_breast_cancer() """ Explanation: Get the Data We'll use the built-in breast cancer dataset from Scikit Learn. We can get it with the load function: End of explanation """ cancer.keys() """ Explanation: The data set is presented in a dictionary form: End of explanation """ print(cancer['DESCR']) cancer['feature_names'] """ Explanation: We can grab information and arrays out of this dictionary to set up our data frame and understanding of the features: End of explanation """ df_feat = pd.DataFrame(cancer['data'],columns=cancer['feature_names']) df_feat.info() cancer['target'] df_target = pd.DataFrame(cancer['target'],columns=['Cancer']) """ Explanation: Set up DataFrame End of explanation """ df_feat.head() """ Explanation: Now let's actually check out the dataframe! End of explanation """ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(df_feat, np.ravel(df_target), test_size=0.30, random_state=101) """ Explanation: Exploratory Data Analysis We'll skip the Data Viz part for this lecture since there are so many features that are hard to interpret if you don't have domain knowledge of cancer or tumor cells. In your project you will have more to visualize for the data.
Train Test Split End of explanation """ from sklearn.svm import SVC model = SVC() model.fit(X_train,y_train) """ Explanation: Train the Support Vector Classifier End of explanation """ predictions = model.predict(X_test) from sklearn.metrics import classification_report,confusion_matrix print(confusion_matrix(y_test,predictions)) print(classification_report(y_test,predictions)) """ Explanation: Predictions and Evaluations Now let's predict using the trained model. End of explanation """ param_grid = {'C': [0.1,1, 10, 100, 1000], 'gamma': [1,0.1,0.01,0.001,0.0001], 'kernel': ['rbf']} from sklearn.model_selection import GridSearchCV """ Explanation: Woah! Notice that we are classifying everything into a single class! This means our model needs to have its parameters adjusted (it may also help to normalize the data). We can search for parameters using a GridSearch! Gridsearch Finding the right parameters (like what C or gamma values to use) is a tricky task! But luckily, we can be a little lazy and just try a bunch of combinations and see what works best! This idea of creating a 'grid' of parameters and just trying out all the possible combinations is called a Gridsearch; this method is common enough that Scikit-learn has this functionality built in with GridSearchCV! The CV stands for cross-validation. GridSearchCV takes a dictionary that describes the parameters that should be tried and a model to train. The grid of parameters is defined as a dictionary, where the keys are the parameters and the values are the settings to be tested. End of explanation """ grid = GridSearchCV(SVC(),param_grid,refit=True,verbose=3) """ Explanation: One of the great things about GridSearchCV is that it is a meta-estimator. It takes an estimator like SVC, and creates a new estimator that behaves exactly the same - in this case, like a classifier.
You should add refit=True and choose verbose to whatever number you want; the higher the number, the more verbose (verbose just means the text output describing the process). End of explanation """ # May take a while! grid.fit(X_train,y_train) """ Explanation: What fit does is a bit more involved than usual. First, it runs the same loop with cross-validation, to find the best parameter combination. Once it has the best combination, it runs fit again on all data passed to fit (without cross-validation), to build a single new model using the best parameter setting. End of explanation """ grid.best_params_ grid.best_estimator_ """ Explanation: You can inspect the best parameters found by GridSearchCV in the best_params_ attribute, and the best estimator in the best_estimator_ attribute: End of explanation """ grid_predictions = grid.predict(X_test) print(confusion_matrix(y_test,grid_predictions)) print(classification_report(y_test,grid_predictions)) """ Explanation: Then you can re-run predictions on this grid object just like you would with a normal model. End of explanation """
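The search that GridSearchCV performs can be sketched in a few lines of plain Python — a toy illustration of the logic only, with a hypothetical scoring function standing in for a real cross-validated score:

```python
from itertools import product

# Hypothetical stand-in for a CV score; pretend C=10, gamma=0.01 is optimal
def cv_score(C, gamma):
    return -abs(C - 10) - 100 * abs(gamma - 0.01)

param_grid = {'C': [0.1, 1, 10, 100], 'gamma': [1, 0.1, 0.01, 0.001]}

best_score, best_params = float('-inf'), None
# product() enumerates every combination in the grid
for C, gamma in product(param_grid['C'], param_grid['gamma']):
    score = cv_score(C=C, gamma=gamma)
    if score > best_score:
        best_score, best_params = score, {'C': C, 'gamma': gamma}
```

GridSearchCV does essentially this, but scores each combination with k-fold cross-validation and then (with refit=True) refits the best one on the full training set.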
MTG/sms-tools
notebooks/E10-1-Music-piece.ipynb
agpl-3.0
import sys, os sys.path.append('../software/models/') import utilFunctions as UF # read sounds chosen and perform the analysis ### your code here """ Explanation: Exercise 10-1: Music piece combining sound transformations The aim of this exercise is to extend what you did in Exercise 8 by having no limitations on the sounds used and the analysis models and transformations applied. This is an exercise to be creative and make some interesting music. The end result should be a short music piece (about a minute long) combining various transformed sounds using the tools explained in class. In A8, you explored some transformations using the HPS model, but there were many limitations in that assignment: You used a single analysis model, HPS. Now you can use any model presented in class or combinations of them. You applied just one transformation to each sound file. Now you can combine and add several transformations of the same sound. A single type of analysis and transformation was done for the whole sound. Now you can divide the sound into sections and use different analysis and transformations to each part. The transformation was applied in a single pass. Now you can perform multiple passes of the same sound through different kinds of transformations. In this exercise, you can explore the potential of all the algorithms presented in class and come up with more creative transformations and combinations of them. You can combine sounds, models, and transformations, in any way you want. You can create your own mix of transformed sounds to make your own music composition! Some of these transformations might require you to change and modify some code. Feel free to dig in! Part 1 Choose several sounds from freesound.org that have a good scope for being analyzed with the techniques described in class and good potential for creative transformations. As with E8, you can even upload your own sounds to Freesound and use them in the exercise. 
The sounds chosen should be naturally produced (any sound that was not synthesized), e.g. acoustic instrument sounds, speech, nature sounds, and ambient sounds, to name a few. Heavily processed natural sounds are acceptable, but refrain from using them unless justified. Perform the analysis and check that the synthesis without transformations is good. End of explanation """ # perform the transformations ### your code here """ Explanation: Part 2 Perform different transformations on each of the sounds, or parts of them. Then mix them together (overlapping sounds are allowed) using Audacity or a similar tool to create a single audio file. The only constraints are that you should only use models that you studied in the class (STFT, Sinusoidal, SPR, HPR, SPS, HPS) for analysis. This is a music piece you just composed with transformations on sound samples! End of explanation """ # 3.1 mix the sounds and explain your choices ### your code here """ Explanation: Part 3 Mix the sounds together (overlapping sounds are allowed). Be as creative as you can. You can repeat the sounds you have chosen. Give a description of what you did, giving the Freesound link to the sound files you started from, explaining the analysis and transformation you did to each of them, and explaining how you mixed them to obtain the final piece. The description should be like a script from which the evaluator can have a clear idea of the process that you followed. No need to give all the details, thus no need to have a script from which the composition could be regenerated, but the description should be clear enough to understand the whole process. End of explanation """
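As a reminder of what these analysis models start from, here is a minimal STFT magnitude sketch in plain NumPy — a simplified stand-in for the sms-tools stft model, with arbitrary illustrative frame and hop sizes:

```python
import numpy as np

def stft_mag(x, n_fft=1024, hop=256):
    """Magnitude spectrogram of a mono signal using a Hann-windowed FFT."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# A one-second 440 Hz tone at 44.1 kHz: energy should peak near
# bin 440 * n_fft / fs = 440 * 1024 / 44100, i.e. around bin 10
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
mag = stft_mag(x)
peak_bin = int(mag[0].argmax())
```

The sms-tools models add phase handling, zero-phase windowing, and synthesis on top of this basic frame-by-frame spectral view.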
DSSatPitt/katz-python-workshop
intro-to-python/participant.ipynb
cc0-1.0
# open the source CSV file csv = open("cars.csv") # create a list with the column names. we assume the first row contiains them. # we strip the carriage return (if there is one) from the line, then split values on the commas. # Note: this uses a nifty python feature called 'list comprehension' to do it in one line column_names = [i for i in csv.readline().strip().split(',')] # read the rest of the file into a matrix (a list of lists). Use the same strip and split methods. data = [line.strip().split(',') for line in csv.readlines()] # now, try to infer the data types of each column from the values in the first row. # the testing here shows some string methods, like isspace(), isalpha(), isdigit(). # we'll save these data type assumptions because we'll use them later in a report. column_datatypes = [] for value in data[0]: if len(value) < 1 or value.isspace(): column_datatypes.append('string') elif value.isalpha(): column_datatypes.append('string') elif '.' in value or value.isdigit(): column_datatypes.append('numeric') else: column_datatypes.append('string') # now let's do some basic reporting on the csv # overall stats of the file: print("this csv file has " + str(len(column_names)) + " columns and " + str(len(data)) + " rows.") # loop over each column name, do some different things depending on whether we've inferred # it contains string or numeric values. we declare certain variables with 'False' so even if # we can't fill them we can test them without an error. for i, value in enumerate(column_names): average_value = False highest_value = False lowest_value = False # if it's a numeric column, we'll get all the values for this column out of our data matrix, # convert them to float (remember they are all strings by default), and then get the average, # high, and low values. 
If there's an error doing this, just get the values as strings if column_datatypes[i] == 'numeric': try: column_values = [float(data[j][i]) for j in range(len(data))] average_value = sum(column_values)/len(column_values) highest_value = sorted(column_values)[-1] lowest_value = sorted(column_values)[0] except ValueError: column_values = [data[j][i] for j in range(len(data))] else: column_values = [data[j][i] for j in range(len(data))] # the set function removes duplicates from a list, so taking its length is equivilent # to the number of unique values unique_value_count = len(set(column_values)) # now we start printing. First just the field name. The simple way of formatting a string # is with the + operator. Note: we add one to the index because we don't want our list # to start with zero. print(str(i+1) + ". \"" + value + "\"") # next the type we think it is, and the number of unique values # Note: using the + style of string formatting all non-string values have to be cast to strings print("\t{0} ({1} of {2} unique)".format(column_datatypes[i], unique_value_count, len(data))) # now different details if it's numeric and successfully converted to float, if it's # numeric, and didnt', and otherwise we assume it's a string. # Note: also showing a different, more powerful string formatting method here if column_datatypes[i] == 'numeric': if average_value: print("\taverage value: {0:g}".format(average_value)) print("\tlowest value: {0:g}".format(lowest_value)) print("\thighest value: {0:g}".format(highest_value)) else: print("\tNOTE: problems converting values to float!") else: print("\tfirst value: {0:s}".format(column_values[0])) print("\tlast value: {0:s}".format(column_values[-1])) """ Explanation: Introduction to Python This tutorial was originally drawn from Scipy Lecture Notes by this list of contributors. I've continued to modify it as I use it. This work is CC-BY. Author: Aaron L. Brenner Python is a programming language, as are C, Fortran, BASIC, PHP, etc. 
Some specific features of Python are as follows: an interpreted (as opposed to compiled) language. Contrary to e.g. C or Fortran, one does not compile Python code before executing it. In addition, Python can be used interactively: many Python interpreters are available, from which commands and scripts can be executed. free software released under an open-source license: Python can be used and distributed free of charge, even for building commercial software. multi-platform: Python is available for all major operating systems, Windows, Linux/Unix, MacOS X, most likely your mobile phone OS, etc. a very readable language with clear non-verbose syntax a language for which a large variety of high-quality packages are available for various applications, from web frameworks to scientific computing. a language very easy to interface with other languages, in particular C and C++. Some other features of the language are illustrated just below. For example, Python is an object-oriented language, with dynamic typing (the same variable can contain objects of different types during the course of a program). See https://www.python.org/about/ for more information about distinguishing features of Python. Some Key Learning and Reference Resources If you are interested in moving forward with learning Python, it is worth your time to get acquainted with all of these resources. The tutorial will step you through more Python, and you should be familiar with the basics of the Python language and its standard library. Python documentation home https://docs.python.org/3/ Tutorial https://docs.python.org/3/tutorial/index.html Python Language Reference https://docs.python.org/3/reference/index.html#reference-index The Python Standard Library https://docs.python.org/3/library/index.html#library-index Additional Learning Resources The Python Cookbook, 3rd Edition - This is one of many 'cookbooks'.
These can be very useful not only for seeing solutions to common problems, but also as a way to read brief examples of ideomatic code. Reading code snippets in this way can be a great compliment to language reference documentation and traditional tutorials. http://chimera.labs.oreilly.com/books/1230000000393/ Also, don't be embarrased to Google your questions! Try some variation of python [thing] example Let's dive in* with an example that does something (kind of) useful: * hat tip to Mark Pilgrim This is a script that inspects a CSV data file and reports on some summary characteristics. Take a minute to read over the code before running it. Don't worry if you don't understand all of what's happening. We'll step through some of this code in more detail as we learn the basics of python. For now, just try to get a feel for what a complete script looks like. After you've read it over, go ahead and execute it. End of explanation """ 1 + 1 """ Explanation: After you've run this as-is, change the name of the CSV file from cars.csv to cities.csv. Run it again. First Steps Follow along with the instructor in typing instructions: Two variables a and b have been defined above. Note that one does not declare the type of an variable before assigning its value. In addition, the type of a variable may change, in the sense that at one point in time it can be equal to a value of a certain type, and a second point in time, it can be equal to a value of a different type. b was first equal to an integer, but it became equal to a string when it was assigned the value ’hello’. But you can see that type often matters, as when we try to print an integer in the midst of a string. Basic Types Numerical Types Python supports the following numerical, scalar types. Integer End of explanation """ a = len(column_names) type(a) """ Explanation: Remember how we saw integers in our CSV script example? 
The number of columns and number of rows are integers End of explanation """ c = 2.1 type(c) type(average_value) """ Explanation: Floats Note: most decimal fractions cannot be represented exactly as binary fractions, and certain operations using floats may lead to surprising results. For more details, start here. End of explanation """ 3 > 4 test = (3 > 4) test type(test) """ Explanation: Booleans End of explanation """ float(1) """ Explanation: A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations +, -, *, /, % (modulo) natively implemented. Try some things here or follow along with the instructor's examples: Type conversion (casting): End of explanation """ # this is a comment. We might say, for example, that we're setting the value of Pi: pi = 3.14 pie = 'pumpkin' # and this is an in-line comment. Setting the value of pie. """ Explanation: Comments Commenting code is good practice and is extremely helpful to help others understand your code. And, often, to help you understand code that you've written earlier. In python, everything following the hash/pound sign # is a comment. Comments can either be their own line(s), or in-line. Use in-line comments sparingly. End of explanation """ l = ['red', 'blue', 'green', 'black', 'white'] type(l) """ Explanation: Exercise Add three of the same single digit integers (e.g. 1 + 1 + 1). Is the result what you expected? Next, add three of the same tenths digit, which are floats (e.g. .1 + .1 + .1). Is the result what you expected? What happened with those floats? How could we avoid this? There is an explanation and some suggestions in this python documentaion on floating points. Containers Python provides many efficient types of containers, in which collections of data can be stored. Lists A list is an ordered collection of objects, that may have different types. 
For example: End of explanation """ column_names """ Explanation: And remember in our CSV script example, we used lots of lists! There was a list to store the column names, which we assumed were in the first row of data: End of explanation """ data """ Explanation: And then each row of the CSV was itself a list, and all the rows were another list. So we used a list of lists, or a matrix. End of explanation """ column_names[0] column_names[-1] """ Explanation: Indexing: accessing individual objects contained in the list: End of explanation """ column_names[1:3] """ Explanation: Indexing starts at 0, not 1! Slicing: obtaining sublists of regularly-spaced elements: End of explanation """ column_names[0] = 'LOOK AT ME!' column_names """ Explanation: Note that l[start:stop] contains the elements with indices i such as start&lt;= i &lt; stop (i ranging from start to stop-1). Therefore, l[start:stop] has (stop - start) elements. Slicing syntax: l[start:stop:stride] Lists are mutable objects and can be modified: End of explanation """ l = [3, -200, 'hello'] l """ Explanation: The elements of a list may have different types: End of explanation """ L = ['red', 'blue', 'green', 'black', 'white'] L.append('pink') L L.pop() # removes and returns the last item L """ Explanation: Python offers a large panel of functions to modify lists, or query them. Here are a few examples; for more details, see https://docs.python.org/tutorial/datastructures.html#more-on-lists Add and remove elements: End of explanation """ L.extend(['pink', 'purple']) # extend L, in-place L L = L[:-2] L """ Explanation: Add a list to the end of a list with extend() End of explanation """ r = L[::-1] r r.reverse() # in-place r """ Explanation: Two ways to reverse a list: End of explanation """ r + L """ Explanation: Concatenate lists: End of explanation """ sorted(r) # new object r r.sort() #in-place r """ Explanation: Sort: End of explanation """ s = 'Hello, how are you?' 
s = "Hi, what's up" s = '''Hello, # tripling the quotes allows the how are you''' # the string to span more than one line s = """Hi, what's up?""" """ Explanation: Exercise We used sorted() in our CSV example a few times. That, along with list indexes, helped us get the lowest and highest value. Here's what it looked like: highest_value = sorted(column_values)[-1] lowest_value = sorted(column_values)[0] Try creating your own list of unsorted items. Can you replicate the highest and lowest value expressions above? What happens if your list is not made up of numbers? Methods The notation r.method() (e.g. r.append(3) and L.pop()) is our first example of object-oriented programming (OOP). Being a list, the object r has a method function that is called using the notation .methodname(). We will talk about functions later in this tutorial. When you're using jupyter, to see all the different methods available to a variable, type a period after the variable name and hit the tab key. Strings We've already seen strings a few times. Python supports many different string syntaxes (single, double or triple quotes): End of explanation """ s = 'Hi, what's up?' """ Explanation: Double quotes are crucial when you have a quote in the string: End of explanation """ a = "hello" a[0] a[1] a[-1] """ Explanation: The newline character is \n, and the tab character is \t. Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules. Indexing strings: End of explanation """ a = "hello, world!" a[2] = 'z' a.replace('l', 'z', 1) a.replace('l', 'z') """ Explanation: Accents and special characters can also be handled in strings because since Python 3, the string type handles unicode (UTF-8) by default.(For a lot more on Unicode, character encoding, and how it relates to python, see https://docs.python.org/3/howto/unicode.html). A string is an immutable object and it is not possible to modify its contents. 
If you want to modify a string, you'll create a new string from the original one (or use a method that returns a new string). End of explanation """ 'An integer: {0} ; a float: {1} ; another string: {2} '.format(1, 0.1, 'string') i = 102 filename = 'processing_of_dataset_{0}.txt'.format(i) filename """ Explanation: Strings have many useful methods, such as a.replace as seen above. Remember the a. object-oriented notation and use tab completion or help(str) to search for new methods. Exercise We used a few string methods in the CSV example at the start. See them in this chunk of code? if len(value) &lt; 1 or value.isspace(): column_datatypes.append('string') elif value.isalpha(): column_datatypes.append('string') elif '.' in value or value.isdigit(): Now you try. Create a new variable and assign it with a string. Then, try a few of python's string methods to see how you can return different versions of your string, or test whether it has certain characteristics. String formatting: We also saw string formatting in the CSV example, when we printed some of the reporting: print("\t{0} ({1} of {2} unique)".format(column_datatypes[i], unique_value_count, len(data))) End of explanation """ tel = {'emmanuelle': 5752, 'sebastian': 5578} tel['francis'] = 5915 tel tel['sebastian'] tel.keys() tel.values() 'francis' in tel """ Explanation: Dictionaries A dictionary is basically an efficient table that maps keys to values. It is an unordered container: End of explanation """ d = {'a':1, 'b':2, 3:'hello'} d """ Explanation: It can be used to conveniently store and retrieve values associated with a name (a string for a date, a name, etc.). See https://docs.python.org/tutorial/datastructures.html#dictionaries for more information. A dictionary can have keys (resp. values) with different types: End of explanation """ a = [1, 2, 3] b = a a b a is b b[1] = "hi!" 
a """ Explanation: The assignment operator Python library reference says: Assignment statements are used to (re)bind names to values and to modify attributes or items of mutable objects. In short, it works as follows (simple assignment): 1. an expression on the right hand side is evaluated, the corresponding object is created/obtained 2. a name on the left hand side is assigned, or bound, to the r.h.s. object Things to note: * a single object can have several names bound to it: End of explanation """ if 2**2 == 4: print('Obviously') """ Explanation: Control Flow Controls the order in which the code is executed. Conditional Expressions if &lt;THING&gt; Evaluates to false: * any number equal to zero (0,0.0) * an empty container (list, dictionary) * False, None Evaluates to True: * everything else a == b Tests equality: if/elif/else End of explanation """ b = [1,2,3] 2 in b 5 in b """ Explanation: a in b For any collection b check to see if b contains a: End of explanation """ a = 10 if a == 1: print(1) elif a == 2: print(2) else: print("A lot") """ Explanation: Blocks are delimited by indentation Type the following lines in your Python interpreter, and be careful to respect the indentation depth. The Jupyter Notebook automatically increases the indentation depth after a coln : sign; to decrease the indentation depth, go four spaces to the left with the Backspace key. Press the Enter key twice to leave the logical block. End of explanation """ for i in range(4): print(i) """ Explanation: for/range Iterating with an index: End of explanation """ for word in ('cool', 'powerful', 'readable'): print('Python is %s ' % word) """ Explanation: But most often, it is more readable to iterate over values: End of explanation """ vowels = 'aeiouy' for i in 'powerful': if i in vowels: print(i) message = "Hello how are you?" 
message.split() # returns a list for word in message.split(): print(word) """ Explanation: Advanced iteration Iterate over any sequence You can iterate over any sequence (string, list, keys in a dictionary, lines in a file, ...): End of explanation """ words = ['cool', 'powerful', 'readable'] for i in range(0, len(words)): print(i, words[i]) """ Explanation: Few languages (in particular, languages for scientific computing) allow to loop over anything but integers/indices. With Python it is possible to loop exactly over the objects of interest without bothering with indices you often don’t care about. This feature can often be used to make code more readable. It is not safe to modify the sequence you are iterating over. Keeping track of enumeration number Common task is to iterate over a sequence while keeping track of the item number. * Could use while loop with a counter as above. Or a for loop: End of explanation """ for index, item in enumerate(words): print(index, item) """ Explanation: But, Python provides a built-in function - enumerate - for this: End of explanation """ d = {'a': 1, 'b':1.2, 'c':"hi"} for key, val in sorted(d.items()): print('Key: %s has value: %s ' % (key, val)) """ Explanation: When looping over a dictionary use .items(): End of explanation """ [i**2 for i in range(4)] """ Explanation: The ordering of a dictionary in random, thus we use sorted() which will sort on the keys. Exercise Countdown to blast off Write code that uses a loop to print a count down from 10 to 1, followed by printing the string "blast off!". There's more than one way to do this, so figure out what works for you, drawing on things we've already learned. 
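One more sketch of the warning above that it is not safe to modify a sequence while iterating over it — a common workaround is to iterate over a copy (the slice below makes one):

```python
words = ['cool', 'powerful', 'readable', 'slow']

# Iterate over a copy (words[:]) so the original can be mutated safely
for word in words[:]:
    if word == 'slow':
        words.remove(word)

print(words)  # ['cool', 'powerful', 'readable']
```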
List Comprehensions End of explanation """ l = [] for i in range(4): l.append(i) l """ Explanation: Same as: End of explanation """ [10 - i for i in range(10)] """ Explanation: Now that you've done the countdown exercise above, consider how you could have used a list comprehension in a solution: End of explanation """ def test(): print('in test function') test() """ Explanation: Defining Functions Function definition End of explanation """ def disk_area(radius): return 3.14 * radius * radius disk_area(1.5) """ Explanation: Function blocks must be indented as other control flow blocks Return statement Functions can optionally return values: End of explanation """ def double_it(x): return x * 2 double_it(3) double_it() """ Explanation: By default, functions return None. Note the syntax to define a function: * the def keyword; * is followed by the function’s name, then * the arguments of the function are given between parentheses followed by a colon. * the function body; * and return object for optionally returning values. Parameters Mandatory parameters (positional arguments): End of explanation """ def double_it(x=2): return x * 2 double_it() double_it(3) """ Explanation: Optional parameters (keyword or named arguments) End of explanation """ # We're defining a global variable for pi, and it's actually a special kind of global # because we intend it to be constant (i.e. it's value doesn't change). There's a convention # of using uppercase in naming constants. See https://www.python.org/dev/peps/pep-0008/#constants PI = 3.14159 def disk_area(radius): return PI * radius * radius disk_area(1.5) """ Explanation: Keyword arguments allow you to specify default values. Default values are evaluated when the function is defined, not when it is called. This can be problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body, since the modifications will be persistent across invocations of the function. 
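That warning about mutable default values can be made concrete with a short sketch: the default list is created once, when the function is defined, and is then shared across calls; the idiomatic fix is a None default.

```python
def append_bad(x, items=[]):       # [] is evaluated once, at definition time
    items.append(x)
    return items

def append_good(x, items=None):    # idiomatic fix: make a new list per call
    if items is None:
        items = []
    items.append(x)
    return items

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2] -- the same list keeps growing across calls!
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```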
Global variables Variables declared outside the function can be referenced within the function: End of explanation """ def funcname(params): """Concise one-line sentence describing the function. Extended summary which can contain multiple paragraphs. """ # function body pass """ Explanation: Docstrings Documentation about what the function does and its parameters. General convention: End of explanation """ funcname? """ Explanation: There's a great help feature build into Jupyter: type a question mark after any object or function to get quick access to its docstring. Try it: End of explanation """ import os os """ Explanation: Docstring guidelines For the sake of standardization, the Docstring Conventions webpage documents the semantics and conventions associated with Python docstrings. Also, the Numpy and Scipy modules have defined a precise standard for documenting scientific functions, that you may want to follow for your own functions, with a Parameters section, an Examples section, etc. See http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard and http://projects.scipy.org/numpy/browser/trunk/doc/example.py#L37 Create your own function Use any features of Python that we've already worked on, or play with something new from the documentation. Write a function, and include a docstring explaining the function's purpose. Test your function by executing it. If your function uses parameters, try calling the function a few times with different parameters. Define the function here: Call the function here: Importing modules Importing objects from modules Modules let you use code that doesn't reside within the notebook and might be part of the standard library. (more on this later) End of explanation """ os.listdir('.') """ Explanation: Methods Methods are functions attached to objects. You’ve seen these in our examples on lists, dictionaries, strings, etc... 
End of explanation
"""
from os import listdir
listdir('.')
"""
Explanation: And also:
End of explanation
"""
import pandas as pd
"""
Explanation: Using alias:
End of explanation
"""
import pandas as pd
pd.Series([0,1,2,3,4,5,6,7,8,9])
"""
Explanation: Modules are thus a good way to organize code in a hierarchical way. Actually, all the data science tools we are going to use are modules:
End of explanation
"""
long_line = "Here is a very very long line \
... that we break in two parts."
"""
Explanation: Good Practices
Use meaningful object names
Indentation: no choice! Indenting is compulsory in Python! Every command block following a colon bears an additional indentation level with respect to the previous line with a colon. One must therefore indent after def f(): or while:. At the end of such logical blocks, one decreases the indentation depth (and re-increases it if a new block is entered, etc.) Strict respect of indentation is the price to pay for getting rid of { or ; characters that delineate logical blocks in other languages. Improper indentation leads to errors such as:
```IndentationError: unexpected indent (test.py, line 2)```
All this indentation business can be a bit confusing in the beginning. However, with the clear indentation, and in the absence of extra characters, the resulting code is very nice to read compared to other languages.
* **Indentation depth**: Inside your text editor, you may choose to indent with any positive number of spaces (1, 2, 3, 4, ...). However, it is considered good practice to **indent with 4 spaces**. You may configure your editor to map the `Tab` key to a 4-space indentation. In Python(x,y), the editor is already configured this way.
* Style guidelines
Long lines: you should not write very long lines that span over more than (e.g.) 80 characters. 
Long lines can be broken with the \ character End of explanation """ a = 1 # yes a=1 # too cramped """ Explanation: Spaces Write well-spaced code: put whitespaces after commas, around arithmetic operators, etc.: End of explanation """ f = open('workfile.txt', 'w') # opens the workfile file in writing mode type(f) f.write('This is a test \nand another test') f.close() # always use close() after opening a file! Very important! """ Explanation: A certain number of rules for writing “beautiful” code (and more importantly using the same conventions as anybody else!) are given in the PEP-8: Style Guide for Python Code. Input and Output We write or read strings to/from files (other types must be converted to strings). To write in a file: End of explanation """ f = open('workfile.txt', 'r') s = f.read() print(s) f.close() """ Explanation: To read from a file: End of explanation """ f = open('workfile.txt', 'r') for line in f: print(line) f.close() """ Explanation: For more details: https://docs.python.org/tutorial/inputoutput.html Iterating over a file End of explanation """ import os os.getcwd() """ Explanation: File modes: * r: Read-only * w: Write-only - Note: This will create a new file or overwrite an existing file * a: Append to a file * r+: Read and Write For more information about file modes read the documentation for the open() function. 
https://docs.python.org/3.5/library/functions.html#open The Standard Library Reference documentation for this section: * The Python Standard Library documentation: https://docs.python.org/library/index.html * Python Essential Reference, David Beazley, Addison-Wesley Professional os module: operating system functionality "A portable way of using operating system dependent functionality.” **Directory and file manipulation Get the current directory: End of explanation """ os.listdir(os.curdir) """ Explanation: List a directory: End of explanation """ os.mkdir('junkdir') 'junkdir' in os.listdir(os.curdir) """ Explanation: Make a directory: End of explanation """ os.rename('junkdir', 'foodir') 'junkdir' in os.listdir(os.curdir) 'foodir' in os.listdir(os.curdir) os.rmdir('foodir') #remove directory 'foodir' in os.listdir(os.curdir) """ Explanation: Rename the directory: End of explanation """ fp = open('junk.txt', 'w') fp.close() 'junk.txt' in os.listdir(os.curdir) os.remove('junk.txt') 'junk.txt' in os.listdir(os.curdir) """ Explanation: Delete a file: End of explanation """ import glob glob.glob('*.txt') """ Explanation: glob: Pattern matching on files The glob module provides convenient file pattern matching. Find all files ending in .txt: End of explanation """ 1/0 d = {1:1, 2:2} d[3] l = [1, 2, 3] l[4] l.foobar """ Explanation: Exception handling in Python It is likely that you have raised Exceptions if you have typed all the previous commands of the tutorial. For example, you may have raised an exception if you entered a command with a typo. Exceptions are raised by different kinds of errors arising when executing Python code. In your own code, you may also catch errors, or define custom error types. You may want to look at the descriptions of the built-in Exceptions when looking for the right exception type. 
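The remark above about defining custom error types can be sketched as follows (the exception class and function here are hypothetical examples, not part of the tutorial):

```python
class NegativeValueError(ValueError):
    """Raised when a value that must be non-negative is negative."""

def checked_sqrt(x):
    if x < 0:
        raise NegativeValueError('expected a non-negative value, got %s' % x)
    return x ** 0.5

try:
    checked_sqrt(-4)
except NegativeValueError as err:
    print(err)  # expected a non-negative value, got -4
```

Deriving from an appropriate built-in (here ValueError) lets callers catch either the specific type or the general one.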
Exceptions Exceptions are raised by errors in Python: End of explanation """ while True: try: x = int(input('Please enter a number: ')) break except ValueError: print('That was no valid number. Try again...') x """ Explanation: Catching exceptions try/except End of explanation """ try: x = int(input('Please enter a number: ')) finally: print('Thank you for your input.') """ Explanation: try/finally End of explanation """ def filter_name(name): try: name = name.encode('ascii') except UnicodeError as e: if name == 'Gaël': print("OK, Gaël") else: raise e return name filter_name("Gaël") filter_name('Stéfan') """ Explanation: Raising exceptions Capturing and reraising an exception: End of explanation """ def achilles_arrow(x): if abs(x - 1) < 1e-3: raise StopIteration x = 1 - (1-x)/2. return x x = 0 while True: try: x = achilles_arrow(x) except StopIteration: break x """ Explanation: Exceptions to pass messages between parts of the code: End of explanation """
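Besides the try/except and try/finally forms shown above, a try block can also carry an else clause, which runs only when no exception was raised; a small self-contained example:

```python
def parse_int(s):
    try:
        value = int(s)
    except ValueError:
        print('could not parse %r' % s)
        return None
    else:
        # runs only if int(s) succeeded
        print('parsed %d' % value)
        return value
    finally:
        # runs in every case, after except or else
        print('done')

parse_int('42')    # parsed 42, then done
parse_int('oops')  # could not parse 'oops', then done
```

Putting follow-up work in else (rather than inside try) keeps the except clause from accidentally catching exceptions raised by that follow-up code.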
akutuzov/webvectors
preprocessing/rusvectores_tutorial.ipynb
gpl-3.0
import wget

udpipe_url = 'https://rusvectores.org/static/models/udpipe_syntagrus.model'
text_url = 'https://rusvectores.org/static/henry_sobolya.txt'

modelfile = wget.download(udpipe_url)
textfile = wget.download(text_url)
"""
Explanation: RusVectōrēs: semantic models for the Russian language
Elizaveta Kuzmenko, Andrey Kutuzov
In this tutorial we look at the capabilities of the RusVectōrēs web service and the vector semantic models it provides to its users. Our goal is to go from "raw" text (i.e. text without any preprocessing) to data that we can pass to a vector model in order to obtain the result we are interested in.
The tutorial consists of three parts:
* in the first part we learn to preprocess text files so that they can later be used as input for the RusVectōrēs models.
* in the second part we learn to work with vector models and to perform simple operations on word vectors, such as "find semantic analogues", "add the vectors of two words", "compute the similarity score of two word vectors".
* in the third part we learn to access the RusVectōrēs service through its API.
We recommend using Python3; the tutorial is not guaranteed to work under Python2.
1. Preprocessing text data
The RusVectōrēs functionality lets users query the models with one specific word or with several words. The service can also be used to analyse the relations between a larger set of words. But what if you want to process a very large collection of texts, or your task cannot be solved with the individual manual queries to the server? In that case you can download one of our models and then use it to process texts locally on your own computer. 
However, in that case the data fed to the model must be in the same format as the data the model was trained on. You can use our ready-made scripts to turn raw text into text in a format that can be fed to the model. The scripts are available here. As their names suggest, one of the scripts uses UDPipe for preprocessing and the other uses Mystem. Both scripts use the standard input and output streams: they take text as input and output the same text, only lemmatized and with part-of-speech tags.
If you want to work through all the details and understand, for example, what the difference between UDPipe and Mystem is, read on :)
The texts used to train the models were preprocessed as follows:
* lemmatization and removal of stop words;
* lowercasing of the lemmas;
* adding a part-of-speech tag to each word.
The last preprocessing step deserves special attention. As you can see from the table describing the models, the part-of-speech tags in the different models belong to different tagsets. The first models used the Mystem tagset; later we switched to Universal POS tags. The fastText-based models do not use part-of-speech tags at all.
Let's try to reproduce the preprocessing pipeline on O. Henry's short story "Русские соболя" ("Russian Sables"). For preprocessing we suggest using UDPipe, so that we immediately get the part-of-speech annotation as Universal POS tags. First, install the UDPipe wrapper for Python:
pip install ufal.udpipe
UDPipe uses pre-trained models for lemmatization and tagging. You can use our model or train your own. 
To download files, you can use the wget implementation packaged as a Python library:
pip install wget
End of explanation
"""
def num_replace(word):
    newtoken = 'x' * len(word)
    return newtoken

def clean_token(token, misc):
    out_token = token.strip().replace(' ', '')
    if token == 'Файл' and 'SpaceAfter=No' in misc:
        return None
    return out_token

def clean_lemma(lemma, pos):
    out_lemma = lemma.strip().replace(' ', '').replace('_', '').lower()
    if '|' in out_lemma or out_lemma.endswith('.jpg') or out_lemma.endswith('.png'):
        return None
    if pos != 'PUNCT':
        if out_lemma.startswith('«') or out_lemma.startswith('»'):
            out_lemma = ''.join(out_lemma[1:])
        if out_lemma.endswith('«') or out_lemma.endswith('»'):
            out_lemma = ''.join(out_lemma[:-1])
        if out_lemma.endswith('!') or out_lemma.endswith('?') or out_lemma.endswith(',') \
                or out_lemma.endswith('.'):
            out_lemma = ''.join(out_lemma[:-1])
    return out_lemma
"""
Explanation: Before lemmatization and tagging, our models were cleaned of punctuation and possible errors using Tatiana Shavrina's filters. You can see the auxiliary text-cleaning functions in the preprocessing script. From that script we take three functions, num_replace, clean_token and clean_lemma, which strip punctuation marks and various typographic artifacts. 
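For orientation, the target format of the whole pipeline (space-separated lemma_POS tokens) can be illustrated with toy data; the (lemma, tag) pairs below are stand-ins for real tagger output, not part of the tutorial's code:

```python
# toy (lemma, UPOS tag) pairs, standing in for real UDPipe output
analyses = [('зеленый', 'ADJ'), ('трамвай', 'NOUN')]

line = ' '.join('%s_%s' % (lemma, pos) for lemma, pos in analyses)
print(line)  # зеленый_ADJ трамвай_NOUN
```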
End of explanation """ def process(pipeline, text='Строка', keep_pos=True, keep_punct=False): entities = {'PROPN'} named = False memory = [] mem_case = None mem_number = None tagged_propn = [] # обрабатываем текст, получаем результат в формате conllu: processed = pipeline.process(text) # пропускаем строки со служебной информацией: content = [l for l in processed.split('\n') if not l.startswith('#')] # извлекаем из обработанного текста леммы, тэги и морфологические характеристики tagged = [w.split('\t') for w in content if w] for t in tagged: if len(t) != 10: continue (word_id, token, lemma, pos, xpos, feats, head, deprel, deps, misc) = t token = clean_token(token, misc) lemma = clean_lemma(lemma, pos) if not lemma or not token: continue if pos in entities: if '|' not in feats: tagged_propn.append('%s_%s' % (lemma, pos)) continue morph = {el.split('=')[0]: el.split('=')[1] for el in feats.split('|')} if 'Case' not in morph or 'Number' not in morph: tagged_propn.append('%s_%s' % (lemma, pos)) continue if not named: named = True mem_case = morph['Case'] mem_number = morph['Number'] if morph['Case'] == mem_case and morph['Number'] == mem_number: memory.append(lemma) if 'SpacesAfter=\\n' in misc or 'SpacesAfter=\s\\n' in misc: named = False past_lemma = '::'.join(memory) memory = [] tagged_propn.append(past_lemma + '_PROPN ') else: named = False past_lemma = '::'.join(memory) memory = [] tagged_propn.append(past_lemma + '_PROPN ') tagged_propn.append('%s_%s' % (lemma, pos)) else: if not named: if pos == 'NUM' and token.isdigit(): # Заменяем числа на xxxxx той же длины lemma = num_replace(token) tagged_propn.append('%s_%s' % (lemma, pos)) else: named = False past_lemma = '::'.join(memory) memory = [] tagged_propn.append(past_lemma + '_PROPN ') tagged_propn.append('%s_%s' % (lemma, pos)) if not keep_punct: tagged_propn = [word for word in tagged_propn if word.split('_')[1] != 'PUNCT'] if not keep_pos: tagged_propn = [word.split('_')[0] for word in tagged_propn] return 
tagged_propn """ Explanation: Приступим к собственно предобработке текста. Её можно настроить для своей задачи. Так, например, вы можете не использовать части речи или оставить пунктуацию. Если частеречные тэги вам не нужны, в функции ниже выставьте keep_pos=False. Если вам необходимо сохранить знаки пунктуации, выставьте keep_punct=True. End of explanation """ from ufal.udpipe import Model, Pipeline import os import re import sys def tag_ud(text='Текст нужно передать функции в виде строки!', modelfile='udpipe_syntagrus.model'): udpipe_model_url = 'https://rusvectores.org/static/models/udpipe_syntagrus.model' udpipe_filename = udpipe_model_url.split('/')[-1] if not os.path.isfile(modelfile): print('UDPipe model not found. Downloading...', file=sys.stderr) wget.download(udpipe_model_url) print('\nLoading the model...', file=sys.stderr) model = Model.load(modelfile) process_pipeline = Pipeline(model, 'tokenize', Pipeline.DEFAULT, Pipeline.DEFAULT, 'conllu') print('Processing input...', file=sys.stderr) for line in text: # line = unify_sym(line.strip()) # здесь могла бы быть ваша функция очистки текста output = process(process_pipeline, text=line) print(' '.join(output)) text = open(textfile, 'r', encoding='utf-8').read() tag_ud(text=text, modelfile=modelfile) """ Explanation: Загружаем модель UDPipe, читаем текстовый файл и обрабатываем его. В файле должен содержаться необработанный текст (одно предложение на строку или один абзац на строку). Этот текст токенизируется, лемматизируется и размечается по частям речи с использованием UDPipe. На выход подаётся последовательность разделенных пробелами лемм с частями речи ("зеленый_NOUN трамвай_NOUN"). 
End of explanation """ from pymystem3 import Mystem def tag_mystem(text='Текст нужно передать функции в виде строки!'): m = Mystem() processed = m.analyze(text) tagged = [] for w in processed: try: lemma = w["analysis"][0]["lex"].lower().strip() pos = w["analysis"][0]["gr"].split(',')[0] pos = pos.split('=')[0].strip() tagged.append(lemma.lower() + '_' + pos) except KeyError: continue # я здесь пропускаю знаки препинания, но вы можете поступить по-другому return tagged processed_mystem = tag_mystem(text=text) print(processed_mystem[:10]) """ Explanation: UDPipe позволяет нам распознавать имена собственные, и несколько идущих подряд имен мы можем склеить в одно. Вместо UDPipe возможно использовать и Mystem (удобнее использовать pymystem для Python), хотя Mystem имена собственные не распознает. Для того чтобы работать с последними моделями RusVectōrēs, понадобится сконвертировать тэги Mystem в UPOS. Кроме того, в данный момент мы не используем Mystem в своей работе, поэтому его совместимость с новыми моделями не гарантируется. Сначала нужно установить библиотеку pymystem: pip install pymystem3 Затем импортируем эту библиотеку и анализируем с её помощью текст: End of explanation """ import requests import re url = 'https://raw.githubusercontent.com/akutuzov/universal-pos-tags/4653e8a9154e93fe2f417c7fdb7a357b7d6ce333/ru-rnc.map' mapping = {} r = requests.get(url, stream=True) for pair in r.text.split('\n'): pair = re.sub('\s+', ' ', pair, flags=re.U).split(' ') if len(pair) > 1: mapping[pair[0]] = pair[1] print(mapping) """ Explanation: Как видно, тэги Mystem отличаются от Universal POS-tags, поэтому следующим шагом должна быть их конвертация в Universal Tags. 
You can use this conversion table:
End of explanation
"""
import requests
import re

url = 'https://raw.githubusercontent.com/akutuzov/universal-pos-tags/4653e8a9154e93fe2f417c7fdb7a357b7d6ce333/ru-rnc.map'

mapping = {}
r = requests.get(url, stream=True)
for pair in r.text.split('\n'):
    pair = re.sub('\s+', ' ', pair, flags=re.U).split(' ')
    if len(pair) > 1:
        mapping[pair[0]] = pair[1]
print(mapping)
"""
Explanation: Now let's improve our tag_mystem function:
End of explanation
"""
def tag_mystem(text='Текст нужно передать функции в виде строки!'):
    m = Mystem()
    processed = m.analyze(text)
    tagged = []
    for w in processed:
        try:
            lemma = w["analysis"][0]["lex"].lower().strip()
            pos = w["analysis"][0]["gr"].split(',')[0]
            pos = pos.split('=')[0].strip()
            if pos in mapping:
                tagged.append(lemma + '_' + mapping[pos])  # here we convert the tags
            else:
                tagged.append(lemma + '_X')  # in case we meet a tag that is not in the mapping
        except KeyError:
            continue  # I skip punctuation here, but you can do it differently
    return tagged

processed_mystem = tag_mystem(text=text)
print(processed_mystem[:10])
"""
Explanation: Now the part-of-speech tags in the text analysed with Mystem are comparable to the Universal POS tags (although the analysis itself turned out differently)! If needed, you can also apply to Mystem exactly the same text preprocessing that was described above for UDPipe. You can also remove stop words, for example using the stop-word list from the NLTK library, or on the grounds that a word was recognized as a functional part of speech (this is exactly how filtering was done for the new models).
So, in this part of the tutorial we have learned to go from "raw" text to lemmatized text with part-of-speech tags, which can already be fed to a model! Now let's get down to working with the vector models themselves.
2. Working with vector models using the Gensim library
Before working with RusVectōrēs directly, we will look at how to work with distributional models using existing libraries. There are various libraries for working with word embeddings: gensim, keras, tensorflow, pytorch. 
We will work with the gensim library, since it is the one our server is built on. Gensim was originally a library for topic modeling of texts; however, besides various topic modeling algorithms it also contains Python implementations of the algorithms from the word2vec toolkit (which was originally written in C++).
First of all, if gensim is not installed on your computer, you need to install it:
pip install gensim
Gensim is updated regularly, so it is worth making sure you have the latest version installed, and upgrading the library if necessary:
pip install gensim --upgrade
or
pip install gensim -U
This tutorial was prepared with gensim version 3.7.0.
Since training and loading models can take a long time, it is sometimes useful to keep a log of events. The standard Python logging library is used for this.
End of explanation
"""
import sys
import gensim, logging

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Working with a model
For your own needs and experiments it can be useful to train a model yourself on the right data and with the right parameters. But for general purposes, models already exist, both for Russian and for English. Models for Russian can be downloaded here - https://rusvectores.org/ru/models/
There are several formats in which models can be stored. First, the data can be stored in the native word2vec format, in which case the model can be binary or non-binary. To load a model in the word2vec format, the KeyedVectors class (which holds most of the functions related to distributional models) provides the load_word2vec_format function, and whether the model is binary can be specified with the binary argument (there is an example below). 
Besides that, a model can also be stored in gensim's own format, for which there is the Word2Vec class with its load function. Since models come in different formats, different loading functions have been written for them; it is useful to take this into account in your script. Our code determines the model type from its file extension, but in general a model file can be named anything; there are no strict restrictions on the extension.
For the new models we have switched to downloads using the Nordic Language Processing Laboratory infrastructure. In practice this means that when you click on a model you now download a zip archive with a unique numeric identifier (for example, 180.zip). The archive always contains a meta.json file with structured, standardized information about the model and the corpus it was trained on. The word2vec models are stored in the archives in both word2vec formats at once: binary model.bin (convenient for fast loading) and plain-text model.txt (convenient for human inspection).
Let's download the newest model for Russian, built on the Russian National Corpus (НКРЯ), and load it into memory. For regular models you do not need to unpack the downloaded archive, since its contents can be read with a special instruction:
End of explanation
"""
import zipfile
model_url = 'http://vectors.nlpl.eu/repository/11/180.zip'
m = wget.download(model_url)
model_file = model_url.split('/')[-1]
with zipfile.ZipFile(model_file, 'r') as archive:
    stream = archive.open('model.bin')
    model = gensim.models.KeyedVectors.load_word2vec_format(stream, binary=True)
"""
Explanation: fastText models in the new gensim version are loaded with the following command:
gensim.models.KeyedVectors.load("model.model")
Before loading, the downloaded fastText archive must be unpacked. Identifying the file needed for loading is easy: most often it is the file with the .model extension (the other files from the archive must be kept in the same folder).
Let's return to our model built on the Russian National Corpus. Say we are interested in the following words (an example for Russian):
End of explanation
"""
words = ['день_NOUN', 'ночь_NOUN', 'человек_NOUN', 'семантика_NOUN', 'студент_NOUN', 'студент_ADJ']
"""
Explanation: Let's ask the model for the 10 nearest neighbours of each word, together with the cosine similarity score for each:
End of explanation
"""
for word in words:
    # is the word in the model? It might be absent
    if word in model:
        print(word)
        # print the 10 nearest neighbours of the word:
        for i in model.most_similar(positive=[word], topn=10):
            # the word + its cosine similarity score
            print(i[0], i[1])
        print('\n')
    else:
        # Alas!
        print(word + ' is not present in the model')
"""
Explanation: Computing the cosine similarity of a pair of words:
End of explanation
"""
print(model.similarity('человек_NOUN', 'обезьяна_NOUN'))
"""
Explanation: Find the odd one out!
End of explanation
"""
print(model.doesnt_match('яблоко_NOUN груша_NOUN виноград_NOUN банан_NOUN лимон_NOUN картофель_NOUN'.split()))
"""
Explanation: Solve the analogy!
End of explanation
"""
print(model.most_similar(positive=['пицца_NOUN', 'россия_NOUN'], negative=['италия_NOUN'])[0][0])
"""
Explanation: 3. Using the RusVectōrēs service API
Besides using a model locally, you can also query RusVectōrēs through its API, so as to use our models automatically, without downloading them (say, from your scripts). Our API has two functions:
getting the list of semantically close words for a given word in a given model;
computing the cosine similarity score between a pair of words in a given model. 
To get the list of semantically close words, you make a GET request to an address of the following form:
https://rusvectores.org/MODEL/WORD/api/FORMAT/
Let's go through the components of this request.
MODEL - the identifier of the model we want to query. The identifiers can be found in the table with all the models of our service.
WORD - the word whose neighbours we want to find. Keep in mind that the part-of-speech tag is needed here as well (strictly speaking, you can send requests without it, but then the server will determine the parts of speech of your words automatically, and not always correctly).
FORMAT - the output data format; at the moment this is csv (tab-separated) or json.
Let's find the semantic neighbours of the first words of our story. First we create variables with the parameters of our request.
End of explanation
"""
def api_similarity(m, w1, w2):
    url = '/'.join(['https://rusvectores.org', m, w1 + '__' + w2, 'api', 'similarity/'])
    r = requests.get(url, stream=True)
    return r.text.split('\t')[0]

MODEL = 'tayga_upos_skipgram_300_2_2019'
api_similarity(MODEL, WORD, 'мех_NOUN')
"""
Explanation: By default the API reports 10 nearest neighbours; at the moment there is no way to change this number. Now let's look at the second function available in the API: computing the similarity score of two words. Requests to it must be made in the following form:
https://rusvectores.org/MODEL/WORD1__WORD2/api/similarity/
The variables here are MODEL (the identifier of the model we are querying) and two words (together with their part-of-speech tags). Note that the words are separated by two underscores.
End of explanation
"""
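The two URL schemes documented above can be wrapped in a tiny self-contained client; the helper names are illustrative, and only the URL construction (not a live request) is exercised here:

```python
from urllib.request import urlopen

def neighbours_url(model, word, fmt):
    # scheme documented above: https://rusvectores.org/MODEL/WORD/api/FORMAT/
    return 'https://rusvectores.org/%s/%s/api/%s/' % (model, word, fmt)

def similarity_url(model, word1, word2):
    # the two words are separated by two underscores, as documented above
    return 'https://rusvectores.org/%s/%s__%s/api/similarity/' % (model, word1, word2)

def fetch(url):
    # network call; returns the raw response body as text
    with urlopen(url) as r:
        return r.read().decode('utf-8')

print(similarity_url('tayga_upos_skipgram_300_2_2019', 'мех_NOUN', 'шуба_NOUN'))
```

fetch() would then be applied to either URL to retrieve the csv/json payload described in the text.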
femtotrader/barchart-ondemand-client-python
notebooks/example.ipynb
bsd-3-clause
#barchart.API_KEY = 'YOURAPIKEY' """ Explanation: API key setup End of explanation """ import datetime import requests_cache session = requests_cache.CachedSession(cache_name='cache', backend='sqlite', expire_after=datetime.timedelta(days=1)) #session = None # pass a None session to avoid caching queries """ Explanation: You can also set an environment variable using Bash export BARCHART_API_KEY="YOURAPIKEY" Cache queries requests_cache is optional use it to have a cache mechanism a session parameter can be pass to functions End of explanation """ symbol = "^EURUSD" quote = getQuote(symbol, session=session) quote # quote is a dict """ Explanation: getQuote with ONE symbol End of explanation """ symbols = ["ZC*1", "IBM", "GOOGL" , "^EURUSD"] quotes = getQuote(symbols, session=session) quotes # quotes is a Pandas DataFrame #print(quotes.dtypes) #print(type(quotes['serverTimestamp'][0])) # should be a pandas.tslib.Timestamp CONFIG.output_pandas = False quotes = getQuote(symbols, session=session) print(quotes) # quotes is a Pandas DataFrame CONFIG.output_pandas = True """ Explanation: getQuote with SEVERAL symbols End of explanation """ symbol = 'IBM' startDate = datetime.date(year=2014, month=9, day=28) history = getHistory(symbol, typ='daily', startDate=startDate, session=session) history #print(history.dtypes) #print(type(history['timestamp'][0])) # should be a pandas.tslib.Timestamp #print(type(history.index[0])) # should be a pandas.tslib.Timestamp #print(type(history['tradingDay'][0])) # should be a pandas.tslib.Timestamp """ Explanation: getHistory with ONE symbol End of explanation """ symbols = ["ZC*1", "IBM", "GOOGL" , "^EURUSD"] histories = getHistory(symbols, typ='daily', startDate=startDate, session=session) histories #print(histories.dtypes) #print(type(histories.index[0])) # should be a pandas.tslib.Timestamp #print(type(histories['timestamp'][0])) # should be a pandas.tslib.Timestamp #histories.loc[:, :, "IBM"] #?? 
""" Explanation: getHistory with SEVERAL symbols End of explanation """
aleph314/K2
Foundations/Python CS/Activity 08.ipynb
gpl-3.0
def function_plot(ω0=1, ω1=1):
    # Define x axis range
    x = np.linspace(-4*np.pi, 4*np.pi, 100)
    # Add labels to x and y axis
    plt.xlabel('$x$')
    plt.ylabel('$\exp(x/10) \cdot \sin(\omega_{1}x) \cdot \cos(\omega_{0}x)$')
    # Limit x axis between start and end point of the range
    plt.xlim(x[0], x[-1])
    # Add a title
    plt.title('Plot of $f$ for $ω_0 = {}$ and $ω_1 = {}$'.format(ω0, ω1))
    # Plot the function
    plt.plot(x, np.exp(x/10) * np.sin(ω1*x) * np.cos(ω0*x))

function_plot()
"""
Explanation: Exercise 08.1 (function plotting)
Consider the function
$$ f(x) = e^{x/10} \sin(\omega_{1}x)\cos(\omega_{0}x) $$
from $x = -4\pi$ to $x = 4\pi$.
Plot the function when $\omega_{0} = \omega_{1} = 1$. Label the axes.
Create an interactive plot with sliders for $\omega_{0}$ and $\omega_{1}$, varying from 0 to 2.
Plot for $\omega_0 = \omega_1 = 1$:
End of explanation
"""
# Add sliders for the two parameters
interact(function_plot, ω0=(0, 2, 0.25), ω1=(0, 2, 0.25));
"""
Explanation: Plot with sliders for $\omega_0$ and $\omega_1$ from 0 to 2 with steps of 0.25:
End of explanation
"""
# Define x axis range using an even number of points to avoid division by 0
x = np.linspace(-6*np.pi, 6*np.pi, 100)
# Add labels to x and y axis
plt.xlabel('$x$')
plt.ylabel('$\sin(x)/x$')
# Limit x axis between start and end point of the range
plt.xlim(x[0], x[-1])
# Plot the function
plt.plot(x, np.sin(x)/x);
"""
Explanation: Exercise 08.2 (multiple function plotting)
Plot the function
$$ f(x) = \frac{\sin(x)}{x} $$
from $x = -6\pi$ to $x = 6\pi$. Think carefully about which $x$ values you use when $x$ is close to zero.
Add to the above plot the graph of $1/ \left| x \right|$, and limit the range of the $y$ axis to 1 using plt.ylim. (Hint: use np.abs(x) to return the absolute values of each component of a NumPy array x.) 
Plot of $\sin(x) / x$ between $-6\pi$ and $6\pi$: End of explanation """ # Define x axis range using an even number of points to avoid division by 0 x = np.linspace(-6*np.pi, 6*np.pi, 100) # Add label to x axis plt.xlabel('$x$') # Limit x axis between start and end point of the range plt.xlim(x[0], x[-1]) # Limit y axis between -0.3 and 1 plt.ylim(-0.3, 1) # Plot the first function plt.plot(x, np.sin(x)/x, label='$\sin(x)/x$') # Plot the second function on the same plot plt.plot(x, 1/np.abs(x), label='$1/|x|$') # Add a legend plt.legend(); """ Explanation: Plot of $\sin(x) / x$ and $1/\left| x \right|$ between $-6\pi$ and $6\pi$: End of explanation """ def demographics_plot(year=2011, grCC=0, grEX=0, grFL=0, grHS=0, grSC=0): # Initialize district tuple, population and annual growth arrays district = ('Cambridge City', 'East Cambridgeshire', 'Fenland', 'Huntingdonshire', 'South Cambridgeshire') population = np.array((123900, 83800, 95300, 169500, 148800)) annual_growth = np.array((grCC, grEX, grFL, grHS, grSC)) # Specify slice colours colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', 'red'] # Explode the 1st slice (Cambridge City) explode = (0.1, 0.0, 0, 0, 0) # Set figure size plt.figure(figsize=(10,10)) # Plot pie chart using a linear annual growth in population plt.pie(population * (1 + (year-2011) * annual_growth / 100), explode=explode, labels=district, colors=colors, autopct='%1.1f%%', shadow=True, startangle=90) # Add title plt.title('{} population distribution in Cambridgeshire'.format(year)) # Add sliders for the annual growth of each district interact(demographics_plot, year=(2011, 2021, 1), grCC=(0, 10, 0.1), grEX=(0, 10, 0.1), grFL=(0, 10, 0.1), grHS=(0, 10, 0.1), grSC=(0, 10, 0.1)); """ Explanation: Exercise 08.3 (demographics and interactive plotting) A county planning body has requested an interactive tool to visualise the population distribution in Cambridgeshire (by district) from 2011 to 2021 for different population growth rate 
scenarios in each district. It can be assumed that:
the growth rates are constant in each district;
the growth rate will not be negative in any district; and
the annual growth rate in any one district will not exceed 10%.
Building on the pie chart example with population data in the body of the notebook, create an interactive plot with:
A slider for the year (from 2011 to 2021); and
Sliders for the annual population growth for each district (in percentage), with an initial value of zero for each district.
End of explanation
"""
import json
import requests
"""
Explanation: Exercise 08.4 (crime reports by location)
Background
Your task is to produce a crime report data plot in the neighborhood of your college, by reported crime category. It will be interesting to see how this varies between colleges!
We can get crime data in the UK from the police data systems using what is known as a REST API, and turn the data into a list of Python dictionaries. Each entry in the list is a police report (an entry is a Python dictionary detailing the report).
The first step is to import the modules we will be using:
End of explanation
"""
# A triangle that includes most of the Cambridge city centre
# (lat, long) for three vertices of a triangle (no spaces!)
p0 = '52.211546,0.116465'
p1 = '52.203510,0.145500'
p2 = '52.189730,0.113050'

# year-month of interest
year_month = '2016-05'

# Construct request URL string using the above data
url = 'https://data.police.uk/api/crimes-street/all-crime?poly=' + p0 + ':' + p1 + ':' + p2 + '&date=' + year_month

# Fetch data from https://data.police.uk
r = requests.get(url)
"""
Explanation: The service https://data.police.uk has an interface where we can add specific strings to the URL (web address) to define what data we are interested in, and the police server will return our requested data. 
The format is https://data.police.uk/api/crimes-street/all-crime?poly=[LAT0],[LON0]:[LAT1],[LON1]:[LAT2],[LON2]&date=YYYY-MM. This returns crime reports in the triangle given by the three geographic coordinate points (latitude0, longitude0), (latitude1, longitude1) and (latitude2, longitude2), for the month YYYY-MM. Below we create this URL string to include a large part of the Cambridge city centre. You can modify this for your own college or other area of interest (Google Maps is a handy way to get the geographic coordinates). End of explanation """ crime_data = r.json() """ Explanation: The following converts the fetched data into a list of dictionaries: End of explanation """ import pprint if crime_data: pprint.pprint(crime_data[0]) """ Explanation: To get an idea of how the data is arranged, we can look at the first report in the list. To make the displayed data easier to read, we use the 'pretty print' module pprint. End of explanation """ categories = ('anti-social-behaviour', 'bicycle-theft', 'burglary', 'criminal-damage-arson', \ 'drugs', 'other-crime', 'other-theft', 'public-order', 'shoplifting', \ 'theft-from-the-person', 'vehicle-crime', 'violent-crime') """ Explanation: Task Produce a bar chart of the number of reports in different categories.
The categories are: End of explanation """ def get_crime_data(year_month, p0='52.211546,0.116465', p1='52.203510,0.145500', p2='52.189730,0.113050'): "Get the crime data for a given year and month (in the format YYYY-MM) and coordinates" # Construct request URL string using the above data url = 'https://data.police.uk/api/crimes-street/all-crime?poly=' + p0 + ':' + p1 + ':' + p2 + '&date=' + year_month # Fetch data from https://data.police.uk r = requests.get(url) return r.json() def crime_plot(year_month, p0='52.211546,0.116465', p1='52.203510,0.145500', p2='52.189730,0.113050'): "Plot the crime data on a barplot for a given year and month (in the format YYYY-MM) and coordinates" # Get the crime data crime_data = get_crime_data(year_month, p0, p1, p2) # Initialize a dict for crime category frequencies categories_freq = {} # Count the frequencies for crime in crime_data: curr_category = crime['category'] if curr_category in categories_freq: categories_freq[curr_category] += 1 else: categories_freq[curr_category] = 1 # Define values for x axis ticks x_values = np.arange(len(categories_freq)) # Create barplot plt.bar(x_values, categories_freq.values(), align='center') # Add labels to x axis ticks plt.xticks(x_values, categories_freq.keys(), rotation=90) # Add axis labels #plt.xlabel('Crime Category') plt.ylabel('Number of Crimes') # Add title plt.title('Crimes in {}'.format(year_month)) # Test for a month crime_plot('2017-01') """ Explanation: This function retrieves data from the UK police URL and returns it as JSON; the default parameters for the coordinates are those of Cambridge used above: End of explanation """ categories_freq = {} """ Explanation: Run your program for different parts of Cambridge, starting with the area around your college, and for different months and years.
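The URL assembly inside get_crime_data is easy to get subtly wrong (the ':' separators, no spaces, the '&date=' suffix), so it can help to factor it into a tiny helper that can be checked without any network access. This is a sketch; the name build_crimes_url is ours, not part of the exercise:

```python
def build_crimes_url(points, year_month):
    """Build a data.police.uk street-crime query URL.

    points: iterable of 'LAT,LON' strings (no spaces); year_month: 'YYYY-MM'.
    """
    base = 'https://data.police.uk/api/crimes-street/all-crime'
    return base + '?poly=' + ':'.join(points) + '&date=' + year_month

# Reproduces the Cambridge URL constructed earlier in the notebook
url = build_crimes_url(['52.211546,0.116465', '52.203510,0.145500', '52.189730,0.113050'],
                       '2016-05')
```

get_crime_data could then call requests.get(build_crimes_url(...)), and the string logic stays testable on its own.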
Hints: Create an empty dictionary, which will eventually map the report category to the number of incidents: End of explanation """ # Iterate over all reports for report in crime_data: # Get category type from the report category = report['category'] if category in categories_freq: # Increment counter here pass # This can be removed once this 'if' block has a body else: # Add category to dictionary here pass # This can be removed once this 'else' block has a body """ Explanation: Iterate over all reports in the list, and extract the category string from each report. If the category string (the 'key') is already in the dictionary increment the associated counter. Otherwise add the key to the dictionary, and associate the value 1. End of explanation """ # Initialize the starting year and month and the number of months to retrieve start_year, start_month, num_months = 2016, 1, 6 # Initialize an empty list for all crimes all_crimes = [] #crime_freq = {} for unused in range(num_months): # For every month in range get crime data (zero-padding the month to match the YYYY-MM format) crime_data = get_crime_data('{}-{:02d}'.format(start_year, start_month)) # Append every crime retrieved to the list of all crimes for crime in crime_data: all_crimes.append([crime['id'], crime['month'], crime['category']]) # Update month and year start_month += 1 if start_month > 12: start_month = 1 start_year += 1 """ Explanation: When adding the tick labels (crime categories), it may be necessary to rotate the labels, e.g.: python plt.xticks(x_pos, categories, rotation='vertical') Extensions (optional) Probe the retrieved data to build a set of all crime categories in the data set. Explore the temporal (time) aspect of the data. Think of ways to represent the change in reported incident types over time.
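The tallying described in the hints can also be done with the standard library's collections.Counter, which handles the "is the key already present?" bookkeeping for us. A sketch on fabricated reports (the dictionaries below are made up; real reports carry many more fields):

```python
from collections import Counter

# Fabricated reports that mimic the 'category' field of the real API output
reports = [{'category': 'burglary'},
           {'category': 'drugs'},
           {'category': 'burglary'}]

categories_freq = Counter(report['category'] for report in reports)
```

Counter behaves like a dict, so it can be passed straight to the plotting code.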
Create a list containing all crimes in a given period: End of explanation """ # Initialize figure setting size plt.figure(figsize=(20,20)) # for each category in the above list create a dict of frequencies and plot it on a line for category in categories: category_freq = {} for crime in all_crimes: if crime[2] == category: if crime[1] in category_freq: category_freq[crime[1]] += 1 else: category_freq[crime[1]] = 1 # Define values for x axis ticks x_values = np.arange(len(category_freq)) # Create a plot using dict values (not sure if they are in the right order here...) plt.plot(x_values, list(category_freq.values()), '-o', label=category) # Add x axis values labels using keys plt.xticks(x_values, category_freq.keys(), rotation=90) # Add legend, title and labels plt.legend() plt.title('Crime categories in time') plt.xlabel('Month') plt.ylabel('Number of Crimes'); """ Explanation: Create a plot to represent crimes by year-month and category (not sure about order of the data in the dictionary): End of explanation """
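The month and year rollover in the data-gathering loop above is the kind of logic worth isolating, so the December wrap-around can be checked on its own. A sketch (the helper name is ours):

```python
def next_year_month(year, month):
    """Return the (year, month) pair one calendar month later."""
    if month == 12:
        return year + 1, 1
    return year, month + 1

# Stepping over a year boundary
year, month = next_year_month(2016, 12)
```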
Source: sorig/shogun, doc/ipython-notebooks/neuralnets/neuralnets_digits.ipynb (license: bsd-3-clause)
%pylab inline %matplotlib inline import os SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data') from scipy.io import loadmat from shogun import features, MulticlassLabels, Math # load the dataset dataset = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat')) Xall = dataset['data'] # the usps dataset has the digits labeled from 1 to 10 # we'll subtract 1 to make them in the 0-9 range instead Yall = np.array(dataset['label'].squeeze(), dtype=np.double)-1 # 1000 examples for training Xtrain = features(Xall[:,0:1000]) Ytrain = MulticlassLabels(Yall[0:1000]) # 4000 examples for validation Xval = features(Xall[:,1001:5001]) Yval = MulticlassLabels(Yall[1001:5001]) # the rest for testing Xtest = features(Xall[:,5002:-1]) Ytest = MulticlassLabels(Yall[5002:-1]) # initialize the random number generator with a fixed seed, for repeatability Math.init_random(10) """ Explanation: Neural Nets for Digit Classification by Khaled Nasr as a part of a <a href="https://www.google-melange.com/gsoc/project/details/google/gsoc2014/khalednasr92/5657382461898752">GSoC 2014 project</a> mentored by Theofanis Karaletsos and Sergey Lisitsyn This notebook illustrates how to use the NeuralNets module to teach a neural network to recognize digits. It also explores the different optimization and regularization methods supported by the module. Convolutional neural networks are also discussed. Introduction An Artificial Neural Network is a machine learning model that is inspired by the way biological nervous systems, such as the brain, process information. The building block of neural networks is called a neuron. All a neuron does is take a weighted sum of its inputs and pass it through some non-linear function (activation function) to produce its output. A (feed-forward) neural network is a bunch of neurons arranged in layers, where each neuron in layer i takes its input from all the neurons in layer i-1. For more information on how neural networks work, follow this link. 
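The description above (a weighted sum of the inputs passed through a non-linear activation) can be written out directly in a few lines. A sketch of a single logistic neuron with made-up weights, independent of Shogun:

```python
import math

def logistic_neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, squashed by the logistic function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights and zero bias the neuron outputs exactly 0.5
output = logistic_neuron([1.0, -2.0], [0.0, 0.0], 0.0)
```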
In this notebook, we'll look at how a neural network can be used to recognize digits. We'll train the network on the USPS dataset of handwritten digits. We'll start by loading the data and dividing it into a training set, a validation set, and a test set. The USPS dataset has 9298 examples of handwritten digits. We'll intentionally use just a small portion (1000 examples) of the dataset for training. This is to keep training time small and to illustrate the effects of different regularization methods. End of explanation """ from shogun import NeuralNetwork, NeuralInputLayer, NeuralLogisticLayer, NeuralSoftmaxLayer from shogun import DynamicObjectArray # setup the layers layers = DynamicObjectArray() layers.append_element(NeuralInputLayer(256)) # input layer, 256 neurons layers.append_element(NeuralLogisticLayer(256)) # first hidden layer, 256 neurons layers.append_element(NeuralLogisticLayer(128)) # second hidden layer, 128 neurons layers.append_element(NeuralSoftmaxLayer(10)) # output layer, 10 neurons # create the networks net_no_reg = NeuralNetwork(layers) net_no_reg.quick_connect() net_no_reg.initialize_neural_network() net_l2 = NeuralNetwork(layers) net_l2.quick_connect() net_l2.initialize_neural_network() net_l1 = NeuralNetwork(layers) net_l1.quick_connect() net_l1.initialize_neural_network() net_dropout = NeuralNetwork(layers) net_dropout.quick_connect() net_dropout.initialize_neural_network() """ Explanation: Creating the network To create a neural network in shogun, we'll first create an instance of NeuralNetwork and then initialize it by telling it how many inputs it has and what type of layers it contains. To specify the layers of the network a DynamicObjectArray is used. The array contains instances of NeuralLayer-based classes that determine the type of neurons each layer consists of. Some of the supported layer types are: NeuralLinearLayer, NeuralLogisticLayer and NeuralSoftmaxLayer.
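For reference, the softmax output layer mentioned above turns the 10 raw output activations into a probability distribution over the digit classes. A minimal pure-Python sketch of that computation (shifting by the maximum is the usual numerical-stability trick):

```python
import math

def softmax(activations):
    """Exponentiate (shifted by the max for stability) and normalise to sum to 1."""
    m = max(activations)
    exps = [math.exp(a - m) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
```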
We'll create a feed-forward, fully connected (every neuron is connected to all neurons in the layer below) neural network with 2 logistic hidden layers and a softmax output layer. The network will have 256 inputs, one for each pixel (16*16 image). The first hidden layer will have 256 neurons, the second will have 128 neurons, and the output layer will have 10 neurons, one for each digit class. Note that we're using a big network, compared with the size of the training set. This is to emphasize the effects of different regularization methods. We'll try training the network with: No regularization L2 regularization L1 regularization Dropout regularization Therefore, we'll create 4 versions of the network, train each one of them differently, and then compare the results on the validation set. End of explanation """ # import networkx, install if necessary try: import networkx as nx except ImportError: import pip pip.main(['install', '--user', 'networkx']) import networkx as nx G = nx.DiGraph() pos = {} for i in range(8): pos['X'+str(i)] = (i,0) # 8 neurons in the input layer pos['H'+str(i)] = (i,1) # 8 neurons in the first hidden layer for j in range(8): G.add_edge('X'+str(j),'H'+str(i)) if i<4: pos['U'+str(i)] = (i+2,2) # 4 neurons in the second hidden layer for j in range(8): G.add_edge('H'+str(j),'U'+str(i)) if i<6: pos['Y'+str(i)] = (i+1,3) # 6 neurons in the output layer for j in range(4): G.add_edge('U'+str(j),'Y'+str(i)) nx.draw(G, pos, node_color='y', node_size=750) """ Explanation: We can also visualize what the network would look like. To do that we'll draw a smaller network using networkx. The network we'll draw will have 8 inputs (labeled X), 8 neurons in the first hidden layer (labeled H), 4 neurons in the second hidden layer (labeled U), and 6 neurons in the output layer (labeled Y). Each neuron will be connected to all neurons in the layer that precedes it. 
End of explanation """ from shogun import MulticlassAccuracy def compute_accuracy(net, X, Y): predictions = net.apply_multiclass(X) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(predictions, Y) return accuracy*100 """ Explanation: Training NeuralNetwork supports two methods for training: LBFGS (default) and mini-batch gradient descent. LBFGS is a full-batch optimization methods, it looks at the entire training set each time before it changes the network's parameters. This makes it slow with large datasets. However, it works very well with small/medium size datasets and is very easy to use as it requires no parameter tuning. Mini-batch Gradient Descent looks at only a small portion of the training set (a mini-batch) before each step, which it makes it suitable for large datasets. However, it's a bit harder to use than LBFGS because it requires some tuning for its parameters (learning rate, learning rate decay,..) Training in NeuralNetwork stops when: Number of epochs (iterations over the entire training set) exceeds max_num_epochs The (percentage) difference in error between the current and previous iterations is smaller than epsilon, i.e the error is not anymore being reduced by training To see all the options supported for training, check the documentation We'll first write a small function to calculate the classification accuracy on the validation set, so that we can compare different models: End of explanation """ net_no_reg.put('epsilon', 1e-6) net_no_reg.put('max_num_epochs', 600) # uncomment this line to allow the training progress to be printed on the console #from shogun import MSG_INFO; net_no_reg.io.put('loglevel', MSG_INFO) net_no_reg.put('labels', Ytrain) net_no_reg.train(Xtrain) # this might take a while, depending on your machine # compute accuracy on the validation set print("Without regularization, accuracy on the validation set =", compute_accuracy(net_no_reg, Xval, Yval), "%") """ Explanation: Training without regularization We'll 
start by training the first network without regularization using LBFGS optimization. Note that LBFGS is suitable because we're using a small dataset. End of explanation """ # turn on L2 regularization net_l2.put('l2_coefficient', 3e-4) net_l2.put('epsilon', 1e-6) net_l2.put('max_num_epochs', 600) net_l2.put('labels', Ytrain) net_l2.train(Xtrain) # this might take a while, depending on your machine # compute accuracy on the validation set print("With L2 regularization, accuracy on the validation set =", compute_accuracy(net_l2, Xval, Yval), "%") """ Explanation: Training with L2 regularization We'll train another network, but with L2 regularization. This type of regularization attempts to prevent overfitting by penalizing large weights. This is done by adding $\frac{1}{2} \lambda \Vert W \Vert_2^2$ to the optimization objective that the network tries to minimize, where $\lambda$ is the regularization coefficient. End of explanation """ # turn on L1 regularization net_l1.put('l1_coefficient', 3e-5) net_l1.put('epsilon', 1e-6) net_l1.put('max_num_epochs', 600) net_l1.put('labels', Ytrain) net_l1.train(Xtrain) # this might take a while, depending on your machine # compute accuracy on the validation set print("With L1 regularization, accuracy on the validation set =", compute_accuracy(net_l1, Xval, Yval), "%") """ Explanation: Training with L1 regularization We'll now try L1 regularization. It works by adding $\lambda \Vert W \Vert_1$ to the optimization objective. This has the effect of penalizing all non-zero weights, therefore pushing all the weights to be close to 0.
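One way to see why L1 pushes weights to exactly zero while L2 merely shrinks them: the proximal (closed-form) update for an L1 penalty is soft-thresholding, which clips small weights to zero. This is a general illustration of the two penalties, not code that Shogun exposes:

```python
def soft_threshold(w, lam):
    """Proximal step for lam * |w|: shrink towards zero, zeroing small weights."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def l2_shrink(w, lam):
    """The corresponding L2 step only rescales the weight and never zeroes it."""
    return w / (1.0 + lam)

small = soft_threshold(0.05, 0.1)  # clipped to exactly 0.0
large = soft_threshold(0.50, 0.1)  # shrunk by lam to 0.4
```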
End of explanation """ from shogun import NNOM_GRADIENT_DESCENT # set the dropout probability for neurons in the hidden layers net_dropout.put('dropout_hidden', 0.5) # set the dropout probability for the inputs net_dropout.put('dropout_input', 0.2) # limit the maximum incoming weights vector length for neurons net_dropout.put('max_norm', 15) net_dropout.put('epsilon', 1e-6) net_dropout.put('max_num_epochs', 600) # use gradient descent for optimization net_dropout.put('optimization_method', NNOM_GRADIENT_DESCENT) net_dropout.put('gd_learning_rate', 0.5) net_dropout.put('gd_mini_batch_size', 100) net_dropout.put('labels', Ytrain) net_dropout.train(Xtrain) # this might take a while, depending on your machine # compute accuracy on the validation set print("With dropout, accuracy on the validation set =", compute_accuracy(net_dropout, Xval, Yval), "%") """ Explanation: Training with dropout The idea behind dropout is very simple: randomly ignore some neurons during each training iteration. When used on neurons in the hidden layers, it has the effect of forcing each neuron to learn to extract features that are useful in any context, regardless of what the other hidden neurons in its layer decide to do. Dropout can also be used on the inputs to the network by randomly omitting a small fraction of them during each iteration. When using dropout, it's usually useful to limit the L2 norm of a neuron's incoming weights vector to some constant value. Due to the stochastic nature of dropout, LBFGS optimization doesn't work well with it, therefore we'll use mini-batch gradient descent instead.
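The dropout rule itself is tiny: each activation is kept with probability p and zeroed otherwise. The sketch below also rescales survivors by 1/p ("inverted dropout") so that no rescaling is needed at test time; that is the standard trick, not necessarily what Shogun does internally:

```python
import random

def dropout(activations, keep_prob, rng=random):
    """Zero each activation with probability 1 - keep_prob; scale survivors by 1/keep_prob."""
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

random.seed(0)
dropped = dropout([1.0] * 1000, keep_prob=0.5)
```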
End of explanation """ from shogun import NeuralConvolutionalLayer, CMAF_RECTIFIED_LINEAR # prepare the layers layers_conv = DynamicObjectArray() # input layer, a 16x16 single-channel image layers_conv.append_element(NeuralInputLayer(16,16,1)) # the first convolutional layer: 10 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 10 8x8 feature maps layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 10, 2, 2, 2, 2)) # the second convolutional layer: 15 feature maps, filters with radius 2 (5x5 filters) # and max-pooling in a 2x2 region: its output will be 15 4x4 feature maps layers_conv.append_element(NeuralConvolutionalLayer(CMAF_RECTIFIED_LINEAR, 15, 2, 2, 2, 2)) # output layer layers_conv.append_element(NeuralSoftmaxLayer(10)) # create and initialize the network net_conv = NeuralNetwork(layers_conv) net_conv.quick_connect() net_conv.initialize_neural_network() """ Explanation: Convolutional Neural Networks Now we'll look at a different type of network, namely convolutional neural networks. A convolutional net operates on two principles: Local connectivity: Convolutional nets work with inputs that have some sort of spatial structure, where the order of the input features matters, e.g. images. Local connectivity means that each neuron will be connected only to a small neighbourhood of pixels. Weight sharing: Different neurons use the same set of weights. This greatly reduces the number of free parameters, and therefore makes the optimization process easier and acts as a good regularizer. With that in mind, each layer in a convolutional network consists of a number of feature maps. Each feature map is produced by convolving a small filter with the layer's inputs, adding a bias, and then applying some non-linear activation function. The convolution operation satisfies the local connectivity and the weight sharing constraints.
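The convolution and 2x2 max-pooling that the layers above perform can be spelled out on plain Python lists for a single feature map. A toy sketch: it uses a 2x2 filter for brevity (the layers above use 5x5), ignores the bias and the activation function, and uses "valid" borders; Shogun's layer handles all of that internally:

```python
def convolve2d_valid(image, kernel):
    """Slide the shared kernel over the image (no padding), taking weighted sums."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def max_pool2x2(fmap):
    """Take the maximum over each non-overlapping 2x2 region."""
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

image = [[1, 0, 2, 1, 0],
         [0, 1, 0, 0, 1],
         [1, 0, 1, 2, 0],
         [0, 2, 0, 1, 1],
         [1, 0, 1, 0, 2]]
kernel = [[1, 0],
          [0, 1]]                              # one small filter, shared at every position
feature_map = convolve2d_valid(image, kernel)  # 4x4 map from a 5x5 image
pooled = max_pool2x2(feature_map)              # 2x2 after pooling
```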
Additionally, a max-pooling operation can be performed on each feature map by dividing it into small non-overlapping regions and taking the maximum over each region. This adds some translation invariance and improves the performance. Convolutional nets in Shogun are handled through the CNeuralNetwork class along with the CNeuralConvolutionalLayer class. A CNeuralConvolutionalLayer represents a convolutional layer with multiple feature maps, optional max-pooling, and support for different types of activation functions. Now we'll create a convolutional neural network with two convolutional layers and a softmax output layer. We'll use the rectified linear activation function for the convolutional layers: End of explanation """ # 50% dropout in the input layer net_conv.put('dropout_input', 0.5) # max-norm regularization net_conv.put('max_norm', 1.0) # set gradient descent parameters net_conv.put('optimization_method', NNOM_GRADIENT_DESCENT) net_conv.put('gd_learning_rate', 0.01) net_conv.put('gd_mini_batch_size', 100) net_conv.put('epsilon', 0.0) net_conv.put('max_num_epochs', 100) # start training net_conv.put('labels', Ytrain) net_conv.train(Xtrain) # compute accuracy on the validation set print("With a convolutional network, accuracy on the validation set =", compute_accuracy(net_conv, Xval, Yval), "%") """ Explanation: Now we can train the network. Like in the previous section, we'll use gradient descent with dropout and max-norm regularization: End of explanation """ print("Accuracy on the test set using the convolutional network =", compute_accuracy(net_conv, Xtest, Ytest), "%") """ Explanation: Evaluation According to the accuracy on the validation set, the convolutional network works best in our case.
Now we'll measure its performance on the test set: End of explanation """ predictions = net_conv.apply_multiclass(Xtest) _=figure(figsize=(10,12)) # plot some images, with the predicted label as the title of each image # this code is borrowed from the KNN notebook by Chiyuan Zhang and Sören Sonnenburg for i in range(100): ax=subplot(10,10,i+1) title(int(predictions[i])) ax.imshow(Xtest[:,i].reshape((16,16)), interpolation='nearest', cmap = cm.Greys_r) ax.set_xticks([]) ax.set_yticks([]) """ Explanation: We can also look at some of the images and the network's response to each of them: End of explanation """
Source: huajianmao/learning, coursera/deep-learning/4.convolutional-neural-networks/week2/.ipynb_checkpoints/pa.1.Keras - Tutorial - Happy House v1-checkpoint.ipynb (license: mit)
import numpy as np from keras import layers from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D from keras.models import Model from keras.preprocessing import image from keras.utils import layer_utils from keras.utils.data_utils import get_file from keras.applications.imagenet_utils import preprocess_input import pydot from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.utils import plot_model from kt_utils import * import keras.backend as K K.set_image_data_format('channels_last') import matplotlib.pyplot as plt from matplotlib.pyplot import imshow %matplotlib inline """ Explanation: Keras tutorial - the Happy House Welcome to the first assignment of week 2. In this assignment, you will: 1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm. Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models. In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House! 
End of explanation """ X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # Normalize image vectors X_train = X_train_orig/255. X_test = X_test_orig/255. # Reshape Y_train = Y_train_orig.T Y_test = Y_test_orig.T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) """ Explanation: Note: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: X = Input(...) or X = ZeroPadding2D(...). 1 - The Happy House For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness. <img src="images/happy-house.jpg" style="width:350px;height:270px;"> <caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : the Happy House</center></caption> As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy. You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled. <img src="images/house-members.png" style="width:550px;height:250px;"> Run the following code to normalize the dataset and learn about its shapes. End of explanation """ # GRADED FUNCTION: HappyModel def HappyModel(input_shape): """ Implementation of the HappyModel.
Arguments: input_shape -- shape of the images of the dataset Returns: model -- a Model() instance in Keras """ ### START CODE HERE ### ### END CODE HERE ### return model """ Explanation: Details of the "Happy" dataset: - Images are of shape (64,64,3) - Training: 600 pictures - Test: 150 pictures It is now time to solve the "Happy" Challenge. 2 - Building a model in Keras Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results. Here is an example of a model in Keras: ```python def model(input_shape): # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) # Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) # CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) # MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) # FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) # Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model ``` Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as X, Z1, A1, Z2, A2, etc. for the computations for the different layers, in Keras code each line above just reassigns X to a new value using X = .... In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable X.
The only exception was X_input, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (model = Model(inputs = X_input, ...) above). Exercise: Implement a HappyModel(). This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as AveragePooling2D(), GlobalMaxPooling2D(), Dropout(). Note: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to. End of explanation """ ### START CODE HERE ### (1 line) ### END CODE HERE ### """ Explanation: You have now built a function to describe your model. To train and test this model, there are four steps in Keras: 1. Create the model by calling the function above 2. Compile the model by calling model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"]) 3. Train the model on train data by calling model.fit(x = ..., y = ..., epochs = ..., batch_size = ...) 4. Test the model on test data by calling model.evaluate(x = ..., y = ...) If you want to know more about model.compile(), model.fit(), model.evaluate() and their arguments, refer to the official Keras documentation. Exercise: Implement step 1, i.e. create the model. End of explanation """ ### START CODE HERE ### (1 line) ### END CODE HERE ### """ Explanation: Exercise: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of compile() wisely. Hint: the Happy Challenge is a binary classification problem. 
End of explanation """ ### START CODE HERE ### (1 line) ### END CODE HERE ### """ Explanation: Exercise: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size. End of explanation """ ### START CODE HERE ### (1 line) ### END CODE HERE ### print() print ("Loss = " + str(preds[0])) print ("Test Accuracy = " + str(preds[1])) """ Explanation: Note that if you run fit() again, the model will continue to train with the parameters it has already learnt instead of reinitializing them. Exercise: Implement step 4, i.e. test/evaluate the model. End of explanation """ ### START CODE HERE ### ### END CODE HERE ### img = image.load_img(img_path, target_size=(64, 64)) imshow(img) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) print(happyModel.predict(x)) """ Explanation: If your happyModel() function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets. To pass this assignment, you have to get at least 75% accuracy. To give you a point of comparison, our model gets around 95% test accuracy in 40 epochs (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare. If you have not yet achieved 75% accuracy, here're some things you can play around with to try to achieve it: Try using blocks of CONV->BATCHNORM->RELU such as: python X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer. You can use MAXPOOL after such blocks. 
It will help you lower the dimension in height and width. Change your optimizer. We find Adam works well. If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise) Run on more epochs, until you see the train accuracy plateauing. Even if you have achieved 75% accuracy, please feel free to keep playing with your model to try to get even better results. Note: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here. 3 - Conclusion Congratulations, you have solved the Happy House challenge! Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here. <font color='blue'> What we would like you to remember from this assignment: - Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras? - Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test. 4 - Test with your own image (Optional) Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)! The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). 
This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try! End of explanation """ happyModel.summary() plot_model(happyModel, to_file='HappyModel.png') SVG(model_to_dot(happyModel).create(prog='dot', format='svg')) """ Explanation: 5 - Other useful functions in Keras (Optional) Two other basic features of Keras that you'll find useful are: - model.summary(): prints the details of your layers in a table with the sizes of its inputs/outputs - plot_model(): plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook. Run the following code. End of explanation """
chengjun/iching
iching_intro.ipynb
mit
from iching import iching
iching.ichingDate(1985052620150704)
"""
Explanation: iching is a package developed by Cheng-Jun Wang. It employs the method of shicao (yarrow-stalk) prediction to reproduce the prediction of I Ching--the Book of Changes. The I Ching ([î tɕíŋ]; Chinese: 易經; pinyin: Yìjīng), also known as the Classic of Changes or Book of Changes in English, is an ancient divination text and the oldest of the Chinese classics. Fifty yarrow (Achillea millefolium subsp. m. var. millefolium) stalks are used for I Ching divination. The Zhou yi provided a guide to cleromancy that used the stalks of the yarrow plant, but it is not known how the yarrow stalks became numbers, or how specific lines were chosen from the line readings.[22] In the hexagrams, broken lines were used as shorthand for the numbers 6 (六) and 8 (八), and solid lines were shorthand for values of 7 (七) and 9 (九). The Great Commentary contains a late classic description of a process where various numerological operations are performed on a bundle of 50 stalks, leaving remainders of 6 to 9.
大衍之数五十,其用四十有九。分而为二以象两,挂一以象三,揲之以四以象四时,归奇于扐以象闰。五岁再闰,故再扐而后挂。天一,地二;天三,地四;天五,地六;天七,地八;天九,地十。天数五,地数五。五位相得而各有合,天数二十有五,地数三十,凡天地之数五十有五,此所以成变化而行鬼神也。乾之策二百一十有六,坤之策百四十有四,凡三百六十,当期之日。二篇之策,万有一千五百二十,当万物之数也。是故四营而成《易》,十有八变而成卦,八卦而小成。引而伸之,触类而长之,天下之能事毕矣。显道神德行,是故可与酬酢,可与祐神矣。子曰:“知变化之道者,其知神之所为乎。”
Install
pip install iching
Import and Use
End of explanation
"""
iching.ichingDate(1985052620150704)
iching.getPredict()
"""
Explanation: Predicting Six Yao
End of explanation
"""
iching.ichingDate(1985052620150704)
fixPred, changePred = iching.getPredict()
print iching.ichingName(fixPred, changePred )
iching.ichingDate(1985052620150704)
print iching.ichingText(fixPred, iching)
iching.ichingDate(1985052620150704)
if changePred:
    print iching.ichingText(fixPred, iching)
else:
    print None
"""
Explanation: Get the iching name
End of explanation
"""
data = 50 - 1
sky, earth, firstChange, data = iching.getChange(data)
print sky, '\n', earth, '\n', firstChange, '\n', data
"""
Explanation: First Change
End of explanation
"""
sky, earth, secondChange, data = iching.getChange(data)
print sky, '\n', earth, '\n', secondChange, '\n', data
"""
Explanation: Second Change
End of explanation
"""
sky, earth, thirdChange, data = iching.getChange(data)
print sky, '\n', earth, '\n', thirdChange, '\n', data
"""
Explanation: Third Change
End of explanation
"""
%matplotlib inline
iching.plotTransition(100, w = 50)
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 10),facecolor='white')
plt.subplot(2, 2, 1)
iching.plotTransition(1000, w = 50)
plt.subplot(2, 2, 2)
iching.plotTransition(1000, w = 50)
plt.subplot(2, 2, 3)
iching.plotTransition(1000, w = 50)
plt.subplot(2, 2, 4)
iching.plotTransition(1000, w = 50)
"""
Explanation: Plot transitions
End of explanation
"""
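The stalk count described in the quoted Great Commentary passage — three "changes" per line, each change dividing the bundle, setting one stalk aside, and casting out remainders of four — can be sketched in a few lines of standalone Python 3. This is an illustrative reading of the classical procedure (with a uniform random split as an assumption), not the iching package's internal implementation:

```python
import random

def one_change(stalks, rng):
    # split the bundle into two heaps, set one stalk aside,
    # count each heap off in fours and remove the remainders
    left = rng.randint(1, stalks - 2)
    right = stalks - left - 1          # one stalk set aside from the right heap
    r_left = left % 4 or 4             # counting off evenly leaves a "remainder" of four
    r_right = right % 4 or 4
    return stalks - 1 - r_left - r_right

def cast_line(rng):
    stalks = 49                        # fifty stalks, one set aside unused
    for _ in range(3):                 # three changes produce one line
        stalks = one_change(stalks, rng)
    return stalks // 4                 # always 6, 7, 8 or 9

rng = random.Random(0)
lines = [cast_line(rng) for _ in range(6)]   # six lines form a hexagram
```

As the passage above notes, 6 and 8 map to broken lines, while 7 and 9 map to solid lines.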
opalytics/opalytics-ticdat
examples/expert_section/notebooks/pandas_and_ticdat.ipynb
bsd-2-clause
import ticdat.testing.testutils as tdu
from ticdat import TicDatFactory
tdf = TicDatFactory(**tdu.netflowSchema())
dat = tdf.copy_tic_dat(tdu.netflowData())
"""
Explanation: pandas and ticdat
pandas is arguably the most successful data library in history, not just for Python but across all languages. That said, in the context of prescriptive analytics in general, and Mixed-Integer-Programming in particular, there are a few reasons to use pandas.DataFrame with caution. For one, DataFrame might present a level of complexity that is seen as intimidating to novice Python developers. For another, there are common MIP idioms (such as empty slicing returning an empty set) that aren't supported by DataFrame. In fact, such "iterate over indices and capture slices of other tables" coding strategies are not likely to be thought of as pandonic (i.e. idiomatic pandas code). Last (and also probably not least) a DataFrame might not be the most computationally efficient data structure if your primary method of accessing data is via primary-key based slices. In sum, as Jeff says, "iterating over values is not idiomatic at all and completely non performant". This is not a ringing endorsement for a data science community that is used to expressing constraints precisely by iterating over values - either as part of an inner summation or in order to generate a collection of similar constraints.
Regardless, ticdat wants to stand in solidarity with pandas. We all share the same goal - to unify data science with a common high level programming language. To that end, ticdat supports pandas in two ways - either by creating TicDat objects from properly formatted DataFrame objects, or by creating a DataFrame based copy of a TicDat object. This notebook focuses on the latter technique. Here, we dive into the nuances of the copy_to_pandas routine. To begin, let's use the ticdat testing section to get some data tables.
(As always, the testing section is not appropriate for production code, but is fine for demonstration code like this.)
End of explanation
"""
dat.cost
df_cost = tdf.copy_to_pandas(dat).cost
df_cost
"""
Explanation: Now let's look at a couple of different types of tables, and see how copy_to_pandas handles different types of data. Here is the "cost" table, both in dict-of-dicts format and as a DataFrame.
End of explanation
"""
df_cost.index
"""
Explanation: It's important to emphasize what's happening. The only column in the DataFrame is "cost". This is because the cost table only has a single data field (also named "cost"). The "commodity", "source", "destination" columns are all primary key columns. To avoid redundant data in the DataFrame, copy_to_pandas includes this information only as part of the index of the DataFrame.
End of explanation
"""
('Pens', 'Denver', 'Seattle') in df_cost
('Pens', 'Denver', 'Seattle') in df_cost.index
('Pens', 'Denver', 'Seattle') in dat.cost
"""
Explanation: This is also a good time to point out that asking "is this row present" is completely different for the "dict-of-dicts" representation than for the DataFrame representation. Choices are good - use whatever version will make your code more readable. Personally, I'd be a little worried that someone might forget to include .index when looking for row presence in pandas, but I also think an experienced pandas developer would be pretty unlikely to make that mistake.
End of explanation
"""
dat.nodes
tdf.copy_to_pandas(dat).nodes
"""
Explanation: Now, let's look at the nodes table.
End of explanation
"""
tdf.copy_to_pandas(dat, drop_pk_columns=True).nodes
"""
Explanation: Wait, why is the "name" information duplicated here? Shouldn't the primary key just be part of the index? The answer, by default, is it depends. If the table has no data fields, then by default it will duplicate the primary key data in both the index of the DataFrame and in the column(s) of the DataFrame.
In other words, by default, copy_to_pandas will duplicate data only to avoid having "no column" DataFrames. But, you can override this behavior however you want by specifying non-None booleans as the drop_pk_columns argument. End of explanation """ tdf.copy_to_pandas(dat, drop_pk_columns=False).cost """ Explanation: This is a DataFrame with an .index but no column. Seems like a strange thing to have, but you can create it just fine if you want. Here I go back and create a version of the "cost" table that doesn't drop the primary key fields from the columns of the DataFrame. End of explanation """
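The membership idioms compared above can be boiled down to a toy sketch, with plain Python containers standing in for the two representations (this is not ticdat's or pandas' actual machinery — just the shape of the "is this row present" question):

```python
# dict-of-dicts representation: rows keyed by primary-key tuples
cost = {
    ('Pens', 'Denver', 'Seattle'): {'cost': 30.0},
    ('Pens', 'Denver', 'Boston'): {'cost': 40.0},
}

# "is this row present?" is a plain key lookup
assert ('Pens', 'Denver', 'Seattle') in cost

# a DataFrame-style object answers bare `in` against its columns, so the
# row-presence question has to go through the index instead -- forgetting
# `.index` silently asks a different question
columns = ['cost']                 # stands in for df_cost's columns
index = list(cost)                 # stands in for df_cost.index
assert ('Pens', 'Denver', 'Seattle') not in columns   # what `in df_cost` checks
assert ('Pens', 'Denver', 'Seattle') in index         # what `in df_cost.index` checks
```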
statsmodels/statsmodels.github.io
v0.13.1/examples/notebooks/generated/statespace_chandrasekhar.ipynb
bsd-3-clause
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
from pandas_datareader.data import DataReader
"""
Explanation: State space models - Chandrasekhar recursions
End of explanation
"""
cpi_apparel = DataReader('CPIAPPNS', 'fred', start='1986')
cpi_apparel.index = pd.DatetimeIndex(cpi_apparel.index, freq='MS')
inf_apparel = np.log(cpi_apparel).diff().iloc[1:] * 1200
inf_apparel.plot(figsize=(15, 5));
"""
Explanation: Although most operations related to state space models rely on the Kalman filtering recursions, in some special cases one can use a separate method often called "Chandrasekhar recursions". These provide an alternative way to iteratively compute the conditional moments of the state vector, and in some cases they can be substantially less computationally intensive than the Kalman filter recursions. For complete details, see the paper "Using the 'Chandrasekhar Recursions' for Likelihood Evaluation of DSGE Models" (Herbst, 2015). Here we just sketch the basic idea.
State space models and the Kalman filter
Recall that a time-invariant state space model can be written:
$$
\begin{aligned}
y_t &= Z \alpha_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, H) \\
\alpha_{t+1} & = T \alpha_t + R \eta_t, \qquad \eta_t \sim N(0, Q) \\
\alpha_1 & \sim N(a_1, P_1)
\end{aligned}
$$
where $y_t$ is a $p \times 1$ vector and $\alpha_t$ is an $m \times 1$ vector.
Each iteration of the Kalman filter, say at time $t$, can be split into three parts:
Initialization: specification of $a_t$ and $P_t$ that define the conditional state distribution, $\alpha_t \mid y^{t-1} \sim N(a_t, P_t)$.
Updating: computation of $a_{t|t}$ and $P_{t|t}$ that define the conditional state distribution, $\alpha_t \mid y^{t} \sim N(a_{t|t}, P_{t|t})$.
Prediction: computation of $a_{t+1}$ and $P_{t+1}$ that define the conditional state distribution, $\alpha_{t+1} \mid y^{t} \sim N(a_{t+1}, P_{t+1})$.
Of course after the first iteration, the prediction part supplies the values required for initialization of the next step.
Focusing on the prediction step, the Kalman filter recursions yield:
$$
\begin{aligned}
a_{t+1} & = T a_{t|t} \\
P_{t+1} & = T P_{t|t} T' + R Q R'
\end{aligned}
$$
where the matrices $T$ and $P_{t|t}$ are each $m \times m$, where $m$ is the size of the state vector $\alpha$. In some cases, the state vector can become extremely large, which can imply that the matrix multiplications required to produce $P_{t+1}$ can become computationally intensive.
Example: seasonal autoregression
As an example, notice that an AR(r) model (we use $r$ here since we already used $p$ as the dimension of the observation vector) can be put into state space form as:
$$
\begin{aligned}
y_t &= \alpha_t \\
\alpha_{t+1} & = T \alpha_t + R \eta_t, \qquad \eta_t \sim N(0, Q)
\end{aligned}
$$
where:
$$
\begin{aligned}
T = \begin{bmatrix}
\phi_1 & \phi_2 & \dots & \phi_r \\
1 & 0 & & 0 \\
\vdots & \ddots & & \vdots \\
0 & & 1 & 0
\end{bmatrix} \qquad
R = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad
Q = \begin{bmatrix} \sigma^2 \end{bmatrix}
\end{aligned}
$$
In an AR model with daily data that exhibits annual seasonality, we might want to fit a model that incorporates lags up to $r=365$, in which case the state vector would be at least $m = 365$. The matrices $T$ and $P_{t|t}$ then each have $365^2 = 133225$ elements, and so most of the time spent computing the likelihood function (via the Kalman filter) can become dominated by the matrix multiplications in the prediction step.
State space models and the Chandrasekhar recursions
The Chandrasekhar recursions replace the equation $P_{t+1} = T P_{t|t} T' + R Q R'$ with a different recursion:
$$
P_{t+1} = P_t + W_t M_t W_t'
$$
where $W_t$ is a matrix with dimension $m \times p$ and $M_t$ is a matrix with dimension $p \times p$, where $p$ is the dimension of the observed vector $y_t$.
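A rough operation count makes the contrast concrete. The constants below are back-of-the-envelope estimates (a dense $m \times m$ product is taken as roughly $m^3$ multiplications), not exact flop counts:

```python
m, p = 365, 1   # state and observation dimensions from the AR(365) example above

# Kalman prediction: P_{t+1} = T P_{t|t} T' + R Q R' -- two dense m x m products
kalman_mults = 2 * m ** 3

# Chandrasekhar update: P_{t+1} = P_t + W_t M_t W_t' with W_t (m x p), M_t (p x p)
# W_t M_t costs ~m*p^2 multiplications, (W_t M_t) W_t' costs ~m^2*p
chandrasekhar_mults = m * p ** 2 + m ** 2 * p

ratio = kalman_mults / chandrasekhar_mults
```

For $m = 365$ and $p = 1$ this ratio is on the order of several hundred, which is why the rank-$p$ update form is attractive when the state vector is large and the observation vector is small.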
These matrices themselves have recursive formulations. For more general details and for the formulas for computing $W_t$ and $M_t$, see Herbst (2015).
Important note: unlike the Kalman filter, the Chandrasekhar recursions cannot be used for every state space model. In particular, the latter has the following restrictions (that are not required for the use of the former):
The model must be time-invariant, except that time-varying intercepts are permitted.
Stationary initialization of the state vector must be used (this rules out all models with non-stationary components)
Missing data is not permitted
To understand why this formula can imply more efficient computations, consider again the SARIMAX case, above. In this case, $p = 1$, so that $M_t$ is a scalar and we can rewrite the Chandrasekhar recursion as:
$$
P_{t+1} = P_t + M_t \times W_t W_t'
$$
The matrices being multiplied, $W_t$, are then of dimension $m \times 1$, and in the case $r=365$, they each have only $365$ elements, rather than $365^2$ elements. This implies substantially fewer computations are required to complete the prediction step.
Convergence
A factor that complicates a straightforward discussion of performance implications is the well-known fact that in time-invariant models, the predicted state covariance matrix will converge to a constant matrix. This implies that there exists an $S$ such that, for every $t > S$, $P_t = P_{t+1}$. Once convergence has been achieved, we can eliminate the equation for $P_{t+1}$ from the prediction step altogether.
In simple time series models, like AR(r) models, convergence is achieved fairly quickly, and this can limit the performance benefit of using the Chandrasekhar recursions. Herbst (2015) focuses instead on DSGE (Dynamic Stochastic General Equilibrium) models, which often have a large state vector and often require a large number of periods to achieve convergence. In these cases, the performance gains can be quite substantial.
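This convergence is easy to see in a scalar toy model (made-up parameters, unrelated to the statsmodels implementation): iterating the prediction-step Riccati recursion, the predicted variance stops changing after a modest number of steps.

```python
# Toy scalar model: y_t = a_t + e_t with Var(e) = H, a_{t+1} = T a_t + eta_t with Var(eta) = Q
T, H, Q = 0.9, 1.0, 1.0
P = Q / (1 - T ** 2)        # stationary initialization of the predicted variance

n_iter = 0
while True:
    n_iter += 1
    P_next = T ** 2 * P * H / (P + H) + Q    # scalar prediction-step (Riccati) recursion
    converged = abs(P_next - P) < 1e-12
    P = P_next
    if converged or n_iter >= 10000:
        break
```

In this toy the fixed point is reached after only a few dozen iterations; from that point on the $P_{t+1}$ update could be skipped entirely, which is exactly why early convergence limits the Chandrasekhar speedup.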
Practical example
As a practical example, we will consider monthly data that has a clear seasonal component. In this case, we look at the inflation rate of apparel, as measured by the consumer price index. A graph of the data indicates strong seasonality.
End of explanation
"""
# Model that will apply Kalman filter recursions
mod_kf = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12), tolerance=0)
print(mod_kf.k_states)

# Model that will apply Chandrasekhar recursions
mod_ch = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12), tolerance=0)
mod_ch.ssm.filter_chandrasekhar = True
"""
Explanation: We will construct two model instances. The first will be set to use the Kalman filter recursions, while the second will be set to use the Chandrasekhar recursions. This setting is controlled by the ssm.filter_chandrasekhar property, as shown below.
The model we have in mind is a seasonal autoregression, where we include the first 6 months as lags as well as the given month in each of the previous 15 years as lags. This implies that the state vector has dimension $m = 186$, which is large enough that we might expect to see some substantial performance gains by using the Chandrasekhar recursions.
Remark: We set tolerance=0 in each model - this has the effect of preventing the filter from ever recognizing that the prediction covariance matrix has converged. This is not recommended in practice. We do this here to highlight the superior performance of the Chandrasekhar recursions when they are used in every period instead of the typical Kalman filter recursions.
Later, we will show the performance in a more realistic setting in which we do allow for convergence.
End of explanation """ # Model that will apply Kalman filter recursions mod_kf = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12)) print(mod_kf.k_states) # Model that will apply Chandrasekhar recursions mod_ch = sm.tsa.SARIMAX(inf_apparel, order=(6, 0, 0), seasonal_order=(15, 0, 0, 12)) mod_ch.ssm.filter_chandrasekhar = True """ Explanation: We time computation of the log-likelihood function, using the following code: python %timeit mod_kf.loglike(mod_kf.start_params) %timeit mod_ch.loglike(mod_ch.start_params) This results in: 171 ms ± 19.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 85 ms ± 4.97 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) The implication is that in this experiment, the Chandrasekhar recursions improved performance by about a factor of 2. As we mentioned above, in the previous experiment we disabled convergence of the predicted covariance matrices, so the results there are an upper bound. Now we allow for convergence, as usual, by removing the tolerance=0 argument: End of explanation """ res_kf = mod_kf.filter(mod_kf.start_params) print('Convergence at t=%d, of T=%d total observations' % (res_kf.filter_results.period_converged, res_kf.nobs)) """ Explanation: Again, we time computation of the log-likelihood function, using the following code: python %timeit mod_kf.loglike(mod_kf.start_params) %timeit mod_ch.loglike(mod_ch.start_params) This results in: 114 ms ± 7.64 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) 70.5 ms ± 2.43 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) The Chandrasekhar recursions still improve performance, but now only by about 33%. The reason for this is that after convergence, we no longer need to compute the predicted covariance matrices, so that for those post-convergence periods, there will be no difference in computation time between the two approaches. Below we check the period in which convergence was achieved: End of explanation """
zrhans/python
exemplos/manipulacao-estatistica-de-dados-meteorologicos.ipynb
gpl-2.0
import pandas as pd
from pandas import DataFrame
import datetime
import pandas.io.data  ## Used for access to the Yahoo Finance API and data import
import matplotlib.pyplot as plt

csna3 = pd.io.data.get_data_yahoo('CSNA3.SA',
                                  start = datetime.datetime(2000,10,1),
                                  end = datetime.datetime(2015,7,10))
# The .head() method shows the first rows of the object
csna3.head()
"""
Explanation: Statistical Manipulation of Meteorological Data
Manipulate 2d (two-dimensional) data structures using the Python structures of the Numpy and PANDAS* packages. Mine, manipulate, and apply statistical treatment to meteorological time series.
* Data-analysis package - http://pandas.pydata.org/
PART 01
In Part 01 we will use a time series of daily quotes of a financial asset as the data source to understand the concepts (terms) and some of the functionality of the data-manipulation structures (objects).
The main PANDAS objects are:
- Series: behaves like a 1d numpy array.
 - Accepts any data type
 - Vectorized operations
 - Syntax: s = Series; Ex: s1 = [0,1,2,3,4,5]
- DataFrame: behaves like a two-dimensional matrix (2d numpy array like)
 - Accepts any data type.
 - Has an index by default (1 column), which can be changed and/or reorganized
 - Columns have names
 - Columns can be moved and/or reorganized
 - Columns can be created from mathematical operations (*operations on rows*)
 - References to *prior rows* via the .diff() method
 - Relevant library functions:
  - Moving average (*rolling means in PANDAS*)
  - Standard deviation, correlation, covariance, etc
  - Plots (uses the matplotlib library - http://matplotlib.org/)
 - Vectorized operations.
 - Syntax: df = DataFrame(data, index=index, columns=list); Ex: df = DataFrame(np.random.randn(8, 3), index=index, columns=['A', 'B', 'C'])
The Series and DataFrame objects can also behave like dictionaries (dictionary like). Practically all objects have the slicing method for selecting data.
Getting (mining) and preparing the data
For practical purposes, we will use a historical series, mining some publicly available quote data for the CSNA asset from the web via Yahoo Finance, and store a subset in a .csv file.
End of explanation
"""
# Save the DataFrame data to .csv format
csna3.to_csv('csna3_ohlc.csv')
"""
Explanation: Result of the call to the .head() method via the csna3.head() command
<pre>
               Open      High       Low     Close    Volume  Adj Close
Date
2000-10-02  19.50000  19.50000  19.00001  19.28333   7034400  481.03537
2000-10-03  19.66332  19.83334  19.17334  19.17334  16718400  478.29154
2000-10-04  19.49333  19.49333  19.10666  19.23334   7365600  479.78828
2000-10-05  19.23334  19.23334  19.23334  19.23334         0  479.78828
2000-10-06  19.36668  19.36668  19.13333  19.21668   3924000  479.37280
</pre>
End of explanation
"""
# Reading the .csv file [pd.read_csv('arquivo.csv')]
# Note the two arguments besides the filename
df = pd.read_csv('csna3_ohlc.csv', index_col='Date', parse_dates=True)
df.head()
"""
Explanation: From this point on, we no longer need to import the data from Yahoo Finance. We will load the data directly from the csv file we created.
End of explanation
"""
# Selecting a single column (via string '')
df2 = df['Open']
df2.head()

# Selecting multiple columns (via a list [])
df3 = df[['Open', 'Close']]
df3.head()

# Removing a column
df3_tmp = df3.copy()  # Making a backup copy
del df3['Close']
df3.head()

df3 = df3_tmp.copy()  # Restoring df3 to its previous state
del df3_tmp  # Removing the backup copy
df3.head()

df3.head()

# Renaming columns
df3.rename(columns={'Close': 'CLOSE'}, inplace=True)
df3.head()

# Filtering data via a logical condition `df3[(logical-condition)]`
df4 = df3[(df3['CLOSE'] > 1)]  # Selects rows where CLOSE values are > 1
df4.head()

# Creating columns from mathematical operations on columns
df['H-L'] = df['High'] - df.Low
df.head()
"""
Explanation: Operations with columns
End of explanation
"""
# Moving averages (in pandas, `pd.rolling_mean()`)
# Creating a column with the moving average over a 100-period window
df['100MA'] = pd.rolling_mean(df['Close'], 100)
print df[200:205]  # The first 100 values will be NaN

# References to *prior rows* via the .diff() method
# Creating a new column with values computed as
# a column ['Close'] minus its previous value (row by row)
df['Diferenca'] = df['Close'].diff()
df.head()
"""
Explanation: Note that we used two (both accepted) ways of referring to the High and Low columns in the DataFrame: df['High'] and df.Low
End of explanation
"""
figure(figsize=(16,8))
# Plot of the entire DataFrame
df.plot()
savefig('img-df-inteiro.png')
"""
Explanation: Plots
End of explanation
"""
# Plot of a specific column
# Plot of the 100MA moving average
figure(figsize=(16,8))
df['100MA'].plot()
df['hpm'] = 7.02
df['hpm'].plot()
savefig('img-df-hpm-100MA.png')

# Plot of multiple columns
# Plot of the moving average 100MA and Close
figure(figsize=(16,8))
df['100MA'].plot(legend=True)
df['Close'].plot(legend=True)
#df[['Close','100MA']].plot()
savefig('img-df-Close+100MA.png')

# Plot of multiple columns
# Plot of the moving average
100MA and Close
figure(figsize=(16,8))
df[['Open','High','Low','Close','100MA']].plot()
savefig('img-df-ohlc+100MA.png')
"""
Explanation: Note that plotting a chart of the entire DataFrame is not very useful, since some information gets masked because all series share the same image (y) axis.
End of explanation
"""
'''
A 3D chart can be used to show the asset's closing price (*Close*) as a function of trading volume, and the covariance between these data.
Interesting if we can use the mouse to change the view. In the other cases a two-dimensional scatter plot is better.
'''
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure(figsize=(16,10))
graf3d = fig.gca(projection='3d')
#============================================================
# Recreating the DataFrame with an integer index
#
df = pd.read_csv('csna3_ohlc.csv', parse_dates=True)
df['H-L'] = df.High - df.Low
df['100MA'] = pd.rolling_mean(df['Close'], 100)
df['Diferenca'] = df['Close'].diff()
#============================================================
# df.index is the data index
graf3d.scatter(df.index, df['H-L'], df['Close'], alpha=0.5)

# Naming the 3 axes
graf3d.set_xlabel('Index')
graf3d.set_ylabel('H-L')
graf3d.set_zlabel('Close')

fig.show()
savefig('img-df-3d-HLxClose.png')

# The behavior of H-L responds to variation in Volume
fig = plt.figure(figsize=(16,10))
graf3d = fig.gca(projection='3d')

graf3d.scatter(df.index, df['H-L'], df['Volume'], alpha=0.5)

# Naming the 3 axes
graf3d.set_xlabel('Index')
graf3d.set_ylabel('H-L')
graf3d.set_zlabel('Volume')

fig.show()
savefig('img-df-3d-HLxVol.png')
"""
Explanation: 3D plots
To draw 3D plots, some modules from the mpl_toolkits package must be imported. The main module to be imported is mplot3d, via the command from mpl_toolkits.mplot3d import Axes3D.
End of explanation
"""
"""
Adding a column to the DataFrame with the computed Standard Deviation (*STD*) value
.subplot(rows, cols, plot-number)
"""
#============================================================
# Recreating the DataFrame with the Date as the index
#
df = pd.read_csv('csna3_ohlc.csv', index_col='Date', parse_dates=True)
df['H-L'] = df.High - df.Low
df['100MA'] = pd.rolling_mean(df['Close'], 100, min_periods=1)
df['Diferenca'] = df['Close'].diff()
#============================================================
df['STD'] = pd.rolling_std(df['Close'], 25, min_periods=1)

fig = plt.figure(figsize=(16,10))
# Creating axes for the multiplot
# Upper (Row 1)
ax1 = plt.subplot(2,1,1)
# ax1.plot(df['Close'])  # using matplotlib directly
df['Close'].plot(legend=True)  # using pandas

# Lower (Row 2)
# to share the axis, use .subplot(2,1,2, sharex = ax1)
ax2 = plt.subplot(2,1,2)
df['STD'].plot(legend=True)

fig.show()
savefig('img-df-mpl-close-std.png')
"""
Explanation: Statistical Information
The statistical description of a DataFrame (of series in general) is obtained via the common .describe method. The example below estimates the most common (main - Normal distribution and scale) statistical parameters of a data series.
<pre>
In [76]: series.describe()
Out[76]:
count    500.000000
mean      -0.039663
std        1.069371
min       -3.463789
25%       -0.731101
50%       -0.058918
75%        0.672758
max        3.120271
dtype: float64
</pre>
A column of a DataFrame has the .describe() method
End of explanation
"""
df.describe()

'''
Correlation = the relationship between the variation of two quantities: how influential the variation of one variable is as a cause of the variation of the other.
Note: correlation is not causation.
Covariance = a measure of the strength of the correlation: how strong the influence of one variable on the other is.
'''
# Showing the correlation
df.corr()

# Showing the covariance
df.cov()

# If we want only the comparative statistic between two variables
df[['Volume', 'H-L']].corr()
"""
Explanation: Exercise 1 - Build a table of the correlations between several assets. Use the public Yahoo Finance data to obtain the historical series, and describe how you observe the correlations between assets of a few market segments.
Exercise 2 - Apply the concepts (terms) and the functionality of the structures (objects) you learned in this lesson to a set of meteorological historical series.
Data source to use: http://200.132.24.235/csda/topicos-met.csv.zip
Map Functions
Let's redo some operations using map functions instead of the pandas methods. It is important to know how to do some things the "hard" way, or as we say "à unha" (by hand), since one does not always want to use black boxes, even though everything in Python is open source.
The example below illustrates the possibility of creating your own function and applying it to each value (row by row) of a DataFrame column. This is interesting because your custom function can take as many parameters as you like.
End of explanation
"""
#============================================================
# Creating the DataFrame with the Date as the index
#
df = pd.read_csv('csna3_ohlc.csv', index_col='Date', parse_dates=True)
#============================================================

import random

def function(data):
    '''
    Takes the data parameter as an argument and returns it
    multiplied by a random number between 0 and 5
    '''
    x = random.randrange(0,5)
    return data*x

df['Multiple'] = map(function, df['Close'])
df.head()
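In the spirit of the "by hand" (à unha) theme above, two of the pandas operations used in this notebook — pd.rolling_mean and DataFrame.corr — can be sketched in pure Python. These helpers are illustrative stand-ins written for this note, not the pandas implementations:

```python
from math import sqrt

def rolling_mean(values, window, min_periods=None):
    # trailing-window mean, like the old pd.rolling_mean: the result is
    # None until at least `min_periods` observations are available
    if min_periods is None:
        min_periods = window
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i + 1 - window):i + 1]
        out.append(sum(chunk) / len(chunk) if len(chunk) >= min_periods else None)
    return out

def pearson_corr(xs, ys):
    # the quantity df.corr() reports for a pair of columns, computed by hand
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

For perfectly linearly related series the hand-rolled correlation is 1 (or -1 for an inverse relationship), exactly as df.corr() would report.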
mne-tools/mne-tools.github.io
0.13/_downloads/plot_decoding_csp_space.ipynb
bsd-3-clause
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Romain Trachel <romain.trachel@inria.fr> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path() """ Explanation: ==================================================================== Decoding in sensor space data using the Common Spatial Pattern (CSP) ==================================================================== Decoding applied to MEG data in sensor space decomposed using CSP. Here the classifier is applied to features extracted on CSP filtered signals. See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1] [1] Zoltan J. Koles. The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG. Electroencephalography and Clinical Neurophysiology, 79(6):440--447, December 1991. End of explanation """ raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' tmin, tmax = -0.2, 0.5 event_id = dict(aud_l=1, vis_l=3) # Setup for reading the raw data raw = io.read_raw_fif(raw_fname, preload=True) raw.filter(2, None, method='iir') # replace baselining with high-pass events = mne.read_events(event_fname) raw.info['bads'] = ['MEG 2443'] # set bad channels picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False, exclude='bads') # Read epochs epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks, baseline=None, preload=True) labels = epochs.events[:, -1] evoked = epochs.average() """ Explanation: Set parameters and read data End of explanation """ from sklearn.svm import SVC # noqa from sklearn.cross_validation import ShuffleSplit # noqa from mne.decoding import CSP # noqa n_components = 3 # pick some components svc = SVC(C=1, kernel='linear') csp = CSP(n_components=n_components) # Define a 
monte-carlo cross-validation generator (reduce variance):
cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42)
scores = []
epochs_data = epochs.get_data()

for train_idx, test_idx in cv:
    y_train, y_test = labels[train_idx], labels[test_idx]

    X_train = csp.fit_transform(epochs_data[train_idx], y_train)
    X_test = csp.transform(epochs_data[test_idx])

    # fit classifier
    svc.fit(X_train, y_train)

    scores.append(svc.score(X_test, y_test))

# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores), class_balance))

# Or use much more convenient scikit-learn cross_val_score function using
# a Pipeline
from sklearn.pipeline import Pipeline  # noqa
from sklearn.cross_validation import cross_val_score  # noqa
cv = ShuffleSplit(len(labels), 10, test_size=0.2, random_state=42)
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean())  # should match results above

# And using regularized csp with Ledoit-Wolf estimator
csp = CSP(n_components=n_components, reg='ledoit_wolf')
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean())  # should get better results than above

# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
data = csp.patterns_
fig, axes = plt.subplots(1, 4)
for idx in range(4):
    mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False)
fig.suptitle('CSP patterns')
fig.tight_layout()
fig.show()
"""
Explanation: Decoding in sensor space using a linear SVM
End of explanation
"""
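The Monte-Carlo cross-validation loop above relies on ShuffleSplit to produce repeated random train/test partitions. A minimal pure-Python stand-in (illustrative only — use scikit-learn's implementation in practice) makes the mechanics explicit:

```python
import random

def shuffle_split(n_samples, n_splits, test_size, random_state=42):
    # each iteration shuffles the sample indices and carves off a
    # disjoint (train, test) pair, like sklearn's ShuffleSplit
    rng = random.Random(random_state)
    n_test = int(round(n_samples * test_size))
    for _ in range(n_splits):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        yield idx[n_test:], idx[:n_test]
```

Because the samples are reshuffled on every iteration (rather than partitioned once as in k-fold), averaging the scores over many such splits reduces the variance of the accuracy estimate.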
zhouqifanbdh/liupengyuan.github.io
chapter1/homework/localization/201621198175.ipynb
mit
def product_sum(end):
    i = 1
    total_n = 1
    while i < end:
        i += 1
        total_n *= i
    return total_n

m = int(input("Please enter the 1st integer, press Enter to finish:"))
n = int(input("Please enter the 2nd integer, press Enter to finish:"))
k = int(input("Please enter the 3rd integer, press Enter to finish:"))
print("The final sum is:", product_sum(m)+product_sum(n)+product_sum(k))
"""
Explanation: Exercise 1: Following the complete code for computing $\sum_{i=1}^{m}i+\sum_{i=1}^{n}i+\sum_{i=1}^{k}i$, write a program that computes m!+n!+k!
End of explanation
"""
def partial_sum(n):
    i = 1
    total_n = 0
    while i <= n:
        total_n += (-1)**(i-1)*(1.0/(2*i-1))
        i += 1
    return total_n

print(4*partial_sum(1000))
print(4*partial_sum(100000))
"""
Explanation: Exercise 2: Write a function that returns the sum of the first n terms of 1-1/3+1/5-1/7... In the main program, set n=1000 and n=100000 respectively, and print 4 times the function's value.
End of explanation
"""
# Exercise 1
def star():
    name = input("please enter your name:")
    date = input("please enter your date of birth:")
    date = float(date)
    if 3.21 <= date <= 4.19:
        print(name, ", you are a very distinctive Aries!")
    elif 4.20 <= date <= 5.20:
        print(name, ", you are a very distinctive Taurus!")
    elif 5.21 <= date <= 6.21:
        print(name, ", you are a very distinctive Gemini!")
    elif 6.22 <= date <= 7.22:
        print(name, ", you are a very distinctive Cancer!")
    elif 7.23 <= date <= 8.22:
        print(name, ", you are a very distinctive Leo!")
    elif 8.23 <= date <= 9.22:
        print(name, ", you are a very distinctive Virgo!")
    elif 9.23 <= date <= 10.23:
        print(name, ", you are a very distinctive Libra!")
    elif 10.24 <= date <= 11.22:
        print(name, ", you are a very distinctive Scorpio!")
    elif 11.23 <= date <= 12.21:
        print(name, ", you are a very distinctive Sagittarius!")
    elif 1.20 <= date <= 2.18:
        print(name, ", you are a very distinctive Aquarius!")
    elif 2.19 <= date <= 3.20:
        print(name, ", you are a very distinctive Pisces!")
    else:
        print(name, ", you are a very distinctive Capricorn!")

star()

# Exercise 4
def conversion():
    word = input("please enter a word:")
    if word.endswith("x"):
        print(word, "es", sep="")
    elif word.endswith("sh"):
        print(word, "es", sep="")
    else:
        print(word, "s", sep="")

conversion()
"""
Explanation: Exercise 3: Rewrite Exercise 1 and Exercise 4 from task3 as functions, and call them.
End of explanation
"""
def interval_sum(m, n, k):
    i = m
    total = 0
    while i <= n:
        total += i
        i += k
    return total

m = int(input("please enter an integer:"))
n = int(input("please enter a bigger integer:"))
k = int(input("please enter the interval between two numbers:"))
print("The final sum is:", interval_sum(m, n, k))
"""
Explanation: Challenge exercise: Write a program that computes the sum from integer m to integer n with interval k. The summation must be implemented as a function; in the main program, the user enters m, n, and k and the function is called to verify correctness. End
of explanation """
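As a cross-check of the exercises above, the standard library already provides the pieces: `math.factorial` for the factorial sum, and the known limit π/4 for the alternating series. The function names below are just for this sketch:

```python
import math

def factorial_sum(m, n, k):
    # m! + n! + k! via the standard library, for cross-checking
    # the hand-rolled factorial loop above.
    return math.factorial(m) + math.factorial(n) + math.factorial(k)

def leibniz_partial(n):
    # Sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 + ...
    return sum((-1) ** (i - 1) / (2 * i - 1) for i in range(1, n + 1))

print(factorial_sum(3, 4, 5))  # 6 + 24 + 120 = 150
print(abs(4 * leibniz_partial(100000) - math.pi))  # small: the series converges to pi/4
```

The error of the alternating series is bounded by its first omitted term, so 4 times the 100000-term partial sum agrees with π to about 1e-5.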
patrickmineault/xcorr-notebooks
notebooks/Multi-armed bandit as a Markov decision process.ipynb
mit
import itertools
import numpy as np
from pprint import pprint

def sorted_values(dict_):
    return [dict_[x] for x in sorted(dict_)]

def solve_bmab_value_iteration(N_arms, M_trials, gamma=1, max_iter=10, conv_crit = .01):
    util = {}
    # Initialize every state to utility 0.
    state_ranges = [range(M_trials+1) for x in range(N_arms*2)]
    # The reward state
    state_ranges.append(range(2))
    for state in itertools.product(*state_ranges):
        # Some states are impossible to reach.
        if sum(state[:-1]) > M_trials:
            # A state with the total of alphas and betas greater than
            # the number of trials.
            continue
        if sum(state[:-1:2]) == 0 and state[-1] == 1:
            # A state with a reward but alphas all equal to 0.
            continue
        if sum(state[:-1:2]) == M_trials and state[-1] == 0:
            # A state with no reward but alphas adding up to M_trials.
            continue
        if sum(state[:-1]) == 1 and sum(state[:-1:2]) == 1 and state[-1] == 0:
            # A state with an initial reward according to alphas but not according to
            # the reward index
            continue
        util[state] = 0

    # Main loop.
    converged = False
    new_util = util.copy()
    opt_actions = {}
    for j in range(max_iter):
        # Line 5 of value iteration
        for state in util.keys():
            reward = state[-1]

            # Terminal state.
            if sum(state[:-1]) == M_trials:
                new_util[state] = reward
                continue

            values = np.zeros(N_arms)

            # Consider every action
            for i in range(N_arms):
                # Successes and failures for this state.
                alpha = state[i*2]
                beta = state[i*2+1]

                # Two possible outcomes: either that arm gets rewarded,
                # or not.
                # Transition to unrewarded state:
                state0 = list(state)
                state0[-1] = 0
                state0[2*i+1] += 1
                state0 = tuple(state0)

                # The probability that we'll transition to this unrewarded state.
                p_state0 = (beta + 1) / float(alpha + beta + 2)

                # Rewarded state.
                state1 = list(state)
                state1[-1] = 1
                state1[2*i] += 1
                state1 = tuple(state1)
                p_state1 = 1 - p_state0

                try:
                    value = gamma*(util[state0]*p_state0 + util[state1]*p_state1)
                except KeyError,e:
                    print state
                    print state0
                    print state1
                    raise e
                #print state0, util[state0], p_state0
                #print state1, util[state1], p_state1
                values[i] = value

            #print state, values, reward
            new_util[state] = reward + np.max(values)
            opt_actions[state] = np.argmax(values)

        # Consider the difference between the new util
        # and the old util.
        max_diff = np.max(abs(np.array(sorted_values(util)) - np.array(sorted_values(new_util))))
        util = new_util.copy()
        print "Iteration %d, max diff = %.5f" % (j, max_diff)
        if max_diff < conv_crit:
            converged = True
            break

    #pprint(util)
    if converged:
        print "Converged after %d iterations" % j
    else:
        print "Not converged after %d iterations" % max_iter
    return util, opt_actions

util, opt_actions = solve_bmab_value_iteration(2, 2, max_iter=5)
opt_actions
"""
Explanation: Multi-armed bandit as a Markov decision process
Let's model the Bernoulli multi-armed bandit. The Bernoulli MAB is an $N$-armed bandit where each arm gives binary rewards according to some probability:
$r_i \sim Bernoulli(\mu_i)$
Here $i$ is the index of the arm. Let's model this as a Markov decision process. The state is going to be defined as:
$s(t) = (\alpha_1, \beta_1, \ldots, \alpha_N, \beta_N, r_t)$
$\alpha_i$ is the number of successes encountered so far when pulling arm $i$. $\beta_i$ is, similarly, the number of failures encountered when pulling that arm. $r_t$ is the reward, either 0 or 1, from the last trial. Assuming a uniform prior on $\mu_i$, the posterior distribution of the $\mu_i$ in a given state is:
$p(\mu_i|s(t)) = Beta(\alpha_i+1,\beta_i+1)$
When we're in a given state, we have the choice of performing one of $N$ actions, corresponding to pulling each of the arms. Let's call pulling the $i$'th arm $a_i$. This will put us in a new state, with a certain probability. 
The new state will be the same for arms not equal to i. For the $i$'th arm, we have:
$s(t+1) = (\ldots \alpha_i + 1, \beta_i \ldots 1)$ with probability $(\alpha_i+1)/(\alpha_i+\beta_i+2)$
$s(t+1) = (\ldots \alpha_i, \beta_i + 1 \ldots 0)$ with probability $(\beta_i+1)/(\alpha_i+\beta_i+2)$
We can solve this MDP exactly, e.g. using value iteration, for a small enough state space. For $M$ trials, the state space has cardinality $M^{2N}$ - it's possible to solve the 2-armed bandit for 10-20 trials this way, but it grows exponentially fast. Nevertheless, we can use this optimal solution to compare it with commonly used heuristics like $\epsilon$-greedy and UCB and determine how often these pick the optimal moves. Then we'll get some intuitions about what $\epsilon$-greedy and UCB get right and wrong. Let's do it!
End of explanation
"""

util
"""
Explanation: For the 2-armed, 2-trial Bernoulli bandit, the strategy is simple: pick the first arm. If it rewards, then pick it again. If not, pick the other. Note that this is the same as most sensible strategies, for instance $\epsilon$-greedy or UCB.
End of explanation
"""

2*.5*2.0/3.0 + .5/3.0 + .5*.5
"""
Explanation: Note that the utility of the root node is 1.08 - what does that mean? If we get rewarded in the initial trial, that means that the posterior for the mean of that arm is .67. OTOH, when we fail on the first trial, we can still pick the other arm, which still has a posterior mean of .5. Thus, we have rewards:
+2 with probability .5*2/3
+1 with prob .5*1/3
+1 with prob .5*.5
+0 with prob .5*.5
That means the expected total reward is:
End of explanation
"""

util, opt_actions = solve_bmab_value_iteration(2, 3, max_iter=5)
opt_actions
"""
Explanation: And that's what utility means in this context. Let's see about the 3-trial 2-armed bandit:
End of explanation
"""

util, opt_actions = solve_bmab_value_iteration(2, 4, max_iter=6)
"""
Explanation: The optimal strategy goes: pick arm 0. 
If it rewards, pick it again for the next 2 trials. If it doesn't reward, then pick arm 1. If that rewards, keep that one. If it doesn't, pick 0 again.
Let's see with 4:
End of explanation
"""

M_trials = 16
%time util, opt_actions = solve_bmab_value_iteration(2, M_trials, max_iter=M_trials+2)
"""
Explanation: What's interesting here is that value iteration always converges in M_trials + 1 iterations - information only travels backwards through time - much as in Viterbi in the context of HMMs. If we're only interested in the next best action given the current state, it might be possible to iterate backwards through time, starting from the terminal states, throwing away the latest data as we go along -- but let's not get into this premature optimization just yet. Let's see for how many trials we can solve this without crashing my 5-year-old laptop.
End of explanation
"""

# Create a design matrix related to the optimal strategies.
X = []
y = []
seen_keys = set()
for key, val in opt_actions.iteritems():
    if key[:-1] in seen_keys:
        # We've already seen this, continue.
        continue

    alpha0 = float(key[0] + 1)
    beta0 = float(key[1] + 1)
    alpha1 = float(key[2] + 1)
    beta1 = float(key[3] + 1)

    if alpha0 == alpha1 and beta0 == beta1:
        # We're in a perfectly symmetric situation, skip this then.
        continue

    seen_keys.add(key[:-1])

    # Standard results for the Beta distribution.
    # https://en.wikipedia.org/wiki/Beta_distribution
    mean0 = alpha0/(alpha0 + beta0)
    mean1 = alpha1/(alpha1 + beta1)

    std0 = np.sqrt(alpha0*beta0 / (alpha0 + beta0 + 1)) / (alpha0 + beta0)
    std1 = np.sqrt(alpha1*beta1 / (alpha1 + beta1 + 1)) / (alpha1 + beta1)

    t = alpha0 + beta0 + alpha1 + beta1

    X.append([mean0,mean1,std0,std1,t,1,alpha0 - 1,beta0 - 1,alpha1 - 1,beta1 - 1])
    y.append(val)

X = np.array(X)
y = np.array(y)
"""
Explanation: It seems like my laptop can look ahead at least sixteen steps into the future without dying - pretty good! 
Optimal versus UCB Let's try and figure out how the optimal strategy relates to the upper confidence bound (UCB) heuristic. Let's train a logistic regression model with the same inputs as a UCB strategy - mean, standard deviation, time - and see how well it can approximate the optimal strategy. End of explanation """ from sklearn.linear_model import LogisticRegression the_model = LogisticRegression(C=100.0) X_ = X[:,:2] the_model.fit(X_,y) y_pred = the_model.predict(X_) print ("Greedy: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100)) print the_model.coef_ the_model = LogisticRegression(C=100.0) X_ = X[:,:4] the_model.fit(X_,y) y_pred = the_model.predict(X_) print ("UCB: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100)) print the_model.coef_ the_model = LogisticRegression(C=100000.0) X_ = X[:,:4] X_ = np.hstack((X_,(X[:,4]).reshape((-1,1))*X[:,2:4])) the_model.fit(X_,y) y_pred = the_model.predict(X_) print ("UCB X time: %.4f%% of moves are incorrect" % ((np.mean(abs(y_pred-y)))*100)) print the_model.coef_ """ Explanation: Let's train three supervised networks: a purely myopic, greedy strategy one which uses the uncertainty in the estimates one which uses both uncertainty and number of trials left to hedge its bets End of explanation """
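For comparison with the optimal policy probed above, here is a minimal sketch of the UCB1 heuristic itself — standard library only, with made-up arm probabilities; a rough illustration rather than a tuned implementation:

```python
import math
import random

def ucb1_choice(successes, failures, t):
    """Pick the arm maximizing mean + sqrt(2 ln t / n); untried arms first."""
    best, best_score = 0, -float('inf')
    for i in range(len(successes)):
        n = successes[i] + failures[i]
        if n == 0:
            return i  # always try an untried arm first
        score = successes[i] / float(n) + math.sqrt(2.0 * math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best

def run_ucb1(mus, n_trials, seed=0):
    rng = random.Random(seed)
    successes = [0] * len(mus)
    failures = [0] * len(mus)
    for t in range(1, n_trials + 1):
        arm = ucb1_choice(successes, failures, t)
        if rng.random() < mus[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

successes, failures = run_ucb1([0.3, 0.7], 2000)
# The better arm should be pulled far more often.
print(successes[1] + failures[1] > successes[0] + failures[0])
```

Unlike the value-iteration solution, UCB1 ignores the horizon: it keeps paying for exploration even on the last trial, which is one place the logistic fits above can disagree with the optimal moves.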
enbanuel/phys202-2015-work
assignments/assignment04/MatplotlibEx02.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 2 Imports End of explanation """ !head -n 30 open_exoplanet_catalogue.txt """ Explanation: Exoplanet properties Over the past few decades, astronomers have discovered thousands of extrasolar planets. The following paper describes the properties of some of these planets. http://iopscience.iop.org/1402-4896/2008/T130/014001 Your job is to reproduce Figures 2 and 4 from this paper using an up-to-date dataset of extrasolar planets found on this GitHub repo: https://github.com/OpenExoplanetCatalogue/open_exoplanet_catalogue A text version of the dataset has already been put into this directory. The top of the file has documentation about each column of data: End of explanation """ # YOUR CODE HERE f = np.genfromtxt('open_exoplanet_catalogue.txt', delimiter=',') data = np.array(f) data assert data.shape==(1993,24) """ Explanation: Use np.genfromtxt with a delimiter of ',' to read the data into a NumPy array called data: End of explanation """ # YOUR CODE HERE flat = np.ravel(data) mass = flat[2::24] k = [i for i in mass if str(i) != 'nan'] #I used the user's Lego Stormtrooper advice on Stackoverflow to filter out the nan's type(k[1]) # print(k) f = plt.figure(figsize=(15,6)) plt.hist(k, bins=len(k)) ax = plt.gca() ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) ax.get_xaxis().tick_bottom() ax.get_yaxis().tick_left() plt.xlim(0.0, 14.0) plt.ylim(0.0, 300.0) plt.xlabel('Masses of Planets (M_jup)') plt.ylabel('Number of Planets') assert True # leave for grading """ Explanation: Make a histogram of the distribution of planetary masses. This will reproduce Figure 2 in the original paper. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. Pick the number of bins for the histogram appropriately. 
End of explanation """ # YOUR CODE HERE orbit = flat[6::24] y = [i for i in orbit] print(len(y)) axis = flat[5::24] x = [i for i in axis] print(len(x)) f = plt.figure(figsize=(11,5)) plt.scatter(x, y, color='r', s=20, alpha=0.5) plt.xscale('log') plt.xlabel('Semimajor Axis(log)') plt.ylabel('Orbital Eccentricity') plt.ylim(0.0, 1.0) plt.xlim(0.01, 100) assert True # leave for grading """ Explanation: Make a scatter plot of the orbital eccentricity (y) versus the semimajor axis. This will reproduce Figure 4 of the original paper. Use a log scale on the x axis. Customize your plot to follow Tufte's principles of visualizations. Customize the box, grid, spines and ticks to match the requirements of this data. End of explanation """
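An aside on the nan filtering used above: a boolean mask via `np.isnan` does the same job more directly, and extends naturally to keeping only rows where several columns are all present. A small sketch on fake data, not the actual catalogue columns:

```python
import numpy as np

# Fake column with missing values, standing in for the mass column.
mass = np.array([0.5, np.nan, 1.2, np.nan, 3.4])

# np.isnan gives a boolean mask; ~ inverts it.
clean = mass[~np.isnan(mass)]
print(clean)  # [0.5 1.2 3.4]

# For pairs of columns (e.g. eccentricity vs semimajor axis), keep only
# rows where both values are present:
ecc = np.array([0.1, 0.2, np.nan, 0.4, 0.5])
ok = ~np.isnan(mass) & ~np.isnan(ecc)
print(mass[ok], ecc[ok])
```

This avoids the string comparison trick and keeps the result as a numpy array instead of a Python list.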
kubeflow/kfp-tekton-backend
samples/core/lightweight_component/lightweight_component.ipynb
apache-2.0
# Install the SDK
#!pip3 install 'kfp>=0.1.31.2' --quiet
import kfp
import kfp.components as comp
"""
Explanation: Lightweight python components
Lightweight python components do not require you to build a new container image for every code change. They're intended for fast iteration in a notebook environment.
Building a lightweight python component
To build a component, just define a stand-alone python function and then call kfp.components.func_to_container_op(func) to convert it to a component that can be used in a pipeline.
There are several requirements for the function:
* The function should be stand-alone. It should not use any code declared outside of the function definition. Any imports should be added inside the main function. Any helper functions should also be defined inside the main function.
* The function can only import packages that are available in the base image. If you need to import a package that's not available, you can try to find a container image that already includes the required packages. (As a workaround, you can use the module subprocess to run pip install for the required package. There is an example below in the my_divmod function.)
* If the function operates on numbers, the parameters need to have type hints. Supported types are [int, float, bool]. Everything else is passed as a string.
* To build a component with multiple output values, use the typing.NamedTuple type hint syntax: NamedTuple('MyFunctionOutputs', [('output_name_1', type), ('output_name_2', float)])
End of explanation
"""
#Define a Python function
def add(a: float, b: float) -> float:
    '''Calculates sum of two arguments'''
    return a + b
"""
Explanation: A simple function that just adds two numbers:
End of explanation
"""
add_op = comp.func_to_container_op(add)
"""
Explanation: Convert the function to a pipeline operation
End of explanation
"""
#Advanced function
#Demonstrates imports, helper functions and multiple outputs
from typing import NamedTuple
def my_divmod(dividend: float, divisor:float) -> NamedTuple('MyDivmodOutput', [('quotient', float), ('remainder', float), ('mlpipeline_ui_metadata', 'UI_metadata'), ('mlpipeline_metrics', 'Metrics')]):
    '''Divides two numbers and calculates the quotient and remainder'''

    #Pip installs inside a component function.
    #NOTE: installs should be placed right at the beginning to avoid upgrading a package
    # after it has already been imported and cached by Python
    import sys, subprocess;
    subprocess.run([sys.executable, '-m', 'pip', 'install', 'tensorflow==1.8.0'])

    #Imports inside a component function:
    import numpy as np

    #This function demonstrates how to use nested functions inside a component function:
    def divmod_helper(dividend, divisor):
        return np.divmod(dividend, divisor)

    (quotient, remainder) = divmod_helper(dividend, divisor)

    from tensorflow.python.lib.io import file_io
    import json

    # Exports a sample tensorboard:
    metadata = {
      'outputs' : [{
        'type': 'tensorboard',
        'source': 'gs://ml-pipeline-dataset/tensorboard-train',
      }]
    }

    # Exports two sample metrics:
    metrics = {
      'metrics': [{
          'name': 'quotient',
          'numberValue':  float(quotient),
        },{
          'name': 'remainder',
          'numberValue':  float(remainder),
        }]}

    from collections import namedtuple
    divmod_output = namedtuple('MyDivmodOutput', ['quotient', 'remainder', 'mlpipeline_ui_metadata', 'mlpipeline_metrics'])
    return
divmod_output(quotient, remainder, json.dumps(metadata), json.dumps(metrics))
"""
Explanation: A bit more advanced function, which demonstrates how to use imports and helper functions, and how to produce multiple outputs.
End of explanation
"""
my_divmod(100, 7)
"""
Explanation: Test running the python function directly
End of explanation
"""
divmod_op = comp.func_to_container_op(my_divmod, base_image='tensorflow/tensorflow:1.11.0-py3')
"""
Explanation: Convert the function to a pipeline operation
You can specify an alternative base container image (the image needs to have Python 3.5+ installed).
End of explanation
"""
import kfp.dsl as dsl
@dsl.pipeline(
   name='Calculation pipeline',
   description='A toy pipeline that performs arithmetic calculations.'
)
def calc_pipeline(
   a='a',
   b='7',
   c='17',
):
    #Passing pipeline parameter and a constant value as operation arguments
    add_task = add_op(a, 4) #Returns a dsl.ContainerOp class instance.

    #Passing a task output reference as operation arguments
    #For an operation with a single return value, the output reference can be accessed using `task.output` or `task.outputs['output_name']` syntax
    divmod_task = divmod_op(add_task.output, b)

    #For an operation with multiple return values, the output references can be accessed using `task.outputs['output_name']` syntax
    result_task = add_op(divmod_task.outputs['quotient'], c)
"""
Explanation: Define the pipeline
The pipeline function has to be decorated with the @dsl.pipeline decorator
End of explanation
"""
#Specify pipeline argument values
arguments = {'a': '7', 'b': '8'}

#Submit a pipeline run
kfp.Client().create_run_from_pipeline_func(calc_pipeline, arguments=arguments)

# Run the pipeline on a separate Kubeflow Cluster instead
# (use if your notebook is not running in Kubeflow - e.g. if using AI Platform Notebooks)
# kfp.Client(host='<ADD KFP ENDPOINT HERE>').create_run_from_pipeline_func(calc_pipeline, arguments=arguments)

#vvvvvvvvv This link leads to the run information page. 
(Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working) """ Explanation: Submit the pipeline for execution End of explanation """
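The multi-output convention that my_divmod relies on is plain-Python NamedTuple semantics, which can be illustrated without any Kubeflow infrastructure. The names below are just for this sketch:

```python
from typing import NamedTuple

class DivmodOutput(NamedTuple):
    quotient: float
    remainder: float

def plain_divmod(dividend: float, divisor: float) -> DivmodOutput:
    quotient, remainder = divmod(dividend, divisor)
    return DivmodOutput(float(quotient), float(remainder))

result = plain_divmod(100, 7)
# Fields are accessible by name, which is what lets the pipeline refer
# to task.outputs['quotient'] downstream.
print(result.quotient, result.remainder)  # 14.0 2.0
```

The SDK inspects the annotated return type to name each output, so the field names you choose here become the keys in `task.outputs`.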
tensorflow/docs-l10n
site/en-snapshot/model_optimization/guide/quantization/training_comprehensive_guide.ipynb
apache-2.0
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: Copyright 2020 The TensorFlow Authors. End of explanation """ ! pip uninstall -y tensorflow ! pip install -q tf-nightly ! pip install -q tensorflow-model-optimization import tensorflow as tf import numpy as np import tensorflow_model_optimization as tfmot import tempfile input_shape = [20] x_train = np.random.randn(1, 20).astype(np.float32) y_train = tf.keras.utils.to_categorical(np.random.randn(1), num_classes=20) def setup_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(20, input_shape=input_shape), tf.keras.layers.Flatten() ]) return model def setup_pretrained_weights(): model= setup_model() model.compile( loss=tf.keras.losses.categorical_crossentropy, optimizer='adam', metrics=['accuracy'] ) model.fit(x_train, y_train) _, pretrained_weights = tempfile.mkstemp('.tf') model.save_weights(pretrained_weights) return pretrained_weights def setup_pretrained_model(): model = setup_model() pretrained_weights = setup_pretrained_weights() model.load_weights(pretrained_weights) return model setup_model() pretrained_weights = setup_pretrained_weights() """ Explanation: Quantization aware training comprehensive guide <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" 
href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/quantization/training_comprehensive_guide.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Welcome to the comprehensive guide for Keras quantization aware training. This page documents various use cases and shows how to use the API for each one. Once you know which APIs you need, find the parameters and the low-level details in the API docs. If you want to see the benefits of quantization aware training and what's supported, see the overview. For a single end-to-end example, see the quantization aware training example. The following use cases are covered: Deploy a model with 8-bit quantization with these steps. Define a quantization aware model. For Keras HDF5 models only, use special checkpointing and deserialization logic. Training is otherwise standard. Create a quantized model from the quantization aware one. Experiment with quantization. Anything for experimentation has no supported path to deployment. Custom Keras layers fall under experimentation. Setup For finding the APIs you need and understanding purposes, you can run but skip reading this section. 
End of explanation """ base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) quant_aware_model.summary() """ Explanation: Define quantization aware model By defining models in the following ways, there are available paths to deployment to backends listed in the overview page. By default, 8-bit quantization is used. Note: a quantization aware model is not actually quantized. Creating a quantized model is a separate step. Quantize whole model Your use case: * Subclassed models are not supported. Tips for better model accuracy: Try "Quantize some layers" to skip quantizing the layers that reduce accuracy the most. It's generally better to finetune with quantization aware training as opposed to training from scratch. To make the whole model aware of quantization, apply tfmot.quantization.keras.quantize_model to the model. End of explanation """ # Create a base model base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy # Helper function uses `quantize_annotate_layer` to annotate that only the # Dense layers should be quantized. def apply_quantization_to_dense(layer): if isinstance(layer, tf.keras.layers.Dense): return tfmot.quantization.keras.quantize_annotate_layer(layer) return layer # Use `tf.keras.models.clone_model` to apply `apply_quantization_to_dense` # to the layers of the model. annotated_model = tf.keras.models.clone_model( base_model, clone_function=apply_quantization_to_dense, ) # Now that the Dense layers are annotated, # `quantize_apply` actually makes the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() """ Explanation: Quantize some layers Quantizing a model can have a negative effect on accuracy. 
You can selectively quantize layers of a model to explore the trade-off between accuracy, speed, and model size. Your use case: * To deploy to a backend that only works well with fully quantized models (e.g. EdgeTPU v1, most DSPs), try "Quantize whole model". Tips for better model accuracy: * It's generally better to finetune with quantization aware training as opposed to training from scratch. * Try quantizing the later layers instead of the first layers. * Avoid quantizing critical layers (e.g. attention mechanism). In the example below, quantize only the Dense layers. End of explanation """ print(base_model.layers[0].name) """ Explanation: While this example used the type of the layer to decide what to quantize, the easiest way to quantize a particular layer is to set its name property, and look for that name in the clone_function. End of explanation """ # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. i = tf.keras.Input(shape=(20,)) x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i) o = tf.keras.layers.Flatten()(x) annotated_model = tf.keras.Model(inputs=i, outputs=o) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) # For deployment purposes, the tool adds `QuantizeLayer` after `InputLayer` so that the # quantized model can take in float inputs instead of only uint8. quant_aware_model.summary() """ Explanation: More readable but potentially lower model accuracy This is not compatible with finetuning with quantization aware training, which is why it may be less accurate than the above examples. Functional example End of explanation """ # Use `quantize_annotate_layer` to annotate that the `Dense` layer # should be quantized. 
annotated_model = tf.keras.Sequential([ tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=input_shape)), tf.keras.layers.Flatten() ]) # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model) quant_aware_model.summary() """ Explanation: Sequential example End of explanation """ # Define the model. base_model = setup_model() base_model.load_weights(pretrained_weights) # optional but recommended for model accuracy quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Save or checkpoint the model. _, keras_model_file = tempfile.mkstemp('.h5') quant_aware_model.save(keras_model_file) # `quantize_scope` is needed for deserializing HDF5 models. with tfmot.quantization.keras.quantize_scope(): loaded_model = tf.keras.models.load_model(keras_model_file) loaded_model.summary() """ Explanation: Checkpoint and deserialize Your use case: this code is only needed for the HDF5 model format (not HDF5 weights or other formats). End of explanation """ base_model = setup_pretrained_model() quant_aware_model = tfmot.quantization.keras.quantize_model(base_model) # Typically you train the model here. converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model) converter.optimizations = [tf.lite.Optimize.DEFAULT] quantized_tflite_model = converter.convert() """ Explanation: Create and deploy quantized model In general, reference the documentation for the deployment backend that you will use. This is an example for the TFLite backend. End of explanation """ LastValueQuantizer = tfmot.quantization.keras.quantizers.LastValueQuantizer MovingAverageQuantizer = tfmot.quantization.keras.quantizers.MovingAverageQuantizer class DefaultDenseQuantizeConfig(tfmot.quantization.keras.QuantizeConfig): # Configure how to quantize weights. 
def get_weights_and_quantizers(self, layer): return [(layer.kernel, LastValueQuantizer(num_bits=8, symmetric=True, narrow_range=False, per_axis=False))] # Configure how to quantize activations. def get_activations_and_quantizers(self, layer): return [(layer.activation, MovingAverageQuantizer(num_bits=8, symmetric=False, narrow_range=False, per_axis=False))] def set_quantize_weights(self, layer, quantize_weights): # Add this line for each item returned in `get_weights_and_quantizers` # , in the same order layer.kernel = quantize_weights[0] def set_quantize_activations(self, layer, quantize_activations): # Add this line for each item returned in `get_activations_and_quantizers` # , in the same order. layer.activation = quantize_activations[0] # Configure how to quantize outputs (may be equivalent to activations). def get_output_quantizers(self, layer): return [] def get_config(self): return {} """ Explanation: Experiment with quantization Your use case: using the following APIs means that there is no supported path to deployment. For instance, TFLite conversion and kernel implementations only support 8-bit quantization. The features are also experimental and not subject to backward compatibility. * tfmot.quantization.keras.QuantizeConfig * tfmot.quantization.keras.quantizers.Quantizer * tfmot.quantization.keras.quantizers.LastValueQuantizer * tfmot.quantization.keras.quantizers.MovingAverageQuantizer Setup: DefaultDenseQuantizeConfig Experimenting requires using tfmot.quantization.keras.QuantizeConfig, which describes how to quantize the weights, activations, and outputs of a layer. Below is an example that defines the same QuantizeConfig used for the Dense layer in the API defaults. During the forward propagation in this example, the LastValueQuantizer returned in get_weights_and_quantizers is called with layer.kernel as the input, producing an output. 
The output replaces layer.kernel in the original forward propagation of the Dense layer, via the logic defined in set_quantize_weights. The same idea applies to the activations and outputs. End of explanation """ quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class CustomLayer(tf.keras.layers.Dense): pass model = quantize_annotate_model(tf.keras.Sequential([ quantize_annotate_layer(CustomLayer(20, input_shape=(20,)), DefaultDenseQuantizeConfig()), tf.keras.layers.Flatten() ])) # `quantize_apply` requires mentioning `DefaultDenseQuantizeConfig` with `quantize_scope` # as well as the custom Keras layer. with quantize_scope( {'DefaultDenseQuantizeConfig': DefaultDenseQuantizeConfig, 'CustomLayer': CustomLayer}): # Use `quantize_apply` to actually make the model quantization aware. quant_aware_model = tfmot.quantization.keras.quantize_apply(model) quant_aware_model.summary() """ Explanation: Quantize custom Keras layer This example uses the DefaultDenseQuantizeConfig to quantize the CustomLayer. Applying the configuration is the same across the "Experiment with quantization" use cases. * Apply tfmot.quantization.keras.quantize_annotate_layer to the CustomLayer and pass in the QuantizeConfig. * Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults. End of explanation """ quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model quantize_scope = tfmot.quantization.keras.quantize_scope class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig): # Configure weights to quantize with 4-bit instead of 8-bits. 
    def get_weights_and_quantizers(self, layer):
        return [(layer.kernel, LastValueQuantizer(num_bits=4, symmetric=True, narrow_range=False, per_axis=False))]
"""
Explanation: Modify quantization parameters
Common mistake: quantizing the bias to fewer than 32-bits usually harms model accuracy too much.
This example modifies the Dense layer to use 4-bits for its weights instead of the default 8-bits. The rest of the model continues to use API defaults.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
    # Pass in modified `QuantizeConfig` to modify this Dense layer.
    quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
    tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
        {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
    # Use `quantize_apply` to actually make the model quantization aware.
    quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    def get_activations_and_quantizers(self, layer):
        # Skip quantizing activations.
        return []

    def set_quantize_activations(self, layer, quantize_activations):
        # Empty since `get_activations_and_quantizers` returns
        # an empty list.
        return
"""
Explanation: Modify parts of layer to quantize
This example modifies the Dense layer to skip quantizing the activation. The rest of the model continues to use API defaults.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
    # Pass in modified `QuantizeConfig` to modify this Dense layer.
    quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
    tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
        {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
    # Use `quantize_apply` to actually make the model quantization aware.
    quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
quantize_annotate_layer = tfmot.quantization.keras.quantize_annotate_layer
quantize_annotate_model = tfmot.quantization.keras.quantize_annotate_model
quantize_scope = tfmot.quantization.keras.quantize_scope

class FixedRangeQuantizer(tfmot.quantization.keras.quantizers.Quantizer):
    """Quantizer which forces outputs to be between -1 and 1."""

    def build(self, tensor_shape, name, layer):
        # Not needed. No new TensorFlow variables needed.
        return {}

    def __call__(self, inputs, training, weights, **kwargs):
        return tf.keras.backend.clip(inputs, -1.0, 1.0)

    def get_config(self):
        # Not needed. No __init__ parameters to serialize.
        return {}

class ModifiedDenseQuantizeConfig(DefaultDenseQuantizeConfig):
    # Configure weights to quantize with the custom quantizer defined above.
    def get_weights_and_quantizers(self, layer):
        # Use custom algorithm defined in `FixedRangeQuantizer` instead of default Quantizer.
        return [(layer.kernel, FixedRangeQuantizer())]
"""
Explanation: Use custom quantization algorithm
The tfmot.quantization.keras.quantizers.Quantizer class is a callable that can apply any algorithm to its inputs.
In this example, the inputs are the weights, and we apply the math in the FixedRangeQuantizer __call__ function to the weights. Instead of the original weight values, the output of the FixedRangeQuantizer is now passed to whatever would have used the weights.
End of explanation
"""
model = quantize_annotate_model(tf.keras.Sequential([
    # Pass in modified `QuantizeConfig` to modify this `Dense` layer.
    quantize_annotate_layer(tf.keras.layers.Dense(20, input_shape=(20,)), ModifiedDenseQuantizeConfig()),
    tf.keras.layers.Flatten()
]))

# `quantize_apply` requires mentioning `ModifiedDenseQuantizeConfig` with `quantize_scope`:
with quantize_scope(
        {'ModifiedDenseQuantizeConfig': ModifiedDenseQuantizeConfig}):
    # Use `quantize_apply` to actually make the model quantization aware.
    quant_aware_model = tfmot.quantization.keras.quantize_apply(model)

quant_aware_model.summary()
"""
Explanation: Applying the configuration is the same across the "Experiment with quantization" use cases.
* Apply tfmot.quantization.keras.quantize_annotate_layer to the Dense layer and pass in the QuantizeConfig.
* Use tfmot.quantization.keras.quantize_annotate_model to continue to quantize the rest of the model with the API defaults.
End of explanation
"""
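Stripped of the TensorFlow machinery, the clipping math that FixedRangeQuantizer applies can be sketched in plain Python. The helper name below is illustrative only, not part of the TFMOT API:

```python
def fixed_range_quantize(values, low=-1.0, high=1.0):
    """Pure-Python mirror of the clipping in FixedRangeQuantizer.__call__."""
    # Each value is clamped into [low, high]; in-range values pass through unchanged.
    return [min(max(v, low), high) for v in values]

print(fixed_range_quantize([-2.5, -0.3, 0.0, 0.7, 1.9]))  # → [-1.0, -0.3, 0.0, 0.7, 1.0]
```

This is the whole "algorithm": anything a Quantizer's `__call__` computes from its inputs replaces the original weights in the forward pass.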
luchorivera/Prueba
Kaggle_Panda_Curso.ipynb
mit
import pandas as pd
"""
Explanation: <a href="https://colab.research.google.com/github/luchorivera/Prueba/blob/master/Kaggle_Panda_Curso.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
https://www.kaggle.com/residentmario/creating-reading-and-writing
Pandas Home Page
End of explanation
"""
pd.DataFrame({'Yes': [50, 21], 'No': [131, 2]})
"""
Explanation: Creating data
There are two core objects in pandas: the DataFrame and the Series.
DataFrame
A DataFrame is a table. It contains an array of individual entries, each of which has a certain value. Each entry corresponds to a row (or record) and a column.
For example, consider the following simple DataFrame:
End of explanation
"""
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']})
"""
Explanation: In this example, the "0, No" entry has the value of 131. The "0, Yes" entry has a value of 50, and so on.
DataFrame entries are not limited to integers. For instance, here's a DataFrame whose values are strings:
End of explanation
"""
pd.DataFrame({'Bob': ['I liked it.', 'It was awful.'], 'Sue': ['Pretty good.', 'Bland.']},
             index=['Product A', 'Product B'])
"""
Explanation: The dictionary-list constructor assigns values to the column labels, but just uses an ascending count from 0 (0, 1, 2, 3, ...) for the row labels. Sometimes this is OK, but oftentimes we will want to assign these labels ourselves.
The list of row labels used in a DataFrame is known as an Index. We can assign values to it by using an index parameter in our constructor:
End of explanation
"""
pd.Series([1, 2, 3, 4, 5])
"""
Explanation: Series
A Series, by contrast, is a sequence of data values. If a DataFrame is a table, a Series is a list.
And in fact you can create one with nothing more than a list:
End of explanation
"""
pd.Series([30, 35, 40], index=['2015 Sales', '2016 Sales', '2017 Sales'], name='Product A')
"""
Explanation: A Series is, in essence, a single column of a DataFrame. So you can assign column values to the Series the same way as before, using an index parameter. However, a Series does not have a column name, it only has one overall name:
End of explanation
"""
# wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv")
"""
Explanation: The Series and the DataFrame are intimately related. It's helpful to think of a DataFrame as actually being just a bunch of Series "glued together". We'll see more of this in the next section of this tutorial.
Reading data files
Being able to create a DataFrame or Series by hand is handy. But, most of the time, we won't actually be creating our own data by hand. Instead, we'll be working with data that already exists.
Data can be stored in any of a number of different forms and formats. By far the most basic of these is the humble CSV file. When you open a CSV file you get something that looks like this:
Product A,Product B,Product C,
30,21,9,
35,34,1,
41,11,11
So a CSV file is a table of values separated by commas. Hence the name: "Comma-Separated Values", or CSV.
Let's now set aside our toy datasets and see what a real dataset looks like when we read it into a DataFrame. We'll use the pd.read_csv() function to read the data into a DataFrame.
This goes thusly:
End of explanation
"""
wine_reviews = pd.read_csv("winemag-data-130k-v2.csv")
"""
Explanation: To download the data: https://www.kaggle.com/luisrivera/exercise-creating-reading-and-writing/edit
To upload the data to Google Colaboratory: D:\kaggle\Cursos\Panda
End of explanation
"""
wine_reviews.shape
"""
Explanation: We can use the shape attribute to check how large the resulting DataFrame is:
End of explanation
"""
wine_reviews.head()
"""
Explanation: So our new DataFrame has 130,000 records split across 14 different columns. That's almost 2 million entries!
We can examine the contents of the resultant DataFrame using the head() command, which grabs the first five rows:
End of explanation
"""
# wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
# wine_reviews.head()
wine_reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0)
wine_reviews.head()
"""
Explanation: The pd.read_csv() function is well-endowed, with over 30 optional parameters you can specify. For example, you can see in this dataset that the CSV file has a built-in index, which pandas did not pick up on automatically. To make pandas use that column for the index (instead of creating a new one from scratch), we can specify an index_col.
End of explanation
"""
import pandas as pd
pd.set_option('display.max_rows', 5)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.creating_reading_and_writing import *
# print("Setup complete.")
"""
Explanation: To practice directly on Kaggle: https://www.kaggle.com/luisrivera/exercise-creating-reading-and-writing/edit
End of explanation
"""
# Your code goes here. Create a dataframe matching the above diagram and assign it to the variable fruits.
fruits = pd.DataFrame({'Apples': [30], 'Bananas': [21]})
# q1.check()
fruits

fruits = pd.DataFrame([[30, 21]], columns=['Apples', 'Bananas'])
fruits
"""
Explanation: 1.
In the cell below, create a DataFrame fruits that looks like this:
End of explanation
"""
fruit_sales = pd.DataFrame({'Apples': [35, 41], 'Bananas': [21, 34]},
                           index=['2017 Sales', '2018 Sales'])
fruit_sales
"""
Explanation: 2.
Create a dataframe fruit_sales that matches the diagram below:
End of explanation
"""
quantities = ['4 cups', '1 cup', '2 large', '1 can']
items = ['Flour', 'Milk', 'Eggs', 'Spam']
recipe = pd.Series(quantities, index=items, name='Dinner')
recipe
"""
Explanation: 3.
Create a variable ingredients with a Series that looks like:
Flour 4 cups
Milk 1 cup
Eggs 2 large
Spam 1 can
Name: Dinner, dtype: object
End of explanation
"""
# reviews = pd.read_csv("../input/wine-reviews/winemag-data_first150k.csv", index_col=0)
reviews = pd.read_csv("winemag-data_first150k.csv", index_col=0)
reviews
"""
Explanation: 4.
Read the following csv dataset of wine reviews into a DataFrame called reviews:
The filepath to the csv file is ../input/wine-reviews/winemag-data_first150k.csv. The first few lines look like:
,country,description,designation,points,price,province,region_1,region_2,variety,winery
0,US,"This tremendous 100% varietal wine[...]",Martha's Vineyard,96,235.0,California,Napa Valley,Napa,Cabernet Sauvignon,Heitz
1,Spain,"Ripe aromas of fig, blackberry and[...]",Carodorum Selección Especial Reserva,96,110.0,Northern Spain,Toro,,Tinta de Toro,Bodega Carmen Rodríguez
End of explanation
"""
animals = pd.DataFrame({'Cows': [12, 20], 'Goats': [22, 19]}, index=['Year 1', 'Year 2'])
animals
"""
Explanation: 5.
Run the cell below to create and display a DataFrame called animals:
End of explanation
"""
animals.to_csv("cows_and_goats.csv")
"""
Explanation: In the cell below, write code to save this DataFrame to disk as a csv file with the name cows_and_goats.csv.
End of explanation
"""
reviews
"""
Explanation: https://www.kaggle.com/residentmario/indexing-selecting-assigning
Naive accessors
Native Python objects provide good ways of indexing data.
Pandas carries all of these over, which helps make it easy to start with. Consider this DataFrame: End of explanation """ reviews.country """ Explanation: In Python, we can access the property of an object by accessing it as an attribute. A book object, for example, might have a title property, which we can access by calling book.title. Columns in a pandas DataFrame work in much the same way. Hence to access the country property of reviews we can use: End of explanation """ reviews['country'] """ Explanation: If we have a Python dictionary, we can access its values using the indexing ([]) operator. We can do the same with columns in a DataFrame: End of explanation """ reviews['country'][0] """ Explanation: These are the two ways of selecting a specific Series out of a DataFrame. Neither of them is more or less syntactically valid than the other, but the indexing operator [] does have the advantage that it can handle column names with reserved characters in them (e.g. if we had a country providence column, reviews.country providence wouldn't work). Doesn't a pandas Series look kind of like a fancy dictionary? It pretty much is, so it's no surprise that, to drill down to a single specific value, we need only use the indexing operator [] once more: End of explanation """ reviews.iloc[0] """ Explanation: Indexing in pandas The indexing operator and attribute selection are nice because they work just like they do in the rest of the Python ecosystem. As a novice, this makes them easy to pick up and use. However, pandas has its own accessor operators, loc and iloc. For more advanced operations, these are the ones you're supposed to be using. Index-based selection Pandas indexing works in one of two paradigms. The first is index-based selection: selecting data based on its numerical position in the data. iloc follows this paradigm. 
To select the first row of data in a DataFrame, we may use the following:
End of explanation
"""
reviews.iloc[0]
"""
Explanation: Both loc and iloc are row-first, column-second. This is the opposite of what we do in native Python, which is column-first, row-second.
This means that it's marginally easier to retrieve rows, and marginally harder to retrieve columns. To get a column with iloc, we can do the following:
End of explanation
"""
reviews.iloc[:, 0]
"""
Explanation: On its own, the : operator, which also comes from native Python, means "everything". When combined with other selectors, however, it can be used to indicate a range of values. For example, to select the country column from just the first, second, and third row, we would do:
End of explanation
"""
reviews.iloc[:3, 0]
"""
Explanation: Or, to select just the second and third entries, we would do:
End of explanation
"""
reviews.iloc[1:3, 0]
"""
Explanation: It's also possible to pass a list:
End of explanation
"""
reviews.iloc[[0, 1, 2], 0]
"""
Explanation: Finally, it's worth knowing that negative numbers can be used in selection. This will start counting forwards from the end of the values. So for example here are the last five elements of the dataset.
End of explanation
"""
reviews.iloc[-5:]
"""
Explanation: Label-based selection
The second paradigm for attribute selection is the one followed by the loc operator: label-based selection. In this paradigm, it's the data index value, not its position, which matters. For example, to get the first entry in reviews, we would now do the following:
End of explanation
"""
reviews.loc[0, 'country']
"""
Explanation: iloc is conceptually simpler than loc because it ignores the dataset's indices. When we use iloc we treat the dataset like a big matrix (a list of lists), one that we have to index into by position. loc, by contrast, uses the information in the indices to do its work.
Since your dataset usually has meaningful indices, it's typically easier to do things using loc instead. For example, here's one operation that's much easier using loc:
End of explanation
"""
reviews.loc[:, ['taster_name', 'taster_twitter_handle', 'points']]
"""
Explanation: Choosing between loc and iloc
When choosing or transitioning between loc and iloc, there is one "gotcha" worth keeping in mind, which is that the two methods use slightly different indexing schemes.
iloc uses the Python stdlib indexing scheme, where the first element of the range is included and the last one excluded. So 0:10 will select entries 0,...,9. loc, meanwhile, indexes inclusively. So 0:10 will select entries 0,...,10.
Why the change? Remember that loc can index any stdlib type: strings, for example. If we have a DataFrame with index values Apples, ..., Potatoes, ..., and we want to select "all the alphabetical fruit choices between Apples and Potatoes", then it's a lot more convenient to index df.loc['Apples':'Potatoet'] (t coming after s in the alphabet) than it is to spell out the exact last label.
This is particularly confusing when the DataFrame index is a simple numerical list, e.g. 0,...,1000. In this case df.iloc[0:1000] will return 1000 entries, while df.loc[0:1000] will return 1001 of them! To get 1000 elements using loc, you will need to go one lower and ask for df.loc[0:999].
Otherwise, the semantics of using loc are the same as those for iloc.
Manipulating the index
Label-based selection derives its power from the labels in the index. Critically, the index we use is not immutable. We can manipulate the index in any way we see fit.
The set_index() method can be used to do the job. Here is what happens when we set_index to the title field:
End of explanation
"""
# reviews.set_index("title")  # raises an error: that column does not exist in this file
reviews.set_index("variety")
"""
Explanation: This is useful if you can come up with an index for the dataset which is better than the current one.
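As a quick self-contained sketch of set_index (toy data, not the wine reviews):

```python
import pandas as pd

# Toy frame: the 'title' column makes a more meaningful index than the default 0, 1, 2.
df = pd.DataFrame({'title': ['Wine A', 'Wine B'], 'points': [87, 92]})
indexed = df.set_index('title')

# Label-based lookups now work directly on the title.
print(indexed.loc['Wine B', 'points'])  # → 92
```

Note that set_index returns a new DataFrame rather than modifying df in place.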
Conditional selection
So far we've been indexing various strides of data, using structural properties of the DataFrame itself. To do interesting things with the data, however, we often need to ask questions based on conditions.
For example, suppose that we're interested specifically in better-than-average wines produced in Italy.
We can start by checking if each wine is Italian or not:
End of explanation
"""
reviews.country == 'Italy'
"""
Explanation: This operation produced a Series of True/False booleans based on the country of each record. This result can then be used inside of loc to select the relevant data:
End of explanation
"""
reviews.loc[reviews.country == 'Italy']
"""
Explanation: This DataFrame has ~20,000 rows. The original had ~130,000. That means that around 15% of wines originate from Italy.
We also wanted to know which ones are better than average. Wines are reviewed on an 80-to-100 point scale, so this could mean wines that accrued at least 90 points.
We can use the ampersand (&) to bring the two questions together:
End of explanation
"""
reviews.loc[(reviews.country == 'Italy') & (reviews.points >= 90)]
"""
Explanation: Suppose we'll buy any wine that's made in Italy or which is rated above average. For this we use a pipe (|):
End of explanation
"""
reviews.loc[(reviews.country == 'Italy') | (reviews.points >= 90)]
"""
Explanation: Pandas comes with a few built-in conditional selectors, two of which we will highlight here.
The first is isin. isin lets you select data whose value "is in" a list of values. For example, here's how we can use it to select wines only from Italy or France:
End of explanation
"""
reviews.loc[reviews.country.isin(['Italy', 'France'])]
"""
Explanation: The second is isnull (and its companion notnull). These methods let you highlight values which are (or are not) empty (NaN).
For example, to filter out wines lacking a price tag in the dataset, here's what we would do:
End of explanation
"""
reviews.loc[reviews.price.notnull()]
"""
Explanation: Assigning data
Going the other way, assigning data to a DataFrame is easy. You can assign either a constant value:
End of explanation
"""
reviews['critic'] = 'everyone'
reviews['critic']
"""
Explanation: Or with an iterable of values:
End of explanation
"""
reviews['index_backwards'] = range(len(reviews), 0, -1)
reviews['index_backwards']
"""
Explanation: https://www.kaggle.com/luisrivera/exercise-indexing-selecting-assigning/edit
Exercises
Run the following cell to load your data and some utility functions (including code to check your answers).
End of explanation
"""
import pandas as pd

# reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0)
pd.set_option("display.max_rows", 5)
# from learntools.core import binder; binder.bind(globals())
# from learntools.pandas.indexing_selecting_and_assigning import *
# print("Setup complete.")
"""
Explanation: Look at an overview of your data by running the following line.
End of explanation
"""
reviews.head()
"""
Explanation: # Exercises
1.
Select the description column from reviews and assign the result to the variable desc.
End of explanation
"""
desc = reviews['description']
desc = reviews.description
# or
# desc = reviews["description"]

# desc is a pandas Series object, with an index matching the reviews DataFrame.
# In general, when we select a single column from a DataFrame, we'll get a Series.
desc.head(10)
"""
Explanation: 2.
Select the first value from the description column of reviews, assigning it to variable first_description.
End of explanation
"""
first_description = reviews["description"][0]
# q2.check()
first_description

# Solution:
first_description = reviews.description.iloc[0]
# Note that while this is the preferred way to obtain the entry in the DataFrame, many other
# options will return a valid result, such as reviews.description.loc[0],
# reviews.description[0], and more!
first_description
"""
Explanation: 3.
Select the first row of data (the first record) from reviews, assigning it to the variable first_row.
End of explanation
"""
first_row = reviews.iloc[0]
# q3.check()
first_row

# Solution:
first_row = reviews.iloc[0]
"""
Explanation: 4.
Select the first 10 values from the description column in reviews, assigning the result to variable first_descriptions.
Hint: format your output as a pandas Series.
End of explanation
"""
first_descriptions = reviews.iloc[:10, 1]
# first_descriptions = reviews.description.iloc[0:9]
# first_descriptions = reviews.description.loc[0:9, 'description']
# q4.check()
first_descriptions

# Solution:
first_descriptions = reviews.description.iloc[:10]
# Note that many other options will return a valid result, such as desc.head(10)
# and reviews.loc[:9, "description"].
first_descriptions
"""
Explanation: 5.
Select the records with index labels 1, 2, 3, 5, and 8, assigning the result to the variable sample_reviews.
In other words, generate the following DataFrame:
End of explanation
"""
sample_reviews = reviews.iloc[[1, 2, 3, 5, 8]]
# q5.check()
sample_reviews

# Solution:
indices = [1, 2, 3, 5, 8]
sample_reviews = reviews.loc[indices]
sample_reviews
"""
Explanation: 6.
Create a variable df containing the country, province, region_1, and region_2 columns of the records with the index labels 0, 1, 10, and 100.
In other words, generate the following DataFrame:
End of explanation
"""
df = reviews.loc[[0, 1, 10, 100], ['country', 'province', 'region_1', 'region_2']]
# q6.check()
df

# Solution:
cols = ['country', 'province', 'region_1', 'region_2']
indices = [0, 1, 10, 100]
df = reviews.loc[indices, cols]
df
"""
Explanation: 7.
Create a variable df containing the country and variety columns of the first 100 records.
Hint: you may use loc or iloc. When working on the answer to this question and several of the ones that follow, keep in mind the following "gotcha" described in the tutorial:
iloc uses the Python stdlib indexing scheme, where the first element of the range is included and the last one excluded. loc, meanwhile, indexes inclusively.
This is particularly confusing when the DataFrame index is a simple numerical list, e.g. 0,...,1000. In this case df.iloc[0:1000] will return 1000 entries, while df.loc[0:1000] will return 1001 of them! To get 1000 elements using loc, you will need to go one lower and ask for df.loc[0:999].
End of explanation
"""
df = reviews.loc[0:99, ['country', 'variety']]
# q7.check()
df

# Correct:
cols = ['country', 'variety']
df = reviews.loc[:99, cols]
# or
# cols_idx = [0, 11]
# df = reviews.iloc[:100, cols_idx]
df
"""
Explanation: 8.
Create a DataFrame italian_wines containing reviews of wines made in Italy.
Hint: reviews.country equals what?
End of explanation
"""
italian_wines = reviews.loc[reviews.country == 'Italy']
# q8.check()
italian_wines

# Solution:
italian_wines = reviews[reviews.country == 'Italy']
italian_wines
"""
Explanation: 9.
Create a DataFrame top_oceania_wines containing all reviews with at least 95 points (out of 100) for wines from Australia or New Zealand.
End of explanation
"""
top_oceania_wines = reviews.loc[reviews.country.isin(['Australia', 'New Zealand']) & (reviews.points >= 95)]
# q9.check()
top_oceania_wines

# Solution:
top_oceania_wines = reviews.loc[
    (reviews.country.isin(['Australia', 'New Zealand']))
    & (reviews.points >= 95)
]
"""
Explanation: https://www.kaggle.com/residentmario/summary-functions-and-maps
Functions and Maps
End of explanation
"""
reviews
"""
Explanation: Summary functions
Pandas provides many simple "summary functions" (not an official name) which restructure the data in some useful way. For example, consider the describe() method:
End of explanation
"""
reviews.points.describe()
"""
Explanation: This method generates a high-level summary of the attributes of the given column. It is type-aware, meaning that its output changes based on the data type of the input. The output above only makes sense for numerical data; for string data here's what we get:
End of explanation
"""
reviews.taster_name.describe()
"""
Explanation: If you want to get some particular simple summary statistic about a column in a DataFrame or a Series, there is usually a helpful pandas function that makes it happen.
For example, to see the mean of the points allotted (e.g. how well an averagely rated wine does), we can use the mean() function:
End of explanation
"""
reviews.points.mean()
"""
Explanation: To see a list of unique values we can use the unique() function:
End of explanation
"""
# reviews.taster_name.unique()  # takes a long time??
"""
Explanation: To see a list of unique values and how often they occur in the dataset, we can use the value_counts() method:
End of explanation
"""
reviews.taster_name.value_counts()
"""
Explanation: Maps
A map is a term, borrowed from mathematics, for a function that takes one set of values and "maps" them to another set of values. In data science we often have a need for creating new representations from existing data, or for transforming data from the format it is in now to the format that we want it to be in later. Maps are what handle this work, making them extremely important for getting your work done!
There are two mapping methods that you will use often.
map() is the first, and slightly simpler one. For example, suppose that we wanted to remean the scores the wines received to 0. We can do this as follows:
End of explanation
"""
review_points_mean = reviews.points.mean()
reviews.points.map(lambda p: p - review_points_mean)
"""
Explanation: The function you pass to map() should expect a single value from the Series (a point value, in the above example), and return a transformed version of that value. map() returns a new Series where all the values have been transformed by your function.
apply() is the equivalent method if we want to transform a whole DataFrame by calling a custom method on each row.
La función que pase a map() debería esperar un único valor de la Serie (un valor de punto, en el ejemplo anterior) y devolver una versión transformada de ese valor. map() devuelve una nueva Serie donde todos los valores han sido transformados por su función.
apply() es el método equivalente si queremos transformar un DataFrame completo llamando a un método personalizado en cada fila.
End of explanation
"""
def remean_points(row):
    row.points = row.points - review_points_mean
    return row

reviews.apply(remean_points, axis='columns')
"""
Explanation: If we had called reviews.apply() with axis='index', then instead of passing a function to transform each row, we would need to give a function to transform each column.
Note that map() and apply() return new, transformed Series and DataFrames, respectively. They don't modify the original data they're called on. If we look at the first row of reviews, we can see that it still has its original points value.
Si hubiéramos llamado reviews.apply() con axis='index', entonces, en lugar de pasar una función para transformar cada fila, tendríamos que dar una función para transformar cada columna.
Tenga en cuenta que map() y apply() devuelven Series y DataFrames nuevos y transformados, respectivamente. No modifican los datos originales a los que se les solicita.
Si miramos la primera fila de revisiones, podemos ver que todavía tiene su valor de puntos original.
End of explanation
"""
reviews.head(1)
"""
Explanation: Pandas provides many common mapping operations as built-ins. For example, here's a faster way of remeaning our points column:
End of explanation
"""
review_points_mean = reviews.points.mean()
reviews.points - review_points_mean
"""
Explanation: In this code we are performing an operation between a lot of values on the left-hand side (everything in the Series) and a single value on the right-hand side (the mean value). Pandas looks at this expression and figures out that we must mean to subtract that mean value from every value in the dataset.
Pandas will also understand what to do if we perform these operations between Series of equal length. For example, an easy way of combining country and region information in the dataset would be to do the following:
End of explanation
"""
reviews.country + " - " + reviews.region_1
"""
Explanation: These operators are faster than map() or apply() because they use speed-ups built into pandas. All of the standard Python operators (>, <, ==, and so on) work in this manner. However, they are not as flexible as map() or apply(), which can do more advanced things, like applying conditional logic, which cannot be done with addition and subtraction alone.
Estos operadores son más rápidos que map() o apply() porque usan aceleraciones integradas en pandas. Todos los operadores estándar de Python (>, <, ==, etc.) funcionan de esta manera. Sin embargo, no son tan flexibles como map() o apply(), que pueden hacer cosas más avanzadas, como aplicar lógica condicional, que no se puede hacer solo con la suma y la resta.
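As a small sketch of that equivalence (a toy Series, not the full reviews data), the map() route and the operator route produce the same remeaned values:

```python
import pandas as pd

points = pd.Series([87, 92, 90])
mean = points.mean()

remeaned_map = points.map(lambda p: p - mean)   # element by element
remeaned_op = points - mean                     # vectorized, built into pandas

print(remeaned_map.equals(remeaned_op))  # → True
```

The vectorized form is what you'd reach for by default; map() earns its keep when the per-value logic is more involved than arithmetic.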
https://www.kaggle.com/residentmario/grouping-and-sorting
Groupwise analysis
One function we've been using heavily thus far is the value_counts() function. We can replicate what value_counts() does by doing the following:
End of explanation
"""
reviews.groupby('points').points.count()

reviews.groupby('points').price.min()

reviews.groupby('winery').apply(lambda df: df.title.iloc[0])

reviews.groupby(['country', 'province']).apply(lambda df: df.loc[df.points.idxmax()])

reviews.groupby(['country']).price.agg([len, min, max])
"""
Explanation: Multi-indexes
In all of the examples we've seen thus far we've been working with DataFrame or Series objects with a single-label index. groupby() is slightly different in the fact that, depending on the operation we run, it will sometimes result in what is called a multi-index.
A multi-index differs from a regular index in that it has multiple levels. For example:
End of explanation
"""
countries_reviewed = reviews.groupby(['country', 'province']).description.agg([len])
countries_reviewed

mi = countries_reviewed.index
type(mi)

countries_reviewed.reset_index()
"""
Explanation: Sorting
Looking again at countries_reviewed we can see that grouping returns data in index order, not in value order. That is to say, when outputting the result of a groupby, the order of the rows is dependent on the values in the index, not in the data.
To get data in the order we want it in we can sort it ourselves. The sort_values() method is handy for this.
End of explanation """ import pandas as pd # reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) #pd.set_option("display.max_rows", 5) reviews = pd.read_csv("./winemag-data-130k-v2.csv", index_col=0) # from learntools.core import binder; binder.bind(globals()) # from learntools.pandas.grouping_and_sorting import * # print("Setup complete.") """ Explanation: https://www.kaggle.com/luisrivera/exercise-grouping-and-sorting/edit Exercise: Grouping and Sorting End of explanation """ reviews_written = reviews.groupby('taster_twitter_handle').size() reviews_written # or, equivalently: reviews_written = reviews.groupby('taster_twitter_handle').taster_twitter_handle.count() reviews_written """ Explanation: 1. Who are the most common wine reviewers in the dataset? Create a Series whose index is the taster_twitter_handle category from the dataset, and whose values count how many reviews each person wrote. End of explanation """ best_rating_per_price = reviews.groupby('price')['points'].max().sort_index() best_rating_per_price """ Explanation: 2. What is the best wine I can buy for a given amount of money? Create a Series whose index is wine prices and whose values are the maximum number of points a wine costing that much was given in a review. Sort the values by price, ascending (so that 4.0 dollars is at the top and 3300.0 dollars is at the bottom). End of explanation """ price_extremes = reviews.groupby('variety').price.agg([min, max]) price_extremes """ Explanation: 3. What are the minimum and maximum prices for each variety of wine? Create a DataFrame whose index is the variety category from the dataset and whose values are the min and max values thereof. End of explanation """ sorted_varieties = price_extremes.sort_values(by=['min', 'max'], ascending=False) sorted_varieties """ Explanation: 4. What are the most expensive wine varieties?
Create a variable sorted_varieties containing a copy of the dataframe from the previous question where varieties are sorted in descending order based on minimum price, then on maximum price (to break ties). End of explanation """ reviewer_mean_ratings = reviews.groupby('taster_name').points.mean() reviewer_mean_ratings reviewer_mean_ratings.describe() """ Explanation: 5. Create a Series whose index is reviewers and whose values are the average review score given out by that reviewer. Hint: you will need the taster_name and points columns. End of explanation """ country_variety_counts = reviews.groupby(['country', 'variety']).size().sort_values(ascending=False) country_variety_counts """ Explanation: 6. What combination of countries and varieties is most common? Create a Series whose index is a MultiIndex of {country, variety} pairs. For example, a pinot noir produced in the US should map to {"US", "Pinot Noir"}. Sort the values in the Series in descending order based on wine count. End of explanation """ reviews.price.dtype """ Explanation: https://www.kaggle.com/residentmario/data-types-and-missing-values data-types-and-missing-values Dtypes The data type for a column in a DataFrame or a Series is known as the dtype. You can use the dtype property to grab the type of a specific column. For instance, we can get the dtype of the price column in the reviews DataFrame: End of explanation """ reviews.dtypes """ Explanation: Alternatively, the dtypes property returns the dtype of every column in the DataFrame: End of explanation """ reviews.points.astype('float64') """ Explanation: Data types tell us something about how pandas is storing the data internally. float64 means that it's using a 64-bit floating point number; int64 means a similarly sized integer instead, and so on. One peculiarity to keep in mind (and on display very clearly here) is that columns consisting entirely of strings do not get their own type; they are instead given the object type.
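A tiny self-contained sketch of the same ideas (the toy columns only mimic the reviews data):

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 25.5],
                   "points": [87, 91],
                   "name": ["a", "b"]})   # strings typically show up as object dtype

# dtype of a single column, and a conversion that widens int64 to float64.
price_dtype = df.price.dtype
points_as_float = df.points.astype("float64")
```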
It's possible to convert a column of one type into another wherever such a conversion makes sense by using the astype() function. For example, we may transform the points column from its existing int64 data type into a float64 data type: End of explanation """ reviews.index.dtype """ Explanation: A DataFrame or Series index has its own dtype, too: End of explanation """ reviews[pd.isnull(reviews.country)] """ Explanation: Pandas also supports more exotic data types, such as categorical data and timeseries data. Because these data types are more rarely used, we will omit them until a much later section of this tutorial. Missing data Entries missing values are given the value NaN, short for "Not a Number". For technical reasons these NaN values are always of the float64 dtype. Pandas provides some methods specific to missing data. To select NaN entries you can use pd.isnull() (or its companion pd.notnull()). This is meant to be used thusly: End of explanation """ reviews.region_2.fillna("Unknown") """ Explanation: Replacing missing values is a common operation. Pandas provides a really handy method for this problem: fillna(). fillna() provides a few different strategies for mitigating such data. For example, we can simply replace each NaN with an "Unknown": End of explanation """ reviews.taster_twitter_handle.replace("@kerinokeefe", "@kerino") """ Explanation: Or we could fill each missing value with the first non-null value that appears sometime after the given record in the database. This is known as the backfill strategy. Alternatively, we may have a non-null value that we would like to replace. For example, suppose that since this dataset was published, reviewer Kerin O'Keefe has changed her Twitter handle from @kerinokeefe to @kerino. 
One way to reflect this in the dataset is using the replace() method: End of explanation """ # import pandas as pd # reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) # from learntools.core import binder; binder.bind(globals()) # from learntools.pandas.data_types_and_missing_data import * # print("Setup complete.") """ Explanation: The replace() method is worth mentioning here because it's handy for replacing missing data which is given some kind of sentinel value in the dataset: things like "Unknown", "Undisclosed", "Invalid", and so on. Exercise: Data Types and Missing Values https://www.kaggle.com/luisrivera/exercise-data-types-and-missing-values/edit End of explanation """ # Your code here dtype = reviews.points.dtype """ Explanation: Exercises 1. What is the data type of the points column in the dataset? End of explanation """ point_strings = reviews.points.astype(str) point_strings """ Explanation: 2. Create a Series from entries in the points column, but convert the entries to strings. Hint: strings are str in native Python. End of explanation """ missing_price_reviews = reviews[reviews.price.isnull()] n_missing_prices = len(missing_price_reviews) n_missing_prices # Cute alternative solution: if we sum a boolean series, True is treated as 1 and False as 0 n_missing_prices = reviews.price.isnull().sum() n_missing_prices # or equivalently: n_missing_prices = pd.isnull(reviews.price).sum() n_missing_prices """ Explanation: 3. Sometimes the price column is null. How many reviews in the dataset are missing a price? End of explanation """ reviews_per_region = reviews.region_1.fillna('Unknown').value_counts().sort_values(ascending=False) reviews_per_region """ Explanation: 4. What are the most common wine-producing regions? Create a Series counting the number of times each value occurs in the `region_1` field. This field is often missing data, so replace missing values with `Unknown`. Sort in descending order. Your output should look something like this: ``` Unknown 21247 Napa Valley 4480 ... Bardolino Superiore 1 Primitivo del Tarantino 1 Name: region_1, Length: 1230, dtype: int64 ``` End of explanation """ import pandas as pd # reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) reviews = pd.read_csv("winemag-data-130k-v2.csv", index_col=0) # from learntools.core import binder; binder.bind(globals()) # from learntools.pandas.renaming_and_combining import * # print("Setup complete.") reviews.head() reviews.rename(columns={'points': 'score'}) """ Explanation: Renaming-and-combining columns https://www.kaggle.com/residentmario/renaming-and-combining Introduction Oftentimes data will come to us with column names, index names, or other naming conventions that we are not satisfied with. In that case, you'll learn how to use pandas functions to change the names of the offending entries to something better. You'll also explore how to combine data from multiple DataFrames and/or Series. Renaming The first function we'll introduce here is rename(), which lets you change index names and/or column names. For example, to change the points column in our dataset to score, we would do: End of explanation """ reviews.rename(index={0: 'firstEntry', 1: 'secondEntry'}) """ Explanation: rename() lets you rename index or column values by specifying an index or column keyword parameter, respectively. It supports a variety of input formats, but usually a Python dictionary is the most convenient. Here is an example using it to rename some elements of the index. End of explanation """ reviews.rename_axis("wines", axis='rows').rename_axis("fields", axis='columns') """ Explanation: You'll probably rename columns very often, but rename index values very rarely. For that, set_index() is usually more convenient.
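A minimal sketch of that distinction (made-up titles, not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({"title": ["wine A", "wine B"], "score": [90, 85]})

# rename() can relabel individual index entries...
relabeled = df.rename(index={0: "first", 1: "second"})

# ...but promoting an existing column to be the index is set_index()'s job.
indexed = df.set_index("title")
```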
Both the row index and the column index can have their own name attribute. The complementary rename_axis() method may be used to change these names. For example: End of explanation """ # canadian_youtube = pd.read_csv("../input/youtube-new/CAvideos.csv") # british_youtube = pd.read_csv("../input/youtube-new/GBvideos.csv") canadian_youtube = pd.read_csv("CAvideos.csv") british_youtube = pd.read_csv("GBvideos.csv") pd.concat([canadian_youtube, british_youtube]) """ Explanation: Combining When performing operations on a dataset, we will sometimes need to combine different DataFrames and/or Series in non-trivial ways. Pandas has three core methods for doing this. In order of increasing complexity, these are concat(), join(), and merge(). Most of what merge() can do can also be done more simply with join(), so we will omit it and focus on the first two functions here. The simplest combining method is concat(). Given a list of elements, this function will smush those elements together along an axis. This is useful when we have data in different DataFrame or Series objects but with the same fields (columns). One example: the YouTube Videos dataset, which splits the data up based on country of origin (e.g. Canada and the UK, in this example). If we want to study multiple countries simultaneously, we can use concat() to smush them together (data: https://www.kaggle.com/datasnaek/youtube-new): End of explanation """ left = canadian_youtube.set_index(['title', 'trending_date']) right = british_youtube.set_index(['title', 'trending_date']) left.join(right, lsuffix='_CAN', rsuffix='_UK') """ Explanation: The middlemost combiner in terms of complexity is join(). join() lets you combine different DataFrame objects which have an index in common.
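In the abstract, join() aligns rows by index label; a tiny self-contained sketch (invented video IDs, with suffixes mirroring the CAN/UK usage):

```python
import pandas as pd

left = pd.DataFrame({"views": [100, 200]}, index=["vid1", "vid2"])
right = pd.DataFrame({"views": [150, 50]}, index=["vid2", "vid3"])

# join() keeps left's index; clashing column names need suffixes.
joined = left.join(right, lsuffix="_CAN", rsuffix="_UK")
```

Rows present only on one side are kept (left join by default) with NaN filling the gaps.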
For example, to pull down videos that happened to be trending on the same day in both Canada and the UK, we could do the following: End of explanation """ # import pandas as pd # reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0) # from learntools.core import binder; binder.bind(globals()) # from learntools.pandas.renaming_and_combining import * # print("Setup complete.") """ Explanation: The lsuffix and rsuffix parameters are necessary here because the data has the same column names in both British and Canadian datasets. If this wasn't true (because, say, we'd renamed them beforehand) we wouldn't need them. Exercise: Renaming and Combining https://www.kaggle.com/luisrivera/exercise-renaming-and-combining/edit End of explanation """ reviews.head() """ Explanation: Exercises View the first several lines of your data by running the cell below: End of explanation """ renamed = reviews.rename(columns=dict(region_1='region', region_2='locale')) # q1.check() renamed """ Explanation: 1. region_1 and region_2 are pretty uninformative names for locale columns in the dataset. Create a copy of reviews with these columns renamed to region and locale, respectively. End of explanation """ reindexed = reviews.rename_axis('wines', axis='rows') reindexed """ Explanation: 2. Set the index name in the dataset to wines. End of explanation """ # gaming_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/g/gaming.csv") gaming_products = pd.read_csv("gaming.csv") gaming_products['subreddit'] = "r/gaming" # movie_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/m/movies.csv") movie_products = pd.read_csv("movies.csv") movie_products['subreddit'] = "r/movies" """ Explanation: 3. The Things on Reddit dataset includes product links from a selection of top-ranked forums ("subreddits") on reddit.com. 
Run the cell below to load a dataframe of products mentioned on the /r/gaming subreddit and another dataframe for products mentioned on the /r/movies subreddit. End of explanation """ combined_products = pd.concat([gaming_products, movie_products]) # q3.check() combined_products.head() """ Explanation: Create a DataFrame of products mentioned on either subreddit. End of explanation """ # powerlifting_meets = pd.read_csv("../input/powerlifting-database/meets.csv") # powerlifting_competitors = pd.read_csv("../input/powerlifting-database/openpowerlifting.csv") powerlifting_meets = pd.read_csv("meets.csv") powerlifting_meets.head() powerlifting_competitors = pd.read_csv("openpowerlifting.csv") powerlifting_competitors.head() """ Explanation: 4. The Powerlifting Database dataset on Kaggle includes one CSV table for powerlifting meets and a separate one for powerlifting competitors. Run the cell below to load these datasets into dataframes: End of explanation """ powerlifting_combined = powerlifting_meets.set_index("MeetID").join(powerlifting_competitors.set_index("MeetID")) powerlifting_combined.head() """ Explanation: Both tables include references to a MeetID, a unique key for each meet (competition) included in the database. Using this, generate a dataset combining the two tables into one. End of explanation """
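The MeetID pattern generalizes to any shared key: set it as the index on both sides, then join. A sketch with made-up meet data (not the real Kaggle tables):

```python
import pandas as pd

meets = pd.DataFrame({"MeetID": [1, 2], "MeetName": ["Open A", "Open B"]})
lifters = pd.DataFrame({"MeetID": [1, 1, 2], "Name": ["Ann", "Bo", "Cy"]})

# Rows of `meets` are repeated for every matching competitor.
combined = meets.set_index("MeetID").join(lifters.set_index("MeetID"))
```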
vipmunot/Data-Science-Course
Data Visualization/Lab 8/w08_lab_Vipul_Munot.ipynb
mit
import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np import scipy.stats as ss import warnings warnings.filterwarnings("ignore") sns.set_style('white') %matplotlib inline """ Explanation: W8 Lab Assignment End of explanation """ x = np.array([1, 1, 1,1, 10, 100, 1000]) y = np.array([1000, 100, 10, 1, 1, 1, 1]) ratio = x/y print(ratio) """ Explanation: Ratio and logarithm If you use linear scale to visualize ratios, it can be very misleading. Let's first create some ratios. End of explanation """ plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) # plot the line ratio = 1 """ Explanation: Plot on the linear scale using the scatter() function. End of explanation """ plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.yscale('log') plt.ylim( (0.0001,10000) ) # set the scope the y axis plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) """ Explanation: Plot on the log scale. End of explanation """ # TODO: generate random numbers and calculate ratios between two consecutive numbers x = np.random.rand(10) print(x) ratio = [ i/j for i,j in zip(x[1:],x[:-1]) ] print(ratio) # TODO: plot the ratios on the linear scale plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) # TODO: plot the ratios on the log scale plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.yscale('log') plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) """ Explanation: What do you see from the two plots? Why do we need to use log scale to visualize ratios? Let's practice this using random numbers. Generate 10 random numbers between [0,1], calculate the ratios between two consecutive numbers (the second number divides by the first, and so on), and plot the ratios on the linear and log scale. 
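Numerically, the log scale works because a ratio r and its reciprocal 1/r sit at equal distances from 1 once logged — something a linear axis hides. A quick check with plain numpy (no plotting needed):

```python
import numpy as np

ratios = np.array([1 / 1000, 1 / 10, 1.0, 10.0, 1000.0])

# Linear distance from 1: reciprocals hug zero while large ratios explode.
linear_dist = np.abs(ratios - 1)

# Log distance from 1 (= 0 in log space): r and 1/r become symmetric.
log_dist = np.abs(np.log10(ratios))
```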
End of explanation """ # TODO: generate random numbers and calculate ratios between two consecutive numbers x = np.random.rand(10) print(x) ratio = [ i/j for i,j in zip(x[1:],x[:-1]) ] print(ratio) # TODO: plot the ratios on the linear scale plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) # TODO: plot the ratios on the log scale plt.scatter( np.arange(len(ratio)), ratio, s=100 ) plt.yscale('log') plt.plot( [0,len(ratio)], [1,1], color='k', linestyle='--', linewidth=.5 ) """ Explanation: Log-bin Let's first see what the histogram looks like if we do not use the log scale. End of explanation """ # TODO: plot the histogram of movie votes movie_df = pd.read_csv('imdb.csv', delimiter='\t') plt.hist(movie_df['Votes']) """ Explanation: As we can see, most votes fall in the first bin, and we cannot see the values from the second bin. How about plotting on the log scale? End of explanation """ # TODO: change the y scale to log plt.hist(movie_df['Votes']) plt.yscale('log') """ Explanation: Change the number of bins to 1000. End of explanation """ # TODO: set the bin number to 1000 plt.hist(movie_df['Votes'], bins=1000) plt.yscale('log') """ Explanation: Now, let's try log-bin. Recall that when plotting histograms we can specify the edges of bins through the bins parameter. For example, we can specify the edges of bins to [0, 1, 2, ..., 10] as follows. End of explanation """ plt.hist( movie_df['Rating'], bins=range(0,11) ) """ Explanation: Here, we can specify the edges of bins in a similar way. Instead of specifying them on the linear scale, we do it in log space. Some useful resources: Google query: python log-bin numpy.logspace numpy.linspace vs numpy.logspace Hint: since $10^{\text{start}} = \text{min_votes}$, $\text{start} = \log_{10}(\text{min_votes})$ End of explanation """ # TODO: specify the edges of bins using np.logspace bins = np.logspace( np.log10(min(movie_df['Votes'])), np.log10(max(movie_df['Votes'])), 20) """ Explanation: Now we can plot the histogram with log-bins. End of explanation """ plt.hist(movie_df['Votes'], bins=bins) plt.xscale('log') # TODO: correct the plot plt.hist(movie_df['Votes'], bins=bins, normed=True) plt.xscale('log') plt.yscale('log') """ Explanation: KDE Import the IMDb data.
End of explanation """ movie_df['Rating'].hist(bins=10, normed=True) movie_df['Rating'].plot(kind='kde') """ Explanation: We can plot histogram and KDE using pandas: End of explanation """ sns.distplot(movie_df['Rating'], bins=10) """ Explanation: Or using seaborn: End of explanation """ # TODO: implement this using pandas logs = np.log(movie_df['Votes']) logs.hist(bins=10, normed=True) logs.plot(kind='kde') plt.xlim(0, 25) # TODO: implement this using seaborn sns.distplot(logs, bins=10) """ Explanation: Can you plot the histogram and KDE of the log of movie votes? End of explanation """ f = plt.figure(figsize=(15,8)) plt.xlim(0, 10) sample_sizes = [10, 50, 100, 500, 1000, 10000] for i, N in enumerate(sample_sizes, 1): plt.subplot(2,3,i) plt.title("Sample size: {}".format(N)) for j in range(5): s = movie_df['Rating'].sample(N) sns.kdeplot(s, kernel='gau', legend=False) """ Explanation: We can get a random sample using pandas' sample() function. The kdeplot() function in seaborn provides many options (like kernel types) to do KDE. 
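Under the hood, a KDE is just a sum of kernel bumps centred on the data points. A hedged sketch of the same computation with scipy's gaussian_kde on synthetic ratings-like numbers (not the IMDb data):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
data = rng.normal(loc=6.5, scale=1.0, size=500)  # synthetic "ratings"

kde = gaussian_kde(data)             # bandwidth chosen by Scott's rule
grid = np.linspace(2, 11, 200)
density = kde(grid)

# A valid density: non-negative, and it integrates to roughly 1.
area = density.sum() * (grid[1] - grid[0])
```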
End of explanation """ X1 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0] Y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68] X2 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0] Y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74] X3 = [10.0, 8.0, 13.0, 9.0, 11.0, 14.0, 6.0, 4.0, 12.0, 7.0, 5.0] Y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73] X4 = [8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 19.0, 8.0, 8.0, 8.0] Y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89] data = [ (X1,Y1),(X2,Y2),(X3,Y3),(X4,Y4) ] plt.figure(figsize=(10,8)) for i,p in enumerate(data, 1): X, Y = p[0], p[1] plt.subplot(2, 2, i) plt.scatter(X, Y, s=30, facecolor='#FF4500', edgecolor='#FF4500') slope, intercept, r_value, p_value, std_err = ss.linregress(X, Y) plt.plot([0, 20], [intercept, slope*20+intercept], color='#1E90FF') #plot the fitted line Y = slope * X + intercept # TODO: display the fitted equations using the text() function. plt.text(2, 11, r'$Y = {:1.2f} \cdot X + {:1.2f}$'.format(slope,intercept)) plt.xlim(0,20) plt.xlabel('X'+str(i)) plt.ylabel('Y'+str(i)) """ Explanation: Regression Remember Anscombe's quartet? Let's plot the four datasets and do linear regression, which can be done with scipy's linregress() function. TODO: display the fitted equations using the text() function. End of explanation """ df = sns.load_dataset("anscombe") df.head() """ Explanation: Actually, the dataset is included in seaborn and we can load it. End of explanation """ sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", size=4, scatter_kws={"s": 50, "alpha": 1}) """ Explanation: All four datasets are in this single data frame and the 'dataset' indicator is one of the columns. This is a form often called tidy data, which is easy to manipulate and plot. In tidy data, each row is an observation and columns are the properties of the observation. 
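The two pieces the TODO needs are linregress and str.format; a sketch on toy data where the answer is known exactly:

```python
import numpy as np
from scipy.stats import linregress

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1                      # an exact line, so the fit is exact

fit = linregress(x, y)
# The same template the plotting loop can hand to plt.text().
label = "Y = {:1.2f} * X + {:1.2f}".format(fit.slope, fit.intercept)
```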
Seaborn makes use of the tidy form. We can show the linear regression results for each eadataset. Here is the example: End of explanation """ sns.lmplot(x="y", y="x", col="dataset", hue="dataset", data=df, col_wrap=2, ci=None, palette="muted", size=4, scatter_kws={"s": 25, "alpha": 0.8}) """ Explanation: What do these parameters mean? The documentation for the lmplot() is here. End of explanation """ geq = movie_df['Year'] >= 1990 leq = movie_df['Year'] <= 1999 subset = movie_df[ geq & leq ] subset.head() """ Explanation: 2-D scatter plot and KDE Select movies released in the 1990s: End of explanation """ plt.scatter(subset['Votes'], subset['Rating']) plt.xlabel('Votes') plt.ylabel('Rating') """ Explanation: We can draw a scatter plot of movie votes and ratings using the scatter() function. End of explanation """ plt.scatter(subset['Votes'], subset['Rating'], s=20, alpha=0.6, facecolors='none', edgecolors='b') plt.xlabel('Votes') plt.ylabel('Rating') """ Explanation: Too many data points. We can decrease symbol size, set symbols empty, and make them transparent. End of explanation """ plt.scatter(subset['Votes'], subset['Rating'], s=10, alpha=0.6, facecolors='none', edgecolors='b') plt.xscale('log') plt.xlabel('Votes') plt.ylabel('Rating') """ Explanation: Number of votes is broadly distributed. So set the x axis to log scale. End of explanation """ sns.jointplot(np.log(subset['Votes']), subset['Rating']) """ Explanation: We can combine scatter plot with 1D histogram using seaborn's jointplot() function. End of explanation """ # TODO: draw a joint plot with hexbins and two histograms for each marginal distribution sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='hexbin') """ Explanation: Hexbin There are too many data points. We need to bin them, which can be done by using the jointplot() and setting the kind parameter. 
End of explanation """ sns.kdeplot(np.log(subset['Votes']), subset['Rating'], cmap="Reds", shade=True, shade_lowest=False) """ Explanation: KDE We can also do 2D KDE using seaborn's kdeplot() function. End of explanation """ # TODO: draw a joint plot with bivariate KDE as well as marginal distributions with KDE sns.jointplot(np.log(subset['Votes']), subset['Rating'], kind='kde', shade_lowest=False) """ Explanation: Or using jointplot() by setting the kind parameter. End of explanation """
jamesmarva/maths-with-python
10-generators.ipynb
mit
def naivesum_list(N): """ Naively sum the first N integers """ A = 0 for i in list(range(N + 1)): A += i return A """ Explanation: Iterators and Generators In the section on loops we introduced the range function, and said that you should think about it as creating a list of numbers. In Python 2.X this is exactly what it does. In Python 3.X this is not what it does. Instead it creates the numbers one at a time. The difference in speed and memory usage is enormous for very large lists - examples are given here and here. We can recreate one of the examples from Meuer's slides in detail: End of explanation """ %load_ext memory_profiler %memit naivesum_list(10**4) %memit naivesum_list(10**5) %memit naivesum_list(10**6) %memit naivesum_list(10**7) %memit naivesum_list(10**8) """ Explanation: We will now see how much memory this uses: End of explanation """ def naivesum(N): """ Naively sum the first N integers """ A = 0 for i in range(N + 1): A += i return A %memit naivesum(10**4) %memit naivesum(10**5) %memit naivesum(10**6) %memit naivesum(10**7) %memit naivesum(10**8) """ Explanation: We see that the memory usage is growing very rapidly - as the list gets large it's growing as $N$. Instead we can use the range function that yields one integer at a time: End of explanation """ def all_primes(N): """ Return all primes less than or equal to N. Parameters ---------- N : int Maximum number Returns ------- prime : generator Prime numbers """ primes = [] for n in range(2, N+1): is_n_prime = True for p in primes: if n%p == 0: is_n_prime = False break if is_n_prime: primes.append(n) yield n """ Explanation: We see that the memory usage is unchanged with $N$, making it practical to run much larger calculations. Iterators The range function is returning an iterator here. This is an object - a general thing - that represents a stream, or a sequence, of data. The iterator knows how to create the first element of the stream, and it knows how to get the next element. 
It does not, in general, need to know all of the elements at once. As we've seen above this can save a lot of memory. It can also save time: the code does not need to construct all of the members of the sequence before starting, and it's quite possible you don't need all of them (think about the "Shortest published mathematical paper" exercise). An iterator such as range is very useful, and there are many more useful ways to work with iterators in the itertools module. These functions that return iterators, such as range, are called generators, and it's useful to be able to make your own. Making your own generators Let's look at an example: finding all primes less than $N$ that can be written in the form $4 k - 1$, where $k$ is an integer. We're going to need to calculate all prime numbers less than or equal to $N$. We could write a function that returns all these numbers as a list. However, if $N$ gets large then this will be expensive, both in time and memory. As we only need one number at a time, we can use a generator. End of explanation """ def all_primes(N): """ Return all primes less than or equal to N. Parameters ---------- N : int Maximum number Returns ------- prime : generator Prime numbers """ primes = [] for n in range(2, N+1): is_n_prime = True for p in primes: if n%p == 0: is_n_prime = False break if is_n_prime: primes.append(n) yield n """ Explanation: This code needs careful examination. First it defines the list of all prime numbers that it currently knows, primes (which is initially empty). Then it loops through all integers $n$ from $2$ to $N$ (ignoring $1$ as we know it's not prime). Inside this loop it initially assumes that $n$ is prime. It then checks if any of the known primes exactly divides $n$ (n%p == 0 checks if $n \bmod p = 0$). As soon as it finds such a prime divisor it knows that $n$ is not prime, so it resets the assumption with this new knowledge, then breaks out of the loop. This statement stops the for p in primes loop early, as we don't need to look at later primes. If no known prime ever divides $n$ then at the end of the for p in primes loop we will still have is_n_prime being True.
In this case we must have $n$ being prime, so we add it to the list of known primes and return it. It is precisely this point which makes the code above define a generator. We return the value of the prime number found using the yield keyword, not the return keyword, and we return the value as soon as it is known. It is the use of the yield keyword that makes this function a generator. This means that only the latest prime number is stored for return. To use the iterator within a loop, we code it in the same way as with the range function: End of explanation """ a = all_primes(10) next(a) next(a) next(a) next(a) next(a) """ Explanation: To see what the generator is actually doing, we can step through it one call at a time using the built in next function: End of explanation """ for p in all_primes(100): if (1+p)%4 == 0: print("The prime {} is 4 * {} - 1.".format(p, int((1+p)/4))) """ Explanation: So, when the generator gets to the end of its iteration it raises an exception. As seen in previous sections, we could surround the next call with a try block to capture the StopIteration so that we can continue after it finishes. This is effectively what the for loop is doing. We can now find all primes (less than or equal to 100, for example) that have the form $4 k - 1$ using End of explanation """
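The same lazy style composes nicely: the 4k−1 filter can itself be a generator expression wrapped around the prime generator. A self-contained sketch (all_primes is re-declared, in compressed form, so the snippet runs on its own):

```python
def all_primes(N):
    """Yield the primes up to N, one at a time."""
    primes = []
    for n in range(2, N + 1):
        if all(n % p for p in primes):   # no known prime divides n
            primes.append(n)
            yield n

# Nothing is computed until the expression is iterated over.
special = (p for p in all_primes(100) if (1 + p) % 4 == 0)
found = list(special)
```

Like range, the generator expression produces values on demand, so only one prime is ever held beyond the bookkeeping list.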
ellisonbg/talk-2014
Jupyter and IPython.ipynb
mit
from IPython.display import display, Image, HTML from talktools import website, nbviewer """ Explanation: Projects Jupyter and IPython End of explanation """ import ipythonproject ipythonproject.core_devs() """ Explanation: Overview Jupyter and IPython are a pair of open source projects that together offer an open-source (BSD-licensed), interactive computing environment for Python, Julia, R and other languages. Goals: Provide a set of reusable software tools that span the entire lifecycle of scientific computing and data science: Individual exploration, analysis and visualization Debugging, testing Production runs Parallel computing Collaboration Publication Presentation Teaching/Learning Enable the creation and sharing of code/data driven narratives across a wide range of contexts and audiences. Make computational reproducibility possible and enjoyable. Minimize the "distance" between a human user and their data through interactivity. IPython <img src="images/ipython_logo.png" width="400px"> Started in 2001 by Fernando Perez, who continues to lead the project from UC Berkeley. Originally focused on interactive computing in Python only. Starting in 2011, IPython began to include other languages such as Julia, R, Ruby, etc. Moving forward, IPython will contain the Python specific parts of the architecture: Interactive Python shell A kernel for running Python code in the Jupyter architecture Tools for interactive parallel computing in Python Jupyter <img src="images/jupyter_logo.png" width="400px"> Created in Summer 2014 by the IPython development team. To carry forward a vision of reproducible interactive computing for all programming languages: Python Julia R Ruby Haskell Scala Go ... A home for the language independent parts of the architecture: A network protocol for applications to talk to kernels that run code for interactive computations. A set of applications that enable users to write and run code on those kernels. 
Notebook file format and conversion tools (nbconvert). Notebook sharing service (https://nbviewer.jupyter.org/). Funding Over the past 13 years, much of Jupyter/IPython has been "funded" by volunteer developer time. Past funding: NASA, DOD, NIH, Enthought Corporation Current funding: <img src="images/sloan-logo.png" width="300px"> <img src="images/microsoft-logo.png" width="300px"> <img src="images/google-logo.png" width="300px"> <img src="images/simons-logo.png" width="300px"> <img src="images/nsf-logo.png" width="75px"> <img src="images/rackspace-logo.png" width="300px"> Contributors Jupyter/IPython development team A talented team of $\approx9$ core developers and a larger community of $\approx375$ contributors. Through the above funding sources, there are currently 6 full time people working on IPython at UC Berkeley and Cal Poly. End of explanation """
JAmarel/Phys202
Matplotlib/MatplotlibEx03.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np """ Explanation: Matplotlib Exercise 3 Imports End of explanation """ def well2d(x, y, nx, ny, L=1.0): """Compute the 2d quantum well wave function.""" return (2/L)*np.sin(nx * np.pi * x/L)*np.sin(ny * np.pi * y/L) psi = well2d(np.linspace(0,1,10), np.linspace(0,1,10), 1, 1) assert len(psi)==10 assert psi.shape==(10,) """ Explanation: Contour plots of 2d wavefunctions The wavefunction of a 2d quantum well is: $$ \psi_{n_x,n_y}(x,y) = \frac{2}{L} \sin{\left( \frac{n_x \pi x}{L} \right)} \sin{\left( \frac{n_y \pi y}{L} \right)} $$ This is a scalar field and $n_x$ and $n_y$ are quantum numbers that measure the level of excitation in the x and y directions. $L$ is the size of the well. Define a function well2d that computes this wavefunction for values of x and y that are NumPy arrays. End of explanation """ nx = 3 ny = 2 L = 1 x = np.linspace(0,L,1000) y = np.linspace(0,L,1000) XX,YY = np.meshgrid(x,y) plt.figure(figsize=(9,6)) f = plt.contourf(x,y,well2d(XX, YY, nx, ny, L),cmap=('seismic')) plt.xlabel('x') plt.ylabel('y') plt.title('Contour plot of two dimensional wave function in an infinite well') plt.tick_params(axis='x',top='off',direction='out') plt.tick_params(axis='y',right='off',direction='out') plt.colorbar(shrink=.8) XX YY assert True # use this cell for grading the contour plot """ Explanation: The contour, contourf, pcolor and pcolormesh functions of Matplotlib can be used for effective visualizations of 2d scalar fields. Use the Matplotlib documentation to learn how to use these functions along with the numpy.meshgrid function to visualize the above wavefunction: Use $n_x=3$, $n_y=2$ and $L=1$. Use the limits $[0,1]$ for the x and y axis. Customize your plot to make it effective and beautiful. Use a non-default colormap. Add a colorbar to your visualization.
First make a plot using one of the contour functions: End of explanation """ plt.figure(figsize=(9,6)) plt.pcolormesh(XX,YY,well2d(XX, YY, nx, ny, L),cmap=('spectral')) plt.colorbar(shrink=.8) plt.xlabel('x') plt.ylabel('y') plt.title('Contour plot of two dimensional wave function in an infinite well') plt.tick_params(axis='x',top='off',direction='out') plt.tick_params(axis='y',right='off',direction='out') assert True # use this cell for grading the pcolor plot """ Explanation: Next make a visualization using one of the pcolor functions: End of explanation """
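As a quick numerical sanity check of the wavefunction above (a standalone sketch, independent of the graded cells), the well state should be normalized: integrating $|\psi|^2$ over the $L \times L$ well gives 1 for any quantum numbers. A plain-Python midpoint sum is enough:

```python
from math import sin, pi

def well2d(x, y, nx, ny, L=1.0):
    # Same 2d infinite-well wavefunction as above, for scalar x and y.
    return (2.0 / L) * sin(nx * pi * x / L) * sin(ny * pi * y / L)

def norm_integral(nx, ny, L=1.0, n=200):
    # Midpoint Riemann sum of |psi|^2 over the L x L well.
    h = L / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * h
            y = (j + 0.5) * h
            total += well2d(x, y, nx, ny, L) ** 2 * h * h
    return total

print(norm_integral(3, 2))  # close to 1.0
```

The sum factorizes over x and y, so even a modest grid recovers the analytic normalization essentially exactly.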
3DGenomes/tadbit
doc/notebooks/install.ipynb
gpl-3.0
%%bash wget -nv https://repo.continuum.io/miniconda/Miniconda2-latest-Linux-x86_64.sh -O miniconda.sh """ Explanation: Installing TADbit on GNU/Linux TADbit requires python2 >= 2.6 or python3 >= 3.6 as well as several dependencies that are listed below. Dependencies Conda Conda (http://conda.pydata.org/docs/index.html) is a package manager, mainly hosting python programs, that is very useful when no root access is available and the softwares have complicated dependencies. To install it (in case you don't already have it) just download the installer from http://conda.pydata.org/miniconda.html End of explanation """ %%bash bash miniconda.sh -b -p $HOME/miniconda2 """ Explanation: And run it with all the default options. The installer will create a miniconda2 folder in your home directory where all the programs that you need will be stored (including python). Alternatively you can also use this oneliner: End of explanation """ %%bash ## required conda config --add channels bioconda conda config --add channels conda-forge conda install -y -q -c bioconda mcl conda install -y -q future conda install -y -q h5py conda install -y -q samtools conda install -y -q pysam conda install -y -q matplotlib-base conda install -y -q scipy ## optional conda install -y -q jupyter # this notebook :) conda install -y -q -c bioconda sra-tools # to download raw data from released experiment """ Explanation: Python libraries Required: apt-get install python-scipy apt-get install python-numpy Optional packages (but highly recommended): apt-get install python-matplotlib .. note:: Alternative install, you can install python-setuptools and use easy_install to get these packages (e.g. "easy_install scipy"). 
With conda you can install most of the needed dependencies: End of explanation """ %%bash conda install -y -q -c bioconda gem3-mapper conda install -y -q -c bioconda bowtie2 conda install -y -q -c bioconda hisat2 """ Explanation: IMP - 3D modeling From the repositories Since version 2.5 IMP is available in several repositories, like Ubuntu sudo apt-get install imp or in anaconda conda install -c https://conda.anaconda.org/salilab imp These options may be easier than the source compilation. From source Check https://integrativemodeling.org/download-linux.html MCL - clustering MCL is the program used for clustering the 3D models generated by IMP. It can be downloaded from http://micans.org/mcl/; on Debian/Ubuntu machines it can be automatically installed with: sudo apt-get install mcl or in anaconda http://conda.pydata.org/docs/intro.html conda install -y -q -c bioconda mcl Note: if the MCL executable is not found by TADbit, an alternative clustering method will be used. Nevertheless we strongly recommend using MCL. GEM Mapper The default mapper in TADbit is GEM, but bowtie2 and hisat2 are also supported. GEM version 3, bowtie2 and hisat2 are available in bioconda. End of explanation """ %%bash wget -nv -O GEM.tbz2 https://sourceforge.net/projects/gemlibrary/files/gem-library/Binary%20pre-release%203/GEM-binaries-Linux-x86_64-core_i3-20130406-045632.tbz2/download """ Explanation: If you prefer the good old GEM version 2, go to the download page: https://sourceforge.net/projects/gemlibrary/files/gem-library/Binary%20pre-release%202/ and download the i3 version (the other version is for older computers, and you usually won't have to use it).
End of explanation """ %%bash tar -xjvf GEM.tbz2 """ Explanation: Uncompress the archive: End of explanation """ %%bash rm -f GEM-binaries-Linux-x86_64-core_i3-20130406-045632/bin/LICENCE %%bash cp GEM-binaries-Linux-x86_64-core_i3-20130406-045632/bin/* ~/miniconda2/bin/ """ Explanation: And copy the needed binaries to somewhere in your PATH, like: End of explanation """ %%bash rm -rf GEM-binaries-Linux-x86_64-core_i3-20130406-045632 rm -f GEM.tbz2 """ Explanation: Cleanup End of explanation """ %%bash R -e ' install.packages("devtools", repos="http://cran.us.r-project.org"); devtools::install_github("qenvio/dryhic")' """ Explanation: DryHiC for oneD normalization Install dryhic from: https://github.com/qenvio/dryhic From an R console type: ``` install.packages("devtools") devtools::install_github("qenvio/dryhic") ``` Or execute this cell: End of explanation """ %%bash wget -nv http://sun.aei.polsl.pl/dsrc/download/2.0rc/dsrc %%bash chmod +x dsrc """ Explanation: DSRC FASTQ compressor DSRC is a FASTQ compressor; it is not required, but we use it because the resulting files are significantly smaller than with gzip (>30%) and, more importantly, access to them can be parallelized and is much faster than any other alternative. It can be downloaded from https://github.com/lrog/dsrc End of explanation """ %%bash mv dsrc ~/miniconda2/bin/ """ Explanation: And copy to somewhere in your PATH, like: End of explanation """ %%bash conda install -c bioconda tadbit """ Explanation: Chimera - visualization Chimera is a program used for visualization and analysis of molecular structures. It is used in TADbit to visualize the generated 3D models. Chimera is available at: http://www.cgl.ucsf.edu/chimera/ This software is only needed for the visualization of 3D models from inside TADbit. LiftOver TADbit provides a wrapper for the LiftOver tool [Fujita2011]_ (download it from: http://hgdownload.cse.ucsc.edu/admin/exe/ ).
This can be used to ease the conversion of genomic TAD coordinates (e.g. to align human TADs with mouse TADs). TADbit Once all the needed libraries/software have been installed, TADbit can be downloaded, unpacked and installed as: wget https://github.com/3DGenomes/tadbit/archive/master.zip -O tadbit.zip unzip tadbit.zip cd tadbit-master sudo python setup.py install sudo PYTHONPATH=$PYTHONPATH python setup.py install Finally, run the test script to check that the installation completed successfully. To do so, move to the test directory and run: cd test python test_all.py Conda builds Alternatively we regularly build a conda package in bioconda. The packages come without IMP and with gem v3 as the default mapper. End of explanation """ %%bash conda create -n tadbit python=3.7 r-base=3.6.1 r-essentials=3.6 r-devtools imp samtools=1.12 jupyter_client=6.1 tadbit -c conda-forge -c salilab -c bioconda """ Explanation: If you want to install TADbit with IMP in conda, this line will create an environment containing both: End of explanation """ %%bash conda remove tadbit --force """ Explanation: If you then wish to have the latest version of TADbit from GitHub, remove the TADbit conda package, leaving the dependencies in place: End of explanation """
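After installing, a small helper (a hypothetical sketch, not part of TADbit; the tool names are assumptions based on the packages installed above) can report which of the external command-line dependencies are actually visible on your PATH:

```python
import shutil

# Hypothetical helper: report which external tools from the sections above
# are visible on the current PATH.
TOOLS = ["mcl", "gem-mapper", "bowtie2", "hisat2", "dsrc", "samtools"]

def check_tools(tools=TOOLS):
    # shutil.which returns the full path, or None if the tool is absent.
    return {tool: (shutil.which(tool) or "MISSING") for tool in tools}

for tool, status in check_tools().items():
    print(tool, "->", status)
```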
andher/labs
Modeling Oscellation.ipynb
gpl-3.0
%matplotlib inline """ Explanation: Esteban Martinez, Andres Heredia Introduction The purpose of this lab is to find out the effects of mass on the oscillation of a spring scale. We will put several weights on the scale and record the oscillation time five times, after which we will take the average. Procedure $$Oscillation$$ | mass (g) | avg. time (s) | |----- |--- | | 50 | .29 | | 100 | .44 | | 150 | .47 | | 200 | .53 | | 250 | .61 | | 300 | .71 | End of explanation """ import matplotlib.pyplot as plt import numpy as np from scipy.optimize import curve_fit mass = [ 50, 100, 150, 200, 250, 300] time = [ .29, .44, .47, .53, .61, .71] xx = np.linspace(0,400,10) def lin_model( x, a, b): return a*x + b a,b = curve_fit(lin_model, mass, time)[0] print(a,b) plt.title('Oscillation Time on a Spring Scale') plt.ylabel ('Time (s)') plt.xlabel ('Mass (g)') plt.plot(xx, lin_model(xx, a, b)) plt.plot(mass,time,'ro') """ Explanation: Data Analysis End of explanation """
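The slope and intercept returned by curve_fit can be cross-checked by hand: for a straight line $t = am + b$, least squares has the closed form $a = \sum(m_i-\bar m)(t_i-\bar t)/\sum(m_i-\bar m)^2$ and $b = \bar t - a\bar m$. A sketch using the table above:

```python
mass = [50, 100, 150, 200, 250, 300]
time = [0.29, 0.44, 0.47, 0.53, 0.61, 0.71]

m_bar = sum(mass) / len(mass)
t_bar = sum(time) / len(time)

# Closed-form least-squares fit of t = a*m + b.
a = sum((m - m_bar) * (t - t_bar) for m, t in zip(mass, time)) / \
    sum((m - m_bar) ** 2 for m in mass)
b = t_bar - a * m_bar

print(a, b)  # should agree with curve_fit's (a, b) above
```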
UChicagoPhysics/SampleExercises
exercises/electricityAndMagnetism/Poynting Vector of Half-Wave Antenna.ipynb
gpl-2.0
import numpy as np import matplotlib.pylab as plt """ Explanation: Poynting Vector of Half-Wave Antenna PROGRAM: Poynting vector of half-wave antenna CREATED: 5/30/2018 Import packages. End of explanation """ #Define constants - permeability of free space, speed of light, current amplitude. u_0 = 1.26 * 10**(-6) c = 2.997925 * 10**8 I_0 = 5 #Choose any current amplitude value, I_0. """ Explanation: In this problem, I plot the magnitude of the time averaged Poynting vector for a half-wave antenna. The antenna is oriented vertically in this problem. - In step 1, I define constants in the problem. - $\mu_{0}$ is the permeability of free space. - $c$ is the speed of light. - $I_{0}$ is the magnitude of the current. I've chosen it to be 5 A. - In step 2, I define a function to calculate the time averaged Poynting vector magnitude, $\langle S \rangle = \left(\frac{\mu_{0} c I_{0}^{2}}{8\pi^{2}r^{2}}\right)\frac{\cos^2\left(\frac{\pi}{2}\cos\theta\right)}{\sin^2\theta}$. - In step 3, I plot the Poynting vector (represented by a point at the tip of the Poynting vector) at different angles around the half-wave antenna, one meter from the antenna. - In step 4, I plot the Poynting vector at one meter, two meters, and three meters from the antenna. Its strength (magnitude, or length of vector) decreases. - In step 5, I plot the Poynting vector at different radii as a vector field. This is another way to see that its strength decreases farther from the antenna, and that it is strongest at angles close to 90 degrees and 270 degrees. 1 - Define Constants End of explanation """ def S_avg(x, r): return (u_0 * c * I_0**2)/(8 * np.pi * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2 """ Explanation: 2 - Calculate Time Averaged Poynting Vector End of explanation """ #Plot average Poynting vector magnitude at different angles. fig = plt.figure(figsize = (8, 8)) ax = fig.add_subplot(1, 1, 1, projection = 'polar') #Define a range of angles.
theta = np.arange(0, 2 * np.pi, 0.01) #Plot Poynting vector magnitude at different radii. ax.plot(theta, S_avg(theta, r = 1), color = 'red', label = 'r = 1') #Plot an example vector. x = 0 y = 0 u = S_avg(np.pi/3, r = 1) * np.sin(np.pi/3) v = S_avg(np.pi/3, r = 1) * np.cos(np.pi/3) ax.quiver(x, y, u, v, scale_units = 'xy', scale = 0.5, color = 'red') #Adjust plot labels. ax.set_rticks([100, 200, 300]) ax.set_rlabel_position(0) ax.grid(True) #Flip plot axes to match antenna position from notes. ax.set_theta_offset(np.pi/2) ax.set_theta_direction(-1) #Add title. ax.set_title('Time Average of Poynting Vector at Different Angles Around Antenna, \n One Meter Away', fontsize = 16, verticalalignment = 'bottom') plt.legend(title = 'Radius in Meters', loc = [0.91, 0.91]) plt.tight_layout() plt.show() """ Explanation: 3 - Plot Poynting Vector at Different Angles Around Antenna End of explanation """ #Define a range of angles. theta = np.arange(0, 2 * np.pi, 0.01) def S_avg(x, r): return (u_0 * c * I_0**2)/(8 * np.pi * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2 #Plot average Poynting vector magnitude at different angles. fig = plt.figure(figsize = (8, 8)) ax = fig.add_subplot(1, 1, 1, projection = 'polar') #Plot Poynting vector magnitude at different radii. ax.plot(theta, S_avg(theta, r = 1), color = 'red', label = 'r = 1') ax.plot(theta, S_avg(theta, r = 2), color = 'blue', label = 'r = 2') ax.plot(theta, S_avg(theta, r = 3), color = 'purple', label = 'r = 3') #Plot an example vector. theta = np.pi/3 x = 0 y = 0 u = S_avg(theta, r = 1) v = 0 ax.quiver(x, y, u*np.sin(theta), u*np.cos(theta), scale_units = 'xy', scale = 0.5, color = 'red') #Adjust plot labels. ax.set_rticks([100, 200, 300]) ax.set_rlabel_position(90) ax.grid(True) #Flip plot axes to match antenna position from notes. ax.set_theta_offset(np.pi/2) ax.set_theta_direction(-1) #Add title. 
ax.set_title('Time Average of Poynting Vector at Different Angles Around Antenna', fontsize = 16, verticalalignment = 'bottom') plt.legend(title = 'Radius in Meters', loc = [0.91, 0.91]) plt.tight_layout() plt.savefig('Time Average of Poynting Vector at Different Angles Around Antenna.png') plt.show() def S_avg(x, r): return (u_0 * c * I_0**2)/(8 * np.pi * r**2) * np.cos(np.pi/2 * np.cos(x))**2/np.sin(x)**2 #Plot average Poynting vector magnitude at different angles. fig = plt.figure(figsize = (8, 8)) ax = fig.add_subplot(1, 1, 1, projection = 'polar') meters = 1 n = 36 theta = np.linspace(0, 2 * np.pi , n) r = np.linspace(meters, meters, 1) X, Y = np.meshgrid(theta, r) u = S_avg(X,Y) v = 0 ax.quiver(X, Y, u*np.sin(X), u*np.cos(X), color = 'red', width = 0.005) #Adjust plot labels. ax.set_rticks([1, 2, 3, 4]) ax.set_rlabel_position(90) ax.grid(True) #Flip plot axes to match antenna position from notes. ax.set_theta_offset(np.pi/2) ax.set_theta_direction(-1) #Add title. ax.set_title('Time Average Poynting Vector at Different Angles Around Antenna', fontsize = 16, verticalalignment = 'bottom') #plt.savefig('Time Average Poynting Vector at One Meter from Antenna.png') plt.show() """ Explanation: 3 - Plot Poynting Vector Magnitude at Different Angles Around Antenna for Different Distances from Origin End of explanation """
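A quick numerical check of the expression used in the code above (same assumed constants): the time-averaged flux should be symmetric about broadside, peak at $\theta = \pi/2$, and fall off as $1/r^2$:

```python
from math import pi, cos, sin

u_0, c, I_0 = 1.26e-6, 2.997925e8, 5  # same constants as above

def S_avg(theta, r):
    # Same expression as the notebook's S_avg, scalar version.
    return (u_0 * c * I_0**2) / (8 * pi * r**2) * \
        cos(pi / 2 * cos(theta))**2 / sin(theta)**2

print(S_avg(pi / 2, 1))                        # broadside peak at r = 1 m
print(S_avg(pi / 3, 1), S_avg(2 * pi / 3, 1))  # symmetric about broadside
print(S_avg(pi / 2, 2))                        # 1/r^2 falloff: a quarter of the peak
```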
robertoalotufo/ia898
master/tutorial_ti_2.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import sys,os ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/') if ia898path not in sys.path: sys.path.append(ia898path) import ia898.src as ia """ Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#Transformações-de-intensidade" data-toc-modified-id="Transformações-de-intensidade-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Transformações de intensidade</a></div><div class="lev2 toc-item"><a href="#Descrição" data-toc-modified-id="Descrição-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Descrição</a></div><div class="lev2 toc-item"><a href="#Indexação-por-arrays" data-toc-modified-id="Indexação-por-arrays-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Indexação por arrays</a></div><div class="lev2 toc-item"><a href="#Utilização-em-imagens" data-toc-modified-id="Utilização-em-imagens-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Utilização em imagens</a></div><div class="lev3 toc-item"><a href="#T1:-Função-identidade" data-toc-modified-id="T1:-Função-identidade-131"><span class="toc-item-num">1.3.1&nbsp;&nbsp;</span>T1: Função identidade</a></div><div class="lev3 toc-item"><a href="#T2:-Função-logaritmica" data-toc-modified-id="T2:-Função-logaritmica-132"><span class="toc-item-num">1.3.2&nbsp;&nbsp;</span>T2: Função logaritmica</a></div><div class="lev3 toc-item"><a href="#T3:-Função-negativo" data-toc-modified-id="T3:-Função-negativo-133"><span class="toc-item-num">1.3.3&nbsp;&nbsp;</span>T3: Função negativo</a></div><div class="lev3 toc-item"><a href="#T4:-Função-threshold-128" data-toc-modified-id="T4:-Função-threshold-128-134"><span class="toc-item-num">1.3.4&nbsp;&nbsp;</span>T4: Função threshold 128</a></div><div class="lev3 toc-item"><a href="#T5:-Função-quantização" data-toc-modified-id="T5:-Função-quantização-135"><span class="toc-item-num">1.3.5&nbsp;&nbsp;</span>T5: Função quantização</a></div><div 
class="lev2 toc-item"><a href="#Outras-página-da-toolbox" data-toc-modified-id="Outras-página-da-toolbox-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Other toolbox pages</a></div> # Intensity transformations ## Description Intensity transformations modify the value of each pixel according to an equation or mapping. These transformations are called pointwise, in contrast with so-called neighborhood operations. An example of an intensity transformation is an operation that divides the pixel values by 2. The result is a new image in which all pixels are darker. An intensity transformation has the form $s = T(v)$, where $v$ is an input gray-level value and $s$ is the output gray-level value. This kind of mapping goes by many names: contrast transformation, lookup table, color table or color map, etc. The transformation T can be implemented by a function or by a simple mapping table. NumPy has an elegant and efficient way of applying an intensity mapping to an image. ## Indexing with arrays The ``ndarray`` can be indexed by other ``ndarrays``. Indexing with arrays can be simple, but it can also be quite complex and hard to understand. Indexing with arrays always returns a copy of the original data, not the view or shallow copy normally obtained with *slicing*. For this reason, indexed arrays should be used with caution, since they can lead to less efficient code. Consider a simple one-dimensional numerical example. A vector ``row`` of 10 elements from 0 to 90 is created, and another index vector ``i`` with values [3,5,0,8] will index ``row`` as ``row[i]``.
The result will be [30,50,0,80], which are the elements of ``row`` indexed by ``i``: End of explanation """ row = np.arange(0.,100,10) print('row:', row) i = np.array([[3,5,0,8],[4,2,7,1]]) f = row[i] print('i:', i) print('f=row[i]\n',f) print(id(i),id(row),id(f)) """ Explanation: The index array i must be an integer array. The array being indexed, however, can be of any type. f = row[i] shape(f) equals shape(i) dtype(f) is dtype(row) End of explanation """ f = np.array([[0, 1, 2], [2, 0, 1]]) print('f=\n',f) """ Explanation: Let us now look at the two-dimensional case, suitable for images and intensity transformations. Let f be an image of shape (2,3) with pixel values ranging from 0 to 2: End of explanation """ T = np.array([5, 6, 7]) print('T:', T) for i in np.arange(T.size): print('%d:%d'% (i,T[i])) """ Explanation: Now let T be the intensity transformation, specified by a vector of 3 elements, where T[0] = 5, T[1] = 6 and T[2] = 7: End of explanation """ g = T[f] print('g=T[f]= \n', g) print('g.shape:', g.shape) """ Explanation: The intensity transformation is applied by using the image f as an index into the transformation T, just as written in the mathematical equation: End of explanation """ T1 = np.arange(256).astype('uint8') # identity function T2 = ia.normalize(np.log(T1+1.)) # logarithmic - enhances dark regions T3 = 255 - T1 # negative T4 = ia.normalize(T1 > 128) # threshold at 128 T5 = ia.normalize(T1//30) # reduces the number of gray levels plt.plot(T1) plt.plot(T2) plt.plot(T3) plt.plot(T4) plt.plot(T5) plt.legend(['T1', 'T2', 'T3', 'T4','T5'], loc='right') plt.xlabel('input values') plt.ylabel('output values') plt.show() """ Explanation: Note that T[f] has the same shape as f; its pixels, however, have been passed through the mapping table T.
Using it on images There are many useful operations that can be built from the mapping T: contrast enhancement, histogram equalization, thresholding, gray-level reduction, image negative, among many others. It is common to show the intensity transformation table as a plot. Below, several transformation functions are computed: End of explanation """ nb = ia.nbshow(2) f = mpimg.imread('../data/cameraman.tif') f1 = T1[f] nb.nbshow(f,'original') plt.plot(T1) plt.title('T1: identity') nb.nbshow(f1,'T1[f]') nb.nbshow() """ Explanation: See these tables applied to the image "cameraman.tif": T1: Identity function End of explanation """ f2 = T2[f] nb.nbshow(f,'original') plt.plot(T2) plt.title('T2: logarithmic') nb.nbshow(f2,'T2[f]') nb.nbshow() """ Explanation: T2: Logarithmic function End of explanation """ f3 = T3[f] nb.nbshow(f,'original') plt.plot(T3) plt.title('T3: negative') nb.nbshow(f3,'T3[f]') nb.nbshow() """ Explanation: T3: Negative function End of explanation """ f4 = T4[f] nb.nbshow(f,'original') plt.plot(T4) plt.title('T4: threshold 128') nb.nbshow(f4,'T4[f]') nb.nbshow() """ Explanation: T4: Threshold-at-128 function End of explanation """ f5 = T5[f] nb.nbshow(f,'original') plt.plot(T5) plt.title('T5: quantization') nb.nbshow(f5,'T5[f]') nb.nbshow() """ Explanation: T5: Quantization function End of explanation """ h = ia.histogram(f) h2 = ia.histogram(f2) # logarithmic h3 = ia.histogram(f3) # negative h4 = ia.histogram(f4) # threshold h5 = ia.histogram(f5) # quantization plt.plot(h) #plt.plot(h2) #plt.plot(h3) #plt.plot(h4) plt.plot(h5) plt """ Explanation: Looking at the histogram of each image after the mapping: End of explanation """ f = ia.normalize(np.arange(1000000).reshape(1000,1000)) %timeit g2t = T2[f] %timeit g2 = ia.normalize(np.log(f+1.)) %timeit g3t = T3[f] %timeit g3 = 255 - f """ Explanation: From an efficiency point of view, which is better: using the mapping table, or processing the image directly?
End of explanation """
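The negative lookup table T3 can be verified directly without the ia898 toolbox: indexing with the image must give exactly the same result as the arithmetic expression. A minimal sketch with a small random stand-in image (cameraman.tif is not assumed to be available):

```python
import numpy as np

rng = np.random.RandomState(0)
f = rng.randint(0, 256, size=(4, 5)).astype(np.uint8)  # small stand-in image

T3 = (255 - np.arange(256)).astype(np.uint8)  # negative lookup table
g = T3[f]                                     # apply the LUT by fancy indexing

print(np.array_equal(g, 255 - f))  # LUT indexing matches direct arithmetic
print(g.shape == f.shape, g.dtype)
```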
huazhisong/race_code
kaggle_ws/titanic_ws/Titanic Data Science Solutions.ipynb
gpl-3.0
# data analysis and wrangling import pandas as pd import numpy as np import random as rnd # visualization import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # machine learning from sklearn.linear_model import LogisticRegression from sklearn.svm import SVC, LinearSVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import Perceptron from sklearn.linear_model import SGDClassifier from sklearn.tree import DecisionTreeClassifier """ Explanation: Workflow: 1. Define the problem 2. Acquire the training and test sets 3. Clean the data and get it ready 4. Analyze, identify patterns, and explore the data 5. Build models, predict, and solve the problem 6. Visualize the problem-solving steps and the final solution 7. Submit the answer Goals: classifying, classify the samples considering their correlation with the target classes; correlating, the correlation of features with the target; converting, convert feature values into types the model accepts; completing, fill in existing missing values; correcting, fix erroneous feature values; creating, create new features; charting, choose the right charts. Let the real testing begin End of explanation """ train_df = pd.read_csv(r'E:\song_ws\data\kaggle\Titanic\train.csv') test_df = pd.read_csv(r'E:\song_ws\data\kaggle\Titanic\test.csv') combine = [train_df, test_df] """ Explanation: Acquire the datasets End of explanation """ print(train_df.columns.values) """ Explanation: Features of the dataset End of explanation """ # preview the data train_df.head() train_df.tail() train_df.info() print('_'*40) test_df.info() """ Explanation: PassengerId: passenger ID number, with no real meaning Survived: whether the passenger ultimately survived, 0 means No, 1 means Yes Pclass: ticket class, 1 means Upper, 2 means Middle, 3 means Lower Sex: gender Age: age in years; if less than 1 the age is fractional, and if the age is estimated it may have the form xx.5 SibSp: siblings or spouses aboard; Sibling includes brothers, sisters, and step-siblings Parch: parents or children aboard Ticket: ticket number Fare: passenger fare Cabin: cabin number Embarked: port of embarkation, C means Cherbourg, Q means Queenstown, S means Southampton Categorical features: Survived, Sex, Embarked; Ordinal: Pclass. Continuous features: Age, Fare; Discrete: SibSp, Parch End of explanation """ train_df.describe() train_df.describe(include=['O']) """ Explanation: Ticket is a mix of digits and letters, and Cabin is letters followed by digits. Name may contain spelling errors. In the training set, Cabin > Age > Embarked have missing values. In the test set, Cabin > Age have missing values. Distribution of the data End of explanation """ train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False) train_df[["Sex",
"Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False) train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False) train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False) train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False) """ Explanation: Analyzing the features End of explanation """ g = sns.FacetGrid(train_df, col='Survived') g.map(plt.hist, 'Age', bins=20) """ Explanation: 1. Different Pclass categories clearly have different survival probabilities; the higher the class, the higher the survival rate. 2. Sex: women's survival rate is clearly far higher than men's. 3. SibSp and Parch show no obvious correlation with survival. 4. Embarked: survival rates differ slightly by port of embarkation, and port C is noticeably higher. End of explanation """ # grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived') grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6) grid.map(plt.hist, 'Age', alpha=.5, bins=20) grid.add_legend() """ Explanation: 1. Infants have a relatively high survival rate. 2. The eighty-year-old passengers all survived. 3. Most passengers are between 15 and 35. 4. The death rate is highest between 15 and 25. End of explanation """ grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6) grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep') grid.add_legend() """ Explanation: 1. Most passengers are in Pclass=3, but it has the highest death rate. 2. The infants in Pclass=2 all survived. 3. Most people in Pclass=1 survived. End of explanation """ grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6) grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None) grid.add_legend() """ Explanation: 1. Women's survival rate is generally higher than men's. 2. For Embarked=C, the female survival rate is lower than the male one, so this does not show a direct relation between Embarked and Survived. 3. Survival rates differ across different Embarked values. End of explanation """
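What `.groupby(['Sex']).mean()` computes above can be sketched without pandas; this toy re-implementation on hypothetical passenger rows makes the per-category survival rate explicit:

```python
# Hypothetical passenger rows: (sex, survived)
rows = [("female", 1), ("female", 1), ("female", 0),
        ("male", 0), ("male", 1), ("male", 0), ("male", 0)]

def survival_rate_by(rows):
    # Equivalent of groupby(key).mean() on the survived column.
    totals = {}
    for key, survived in rows:
        n, s = totals.get(key, (0, 0))
        totals[key] = (n + 1, s + survived)
    return {k: s / n for k, (n, s) in totals.items()}

rates = survival_rate_by(rows)
print(rates)
```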
rsignell-usgs/notebook
HOPS/hops_velocity3.ipynb
mit
from netCDF4 import Dataset #url = ('http://geoport.whoi.edu/thredds/dodsC/usgs/data2/rsignell/gdrive/' # 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc') url = ('/usgs/data2/rsignell/gdrive/' 'nsf-alpha/Data/MIT_MSEAS/MSEAS_Tides_20160317/mseas_tides_2015071612_2015081612_01h.nc') nc = Dataset(url) """ Explanation: The problem: CF compliant readers cannot read HOPS dataset directly. The solution: read with the netCDF4-python raw interface and create a CF object from the data. NOTE: Ideally this should be a nco script that could be run as a CLI script and fix the files. Here I am using Python+iris. That works and could be written as a CLI script too. The main advantage is that it takes care of the CF boilerplate. However, this approach is to "heavy-weight" to be applied in many variables and files. End of explanation """ vtime = nc['time'] coords = nc['vgrid2'] vbaro = nc['vbaro'] """ Explanation: Extract lon, lat variables from vgrid2 and u, v variables from vbaro. The goal is to split the joint variables into individual CF compliant phenomena. End of explanation """ import iris iris.FUTURE.netcdf_no_unlimited = True longitude = iris.coords.AuxCoord(coords[:, :, 0], var_name='vlat', long_name='lon values', units='degrees') latitude = iris.coords.AuxCoord(coords[:, :, 1], var_name='vlon', long_name='lat values', units='degrees') # Dummy Dimension coordinate to avoid default names. # (This is either a bug in CF or in iris. We should not need to do this!) lon = iris.coords.DimCoord(range(866), var_name='x', long_name='lon_range', standard_name='longitude') lat = iris.coords.DimCoord(range(1032), var_name='y', long_name='lat_range', standard_name='latitude') """ Explanation: Using iris to create the CF object. NOTE: ideally lon, lat should be DimCoord like time and not AuxCoord, but iris refuses to create 2D DimCoord. Not sure if CF enforces that though. First the Coordinates. FIXME: change to a full time slice later! 
End of explanation """ vbaro.shape import numpy as np u_cubes = iris.cube.CubeList() v_cubes = iris.cube.CubeList() for k in range(vbaro.shape[0]): # vbaro.shape[0] time = iris.coords.DimCoord(vtime[k], var_name='time', long_name=vtime.long_name, standard_name='time', units=vtime.units) u = vbaro[k, :, :, 0] u_cubes.append(iris.cube.Cube(np.broadcast_to(u, (1,) + u.shape), units=vbaro.units, long_name=vbaro.long_name, var_name='u', standard_name='barotropic_eastward_sea_water_velocity', dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)], aux_coords_and_dims=[(latitude, (1, 2)), (longitude, (1, 2))])) v = vbaro[k, :, :, 1] v_cubes.append(iris.cube.Cube(np.broadcast_to(v, (1,) + v.shape), units=vbaro.units, long_name=vbaro.long_name, var_name='v', standard_name='barotropic_northward_sea_water_velocity', dim_coords_and_dims=[(time, 0), (lon, 1), (lat, 2)], aux_coords_and_dims=[(longitude, (1, 2)), (latitude, (1, 2))])) """ Explanation: Now the phenomena. NOTE: You don't need the broadcast_to trick if saving more than 1 time step. Here I just wanted the single time snapshot to have the time dimension to create a full example. End of explanation """ u_cube = u_cubes.concatenate_cube() v_cube = v_cubes.concatenate_cube() cubes = iris.cube.CubeList([u_cube, v_cube]) """ Explanation: Join the individual CF phenomena into one dataset. End of explanation """ iris.save(cubes, 'hops.nc') !ncdump -h hops.nc """ Explanation: Save the CF-compliant file! End of explanation """
sauravrt/signal-processing
ipynb/BeamformingFFT.ipynb
gpl-2.0
from IPython.display import YouTubeVideo YouTubeVideo('DVi1TC24_BY') """ Explanation: Beamforming and FFT This notebook explains the relation between spatial beamforming and the time domain FFT operation and show how beamforming can be implemented using FFT. The content presented below is loosely based on a tutorial by Prof. John R. Buck at UMassD. This document is presented as an IPython notebook. This video tutorial demonstrates how to setup Anaconda package for Python on a Windows machine. Similar approach is applicable for Linux and OSX platforms. End of explanation """ import numpy as np import matplotlib.pyplot as plt import scipy.signal as signal from scipy.fftpack import fft, fftshift, ifft from numpy.random import randn %matplotlib notebook from matplotlib import rc rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']}) ## for Palatino and other serif fonts use: #rc('font',**{'family':'serif','serif':['Palatino']}) #rc('text', usetex=True) """ Explanation: Discrete Fourier Transform (DFT) Given a finite length discrete time sequence $x[m]$ for $0 \leq m \leq M - 1$, the discrete time fourier transform (DTFT) is defined as $$X(e^{j\omega}) = \sum\limits_{m=0}^{M-1} x[n] e^{-jm\omega}.$$ $X(e^{j\omega})$ is a continuous function of $\omega$. The discrete fourier transform (DFT) can be viewed as the fequency domain sampling of DTFT every $\Delta\omega = 2\pi/N$. Time domain sampling results in creating copies of the spectrum in the frequency domain. Similarly, sampling in the frequency domain has an effect of creatign copies of the time domain sequence at every $N$. Hence to avoid aliasing in time, we need to choose $N \geq M$ to avoid aliasing in time. The DFT is defined as $$ X[l] = X(e^{j\omega})\vert_{\omega = 2\pi l/N} $$ $$X[l] = \sum\limits_{n=0}^{N-1} x[n] e^{-j(2\pi/N)ln} \quad l = 0,\ldots, N-1$$ Above computation requries $\mathcal{O}(N^2)$. 
Fast Fourier Trasform is a divide-and-conquer based approach to evaluate the DFT with fewer number of operation, specifically $\mathcal{O}(N\log_2 N)$. finite length signals = finite aperture Time domain DTFT : Frequency response: $H(e^{j\omega}) = \sum\limits_{m=0}^{M-1}h[m]e^{-j\omega m}$ $X_N[l] = DFT(x[m])$ Spatial domain Finite set of narrowband array data $\mathbf{x}$ uniform line array (ULA) $u = \cos(\theta)$ $k = 2\pi/\lambda$ $$ \begin{align} [\mathbf{v}(\theta)]_n =& [e^{j(2\pi d/\lambda)n\cos(\theta)}] \ =& [e^{j(2\pi d/\lambda)nu}] \end{align} $$ Scanned response $$y = \mathbf{w}^{H}(u)\mathbf{x}$$ $$\mathbf{w}(u) = \frac{1}{M}[e^{-jkdmu}]$$ $$y(u) = \frac{1}{N}\sum\limits x_m e^{jkdmu}$$ Beampattern Gain for each plane wave from different direction $$B(u) = \mathbf{w}^{H}\mathbf{v}(u), \quad \text{for} -1\leq u \leq 1$$ $$ B(u) = \sum w^* e^{-jkdmu}$$ $\omega = kdu$ and $\Delta\omega = kd\Delta u = 2\pi/N$ fft(conj(w), N) If $d = \lambda/2$ so that $kd = \pi$ then $\Delta \omega = \pi \Delta u$ Side note * Need to be careful when $d \neq \lambda/2$, the FFT output will have invisible areas too. 
* scanned response can be implemented as IFFT * Test with non-symmetric scenario * FFT is memory efficient, 'in place' * Creating exponential vectors will be memory intensive End of explanation """ def steering_vector(N, u, beta): # beta: sampling ratio, 1 => Nyquist sampling n = np.arange(0, N) sv = np.exp(-1j*np.pi*beta*u*n) return sv N = 11 sv = steering_vector(N, np.array(0), 1) Nfft = 1024 """ Explanation: Function defines the steering/replica vector $[\mathbf{v}] = [e^{-j2\pi d/\lambda n \cos(\theta)}]$ End of explanation """ uscan = np.linspace(-1, 1, Nfft) V = np.empty((11, Nfft), dtype=complex) idx = 0 for u in uscan: V[:, idx] = steering_vector(11, u, 1) idx = idx + 1 BP = np.dot(sv/N, V) """ Explanation: Direct implementation End of explanation """ u = np.linspace(-(Nfft-1)/2, Nfft/2, Nfft)*2/Nfft BP_fft = (1/N)*fftshift(fft(sv, Nfft)) f, (ax1, ax2) = plt.subplots(1, 2, sharey=True) ax1.plot(u, 20*np.log10(np.abs(BP_fft)), lw=2) ax1.set_title('FFT Beampattern') ax2.plot(uscan, 20*np.log10(np.abs(BP)), lw=2) ax2.set_title('u-scan Beampattern') ax1.set_xlabel(r'u = \cos(\theta)') ax1.grid(True) ax2.grid(True) ax1.set_xl """ Explanation: FFT based implementation End of explanation """ u1 = 3/11 u2 = -5/11 sv1 = steering_vector(N, u1, 1) sv2 = steering_vector(N, u2, 1) x = sv1 + 0.5*sv2 X = (Nfft/N)*fftshift(ifft(x, Nfft)) plt.figure() plt.plot(u, 20*np.log10(np.abs(X)), lw=2) plt.ylim((-80, 0)) plt.axvline(x=u1, color='k', linestyle='--', alpha=0.5) plt.axvline(x=u2, color='k', linestyle='--', alpha=0.5) plt.grid(True) plt.xlabel(r'u = \cos( \theta )') plt.ylabel('Scan response dB') plt.annotate('u_1', xy=(u1, 0.1), xytext=(u1, 0.2)) plt.annotate('u_2', xy=(u2, 0.1), xytext=(u2, 0.2)) #plt.savefig(filename='two_source_response.pdf', dpi=120) """ Explanation: Scan response example End of explanation """
maartenbreddels/vaex
examples/healpix_plotting.ipynb
mit
# Make sure you have healpy installed by running either command #!conda install -c conda-forge healpy #!pip install healpy import vaex as vx import healpy as hp %matplotlib inline tgas = vx.datasets.tgas.fetch() """ Explanation: Healpix plotting End of explanation """ level = 2 factor = 34359738368 * (4**(12-level)) nmax = hp.nside2npix(2**level) counts = tgas.count(binby="source_id/" + str(factor), limits=[0, nmax], shape=nmax) counts """ Explanation: Understanding healpix with vaex Using healpix is made available using the healpy package. Vaex does not need special support for healpix, only for plotting, but some helper functions are introduced to make working with healpix easier. To understand this better, we will start from the beginning. If we want to make a density sky plot, we would like to pass healpy a 1d numpy array where each value represents the density at a location of the sphere, where the location is determined by the array size (the healpix level) and the offset (the location). Since the Gaia data includes the healpix index encoded in the source_id. By diving the source_id by 34359738368 you get a healpix index level 12, and diving it further will take you to lower levels. End of explanation """ hp.mollview(counts, nest=True) """ Explanation: Using the healpy package, we can plot this in a molleweide projection End of explanation """ counts = tgas.healpix_count(healpix_level=6) hp.mollview(counts, nest=True) """ Explanation: To avoid typing this over and over again, instead, we can use Dataset.healpix_count. End of explanation """ tgas.healpix_plot(f="log1p", healpix_level=6, figsize=(10,8), healpix_output="ecliptic") """ Explanation: Using vaex for plotting Instead of using healpy, we can use vaex' Dataset.healpix_plot method. End of explanation """
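The source_id arithmetic above can be checked without vaex or healpy. This is an illustrative sketch on synthetic IDs (the divisor 34359738368 * 4**(12 - level) and the pixel count 12 * 4**level come from the nested healpix scheme described above; the IDs here are made up, not real TGAS data):

```python
import numpy as np

level = 2
factor = 34359738368 * 4 ** (12 - level)   # same divisor as the binby expression
n_pix = 12 * 4 ** level                    # hp.nside2npix(2**level) without healpy

rng = np.random.default_rng(0)
pixels = rng.integers(0, n_pix, size=1000)                          # assumed pixel labels
source_ids = pixels * factor + rng.integers(0, factor, size=1000)   # sub-pixel bits are arbitrary

recovered = source_ids // factor           # integer division recovers the pixel
counts = np.bincount(recovered, minlength=n_pix)  # the histogram count(binby=...) builds

assert np.array_equal(recovered, pixels)
assert counts.sum() == 1000
```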
MLWave/kepler-mapper
docs/notebooks/KeplerMapper-Newsgroup20-Pipeline.ipynb
mit
# from kmapper import jupyter import kmapper as km import numpy as np from sklearn.datasets import fetch_20newsgroups from sklearn.cluster import AgglomerativeClustering from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.decomposition import TruncatedSVD from sklearn.manifold import Isomap from sklearn.preprocessing import MinMaxScaler """ Explanation: KeplerMapper & NLP examples Newsgroups20 End of explanation """ newsgroups = fetch_20newsgroups(subset='train') X, y, target_names = np.array(newsgroups.data), np.array(newsgroups.target), np.array(newsgroups.target_names) print("SAMPLE",X[0]) print("SHAPE",X.shape) print("TARGET",target_names[y[0]]) """ Explanation: Data We will use the Newsgroups20 dataset. This is a canonical NLP dataset containing 11314 labeled postings on 20 different newsgroups. End of explanation """ mapper = km.KeplerMapper(verbose=2) projected_X = mapper.fit_transform(X, projection=[TfidfVectorizer(analyzer="char", ngram_range=(1,6), max_df=0.83, min_df=0.05), TruncatedSVD(n_components=100, random_state=1729), Isomap(n_components=2, n_jobs=-1)], scaler=[None, None, MinMaxScaler()]) print("SHAPE",projected_X.shape) """ Explanation: Projection To project the unstructured text dataset down to 2 fixed dimensions, we will set up a function pipeline. Every consecutive function will take as input the output from the previous function. We will try out "Latent Semantic Char-Gram Analysis followed by Isometric Mapping". TFIDF vectorize (1-6)-chargrams and discard the top 17% and bottom 5% chargrams. Dimensionality = 13967. Run TruncatedSVD with 100 components on this representation. TFIDF followed by Singular Value Decomposition is called Latent Semantic Analysis. Dimensionality = 100. Run Isomap embedding on the output from previous step to project down to 2 dimensions. Dimensionality = 2. MinMaxScale the output from previous step. Dimensionality = 2. 
End of explanation
"""
from sklearn import cluster

graph = mapper.map(projected_X,
                   inverse_X=None,
                   clusterer=cluster.AgglomerativeClustering(n_clusters=3,
                                                             linkage="complete",
                                                             affinity="cosine"),
                   overlap_perc=0.33)
"""
Explanation: Mapping
We cover the projection with 10 33%-overlapping intervals per dimension (10*10=100 cubes total). We cluster on the projection (but, note, we can also create an inverse_X to cluster on by vectorizing the original text data).
For clustering we use Agglomerative Clustering with complete linkage, the "cosine"-distance and 3 clusters. Agglomerative Clustering is a good cluster algorithm for TDA, since it both creates pleasing informative networks, and it has strong theoretical guarantees (see functor and functoriality).
End of explanation
"""
vec = TfidfVectorizer(analyzer="word",
                      strip_accents="unicode",
                      stop_words="english",
                      ngram_range=(1,3),
                      max_df=0.97,
                      min_df=0.02)

interpretable_inverse_X = vec.fit_transform(X).toarray()
interpretable_inverse_X_names = vec.get_feature_names()

print("SHAPE", interpretable_inverse_X.shape)
print("FEATURE NAMES SAMPLE", interpretable_inverse_X_names[:400])
"""
Explanation: Interpretable inverse X
Here we show the flexibility of KeplerMapper by creating an interpretable_inverse_X that is easier to interpret by humans. For text, this can be TFIDF (1-3)-wordgrams, like we do here. For structured data this can be regulatory/protected variables of interest, or using another model to select, say, the top 10% features.
End of explanation """ html = mapper.visualize(graph, inverse_X=interpretable_inverse_X, inverse_X_names=interpretable_inverse_X_names, path_html="newsgroups20.html", projected_X=projected_X, projected_X_names=["ISOMAP1", "ISOMAP2"], title="Newsgroups20: Latent Semantic Char-gram Analysis with Isometric Embedding", custom_tooltips=np.array([target_names[ys] for ys in y]), color_function=y) # jupyter.display("newsgroups20.html") """ Explanation: Visualization We use interpretable_inverse_X as the inverse_X during visualization. This way we get cluster statistics that are more informative/interpretable to humans (chargrams vs. wordgrams). We also pass the projected_X to get cluster statistics for the projection. For custom_tooltips we use a textual description of the label. The color function is simply the multi-class ground truth represented as a non-negative integer. End of explanation """
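The covering step that km.map() performs can be sketched in plain NumPy. This is a toy one-dimensional version (my own illustration of the idea, not KeplerMapper's actual implementation): split the projection into overlapping intervals, so neighbouring "cubes" share points and their clusters can later be linked into a graph.

```python
import numpy as np

def overlapping_cover(proj, n_intervals=10, overlap=0.33):
    # split [min, max] into n base intervals, widened by `overlap` on each side
    lo, hi = proj.min(), proj.max()
    length = (hi - lo) / n_intervals
    pad = length * overlap
    members = []
    for i in range(n_intervals):
        a = lo + i * length - pad
        b = lo + (i + 1) * length + pad
        members.append(np.where((proj >= a) & (proj <= b))[0])
    return members

proj = np.linspace(0.0, 1.0, 101)      # stand-in for one ISOMAP coordinate
cover = overlapping_cover(proj)

assert len(np.unique(np.concatenate(cover))) == proj.size   # every point covered
assert np.intersect1d(cover[0], cover[1]).size > 0          # neighbours overlap
```

Clustering each member set separately, and connecting clusters that share sample indices, is the essence of the Mapper graph built above.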
martinjrobins/hobo
examples/interfaces/statsmodels-state-space.ipynb
bsd-3-clause
import pints import pints.toy as toy import pints.plot import numpy as np import matplotlib.pyplot as plt """ Explanation: Interface to statsmodels: state space time series models This notebook provides a short exposition of how it is possible to interface with the cornucopia of time series models provided by the statsmodels package. In this notebook, we illustrate how to fit the logistic ODE model, where the errors are described by state space models. End of explanation """ from statsmodels.tsa.statespace.structural import UnobservedComponents """ Explanation: Fitting a local level state space model We assume that the observed data $y(t)$ follows $$y(t)= f(t; \theta) + \epsilon(t),$$ where $f(t; \theta)$ is the logistic model solution. The errors are assumed to follow a local level model: $$ \begin{align} &\epsilon(t) = \mu(t) + \nu(t),\ &\mu(t) = \mu(t-1) + \eta(t), \end{align} $$ as described here (also see this link for a description of the huge family of state space models available in this package). Here, $\nu(t) \sim \mathcal{N}(0, \sigma_\nu)$ and $\eta(t) \sim \mathcal{N}(0, \sigma_\eta)$. The two parameters of the error process are the variances: $\sigma_\nu^2$ and $\sigma_\eta^2$. 
End of explanation """ import scipy.stats # Load a forward model model = toy.LogisticModel() # Create some toy data real_parameters = [0.015, 500] times = np.linspace(0, 1000, 1000) org_values = model.simulate(real_parameters, times) # function to generate local level errors def local_level(n, sigma_nu, sigma_eta): nu = scipy.stats.norm.rvs(0, sigma_nu, n) eta = scipy.stats.norm.rvs(0, sigma_eta, n) epsilon = np.zeros(n) mu = np.zeros(n) mu[0] = eta[0] for i in range(1, n): mu[i] = mu[i - 1] + eta[i] for i in range(n): epsilon[i] = mu[i] + nu[i] return epsilon sigma_nu = 10 sigma_eta = 1 errors = local_level(len(org_values), sigma_nu, sigma_eta) values = org_values + errors # Show the noisy data plt.figure() plt.plot(times, org_values) plt.plot(times, values) plt.xlabel('time') plt.ylabel('y') plt.legend(['true', 'observed']) plt.show() """ Explanation: We first generate some local level noise and overlay on the ODE. Note that this noise is non-stationary, so the observed data can drift away from the actual solution for long periods. End of explanation """ sol = model.simulate([0.15, 500], times) model = UnobservedComponents(endog=values, level='llevel', exog=sol) model.param_names """ Explanation: Note, that when wrapping the various models in this package into Pints, a useful function is the model.param_names function that provides a description of these parameters. We illustrate this below: note that beta.x1 is the coefficient on the ODE solution, so needs to be supplied as '1' in the wrapper we show later. 
End of explanation
"""
class StateSpaceLogLikelihood(pints.ProblemLogLikelihood):
    def __init__(self, problem, ss_model, ss_n_params):
        super(StateSpaceLogLikelihood, self).__init__(problem)
        self._nt = len(self._times) - 1
        self._no = problem.n_outputs()
        self._ss_model = ss_model
        self._ss_n_params = ss_n_params
        self._n_parameters = problem.n_parameters() + ss_n_params * self._no
        self._m = self._ss_n_params * self._no

    def __call__(self, x):
        # converting to list makes it easier to append
        # nuisance params
        x = x.tolist()
        m = self._m

        # extract noise model params
        parameters = x[-m:]

        sol = self._problem.evaluate(x[:-m])
        model = UnobservedComponents(endog=self._problem.values(),
                                     level=self._ss_model,
                                     exog=sol)

        # add nuisance parameter at end: the coefficient
        # on the ODE model solution (which should be 1)
        return model.loglike(parameters + [1])
"""
Explanation: Wrapping a state space model in a Pints log-likelihood. Here, ss_model is a string corresponding to the list of prebuilt state space models here; ss_n_params is the number of parameters that the particular time series model involves.
End of explanation """ model = toy.LogisticModel() # Create an object with links to the model and time series problem = pints.SingleOutputProblem(model, times, values) # Create a log-likelihood function log_likelihood = StateSpaceLogLikelihood(problem, 'llevel', 2) # Create a uniform prior over both the parameters and the new noise variable log_prior = pints.UniformLogPrior( [0.00, 400, 0, 0], [0.05, 600, 3000, 3000], ) # Create a posterior log-likelihood (log(likelihood * prior)) log_posterior = pints.LogPosterior(log_likelihood, log_prior) # Choose starting points for 3 mcmc chains real_parameters = np.array([0.015, 500] + [sigma_nu**2, sigma_eta**2]) xs = [ real_parameters * 1.05, real_parameters * 1, real_parameters * 1.025 ] # Create mcmc routine mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC) # Add stopping criterion mcmc.set_max_iterations(4000) # Disable logging mcmc.set_log_to_screen(False) # Run! print('Running...') chains = mcmc.run() print('Done!') """ Explanation: Instantiate the model, log-posterior etc., then run MCMC. End of explanation """ pints.plot.trace(chains, parameter_names=[r'$r$', r'$k$', r'$\sigma^2_{\nu}$', r'$\sigma^2_{\eta}}$'], ref_parameters=real_parameters) plt.show() """ Explanation: Show traces. End of explanation """ results = pints.MCMCSummary(chains=chains, parameter_names=["r", "k", "sigma^2_nu", "sigma^2_eta"]) print(results) """ Explanation: Look at summary stats: all parameters look similar to their true values. End of explanation """
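For intuition about what UnobservedComponents evaluates internally, the local level model admits a simple scalar Kalman filter. The sketch below is a hand-rolled, independent version (not statsmodels' implementation); the variances 100 and 1 are assumed to match the $\sigma_\nu = 10$, $\sigma_\eta = 1$ used for the simulated data above:

```python
import numpy as np

def local_level_filter(y, sigma2_nu, sigma2_eta, a0=0.0, p0=1e6):
    """Scalar Kalman filter for y_t = mu_t + nu_t, mu_t = mu_{t-1} + eta_t."""
    a, p = a0, p0                        # state mean / variance (vague start)
    filtered = np.empty(len(y))
    for t, yt in enumerate(y):
        f = p + sigma2_nu                # innovation variance
        k = p / f                        # Kalman gain
        a = a + k * (yt - a)             # filtered level estimate
        filtered[t] = a
        p = p * (1.0 - k) + sigma2_eta   # one-step-ahead state variance
    return filtered

rng = np.random.default_rng(1)
level = np.cumsum(rng.normal(0.0, 1.0, 500))   # random-walk level, sigma_eta = 1
y = level + rng.normal(0.0, 10.0, 500)         # observations, sigma_nu = 10

est = local_level_filter(y, sigma2_nu=100.0, sigma2_eta=1.0)
# the filtered level should track the true level far better than the raw data
assert np.mean((est - level)**2) < np.mean((y - level)**2)
```

The prediction-error decomposition of these innovations is what `model.loglike` sums when the MCMC sampler evaluates the wrapped likelihood.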
ngcm/training-public
FEEG6016 Simulation and Modelling/01-Monte-Carlo-Lab-1.ipynb
mit
from IPython.core.display import HTML css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css' HTML(url=css_file) """ Explanation: Monte Carlo Methods: Lab 1 Take a look at Chapter 10 of Newman's Computational Physics with Python where much of this material is drawn from. End of explanation """ %matplotlib inline import numpy from matplotlib import pyplot from matplotlib import rcParams rcParams['font.family'] = 'serif' rcParams['font.size'] = 16 rcParams['figure.figsize'] = (12,6) from __future__ import division def f(x): return numpy.sin(1.0/(x*(2.0-x)))**2 x = numpy.linspace(0.0, 2.0, 10000) pyplot.plot(x, f(x)) pyplot.xlabel(r"$x$") pyplot.ylabel(r"$\sin^2([x(x-2)]^{-1})$"); """ Explanation: Integration If we have an ugly function, say $$ \begin{equation} f(x) = \sin^2 \left(\frac{1}{x (2-x)}\right), \end{equation} $$ then it can be very difficult to integrate. To see this, just do a quick plot. End of explanation """ pyplot.fill_between(x, f(x)) pyplot.xlabel(r"$x$") pyplot.ylabel(r"$\sin^2([x(x-2)]^{-1})$"); """ Explanation: We see that as the function oscillates infinitely often, integrating this with standard methods is going to be very inaccurate. However, we note that the function is bounded, so the integral (given by the shaded area below) must itself be bounded - less than the total area in the plot, which is $2$ in this case. End of explanation """ def mc_integrate(f, domain_x, domain_y, N = 10000): """ Monte Carlo integration function: to be completed. Result, for the given f, should be around 1.46. 
""" import numpy.random return I """ Explanation: So if we scattered (using a uniform random distribution) a large number of points within this box, the fraction of them falling below the curve is approximately the integral we want to compute, divided by the area of the box: $$ \begin{equation} I = \int_a^b f(x) \, dx \quad \implies \quad I \simeq \frac{k A}{N} \end{equation} $$ where $N$ is the total number of points considered, $k$ is the number falling below the curve, and $A$ is the area of the box. We can choose the box, but we need $y \in [\min_{x \in [a, b]} (f(x)), \max_{x \in [a, b]} (f(x))] = [c, d]$, giving $A = (d-c)(b-a)$. So let's apply this technique to the function above, where the box in $y$ is $[0,1]$. End of explanation """ def mv_integrate(f, domain_x, N = 10000): """ Mean value Monte Carlo integration: to be completed """ import numpy.random return I """ Explanation: Accuracy To check the accuracy of the method, let's apply this to calculate $\pi$. The area of a circle of radius $2$ is $4\pi$, so the area of the quarter circle in $x, y \in [0, 2]$ is just $\pi$: $$ \begin{equation} \pi = \int_0^2 \sqrt{4 - x^2} \, dx. \end{equation} $$ Check the convergence of the Monte Carlo integration with $N$. (I suggest using $N = 100 \times 2^i$ for $i = 0, \dots, 19$; you should find the error scales roughly as $N^{-1/2}$) Mean Value Method Monte Carlo integration is pretty inaccurate, as seen above: it converges slowly, and has poor accuracy at all $N$. An alternative is the mean value method, where we note that by definition the average value of $f$ over the interval $[a, b]$ is precisely the integral multiplied by the width of the interval. Hence we can just choose our $N$ random points in $x$ as above, but now just compute $$ \begin{equation} I \simeq \frac{b-a}{N} \sum_{i=1}^N f(x_i). 
\end{equation} $$ End of explanation """ def mc_integrate_multid(f, domain, N = 10000): """ Monte Carlo integration in arbitrary dimensions (read from the size of the domain): to be completed """ return I from scipy import special def volume_hypersphere(ndim=3): return numpy.pi**(float(ndim)/2.0) / special.gamma(float(ndim)/2.0 + 1.0) """ Explanation: Let's look at the accuracy of this method again applied to computing $\pi$. The convergence rate is the same (only roughly, typically), but the Mean Value method is expected to be better in terms of its absolute error. Dimensionality Compared to standard integration methods (Gauss quadrature, Simpson's rule, etc) the convergence rate for Monte Carlo methods is very slow. However, there is one crucial advantage: as you change dimension, the amount of calculation required is unchanged, whereas for standard methods it grows geometrically with the dimension. Try to compute the volume of an $n$-dimensional unit hypersphere, which is the object in $\mathbb{R}^n$ such that $$ \begin{equation} \sum_{i=1}^n x_i^2 \le 1. \end{equation} $$ The volume of the hypersphere can be found in closed form, but can rapidly be computed using the Monte Carlo method above, by counting the $k$ points that randomly fall within the hypersphere and using the standard formula $I \simeq V k / N$. End of explanation """ x = numpy.linspace(0.01,1,2000) p = 1/(2*numpy.sqrt(x)) q = 1/(3*x**(2/3)) K = 1.6 pyplot.semilogy(x, p, lw=2, label=r"$p(x)$") pyplot.semilogy(x, K * q, lw=2, label=r"$K q(x)$") pyplot.xlabel(r"$x$") pyplot.legend() pyplot.show() """ Explanation: Now let us repeat this across multiple dimensions. The errors clearly vary over a range, but the convergence remains roughly as $N^{-1/2}$ independent of the dimension; using other techniques such as Gauss quadrature would see the points required scaling geometrically with the dimension. 
Importance sampling
Consider the integral (which arises, for example, in the theory of Fermi gases)
$$ \begin{equation} I = \int_0^1 \frac{x^{-1/2}}{e^x + 1} \, dx. \end{equation} $$
This has a finite value, but the integrand diverges as $x \to 0$. This may cause a problem for Monte Carlo integration when a single value may give a spuriously large contribution to the sum. We can get around this by changing the points at which the integrand is sampled.
Choose a weighting function $w(x)$. Then a weighted average of any function $g(x)$ can be
$$ \begin{equation} <g>_w = \frac{\int_a^b w(x) g(x) \, dx}{\int_a^b w(x) \, dx}. \end{equation} $$
As our integral is
$$ \begin{equation} I = \int_a^b f(x) \, dx \end{equation} $$
we can, by setting $g(x) = f(x) / w(x)$, get
$$ \begin{equation} I = \int_a^b f(x) \, dx = \left< \frac{f(x)}{w(x)} \right>_w \int_a^b w(x) \, dx. \end{equation} $$
This gives
$$ \begin{equation} I \simeq \frac{1}{N} \sum_{i=1}^N \frac{f(x_i)}{w(x_i)} \int_a^b w(x) \, dx, \end{equation} $$
where the points $x_i$ are now chosen from a non-uniform probability distribution with pdf
$$ \begin{equation} p(x) = \frac{w(x)}{\int_a^b w(x) \, dx}. \end{equation} $$
This is a generalization of the mean value method - we clearly recover the mean value method when the weighting function $w(x) \equiv 1$. A careful choice of the weighting function can mitigate problematic regions of the integrand; e.g., in the example above we could choose $w(x) = x^{-1/2}$, giving $p(x) = x^{-1/2}/2$.
In general, the hard part of the algorithm is going to be generating the samples from this non-uniform distribution. Here we have the advantage that $p$ is given by the numpy.random.power distribution. So, let's try to solve the integral above. We need $\int_0^1 w(x) \, dx = 2$. The expected solution is around 0.84.
In the general case, how do we generate the samples from the non-uniform probability distribution $p$? What really matters here is not the function $p$ from which we draw the random numbers x. What really matters is that the random numbers appear to follow the behaviour, the distribution $p$, that we want. This may seem like stating the same thing, but it's not. We can use a technique called rejection sampling to construct a set of numbers that follows a certain (cumulative) distribution without having to construct the pdf that it actually follows at all. To do this, we need to know the distribution we want (here $p(x) = 1/(2 \sqrt{x})$) and another distribution $q(x)$ that we can easily compute with a constant $K$ such that $p(x) \le K q(x)$. What we're doing here is just for illustration, as the power distribution $p(x) = a x^{a-1}$ is provided by numpy.random.power and perfectly matches the distribution we want for $a=1/2$. Here we're going to need some distribution that diverges faster than $p$ for small $x$, so we can choose the power distribution with $a=1/3$, provided, for example, $K = 1.6$: End of explanation """
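One possible completion of the rejection-sampling step (a sketch, not the lab's official solution): draw proposals from the power distribution with $a = 1/3$, i.e. $q(x) = x^{-2/3}/3$, and accept each with probability $p(x)/(K q(x))$ with $K = 1.6$, so the accepted values follow $p(x) = 1/(2\sqrt{x})$:

```python
import numpy

numpy.random.seed(42)

def sample_p(n, K=1.6):
    """Draw n samples with pdf p(x) = 1/(2 sqrt(x)) by rejection from q."""
    accepted = []
    while len(accepted) < n:
        x = numpy.random.power(1.0/3.0, n)    # proposals ~ q(x) = x**(-2/3)/3
        u = numpy.random.random(n)
        p = 1.0/(2.0*numpy.sqrt(x))
        q = 1.0/(3.0*x**(2.0/3.0))
        accepted.extend(x[u*K*q <= p])        # accept where u <= p/(K q)
    return numpy.array(accepted[:n])

xs = sample_p(100000)

# the CDF of p on [0, 1] is sqrt(x): check two quantiles empirically
assert abs(numpy.mean(xs <= 0.25) - 0.5) < 0.01
assert abs(numpy.mean(xs <= 0.81) - 0.9) < 0.01
```

Since $p/q = (3/2)\,x^{1/6} \le 3/2$ on $[0, 1]$, the envelope condition $p \le K q$ indeed holds for $K = 1.6$, and the expected acceptance rate is $1/K$.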
tpin3694/tpin3694.github.io
machine-learning/handling_missing_values_in_time_series.ipynb
mit
# Load libraries import pandas as pd import numpy as np """ Explanation: Title: Handling Missing Values In Time Series Slug: handling_missing_values_in_time_series Summary: How to handle the missing values in time series in pandas for machine learning in Python. Date: 2017-09-11 12:00 Category: Machine Learning Tags: Preprocessing Dates And Times Authors: Chris Albon Preliminaries End of explanation """ # Create date time_index = pd.date_range('01/01/2010', periods=5, freq='M') # Create data frame, set index df = pd.DataFrame(index=time_index) # Create feature with a gap of missing values df['Sales'] = [1.0,2.0,np.nan,np.nan,5.0] """ Explanation: Create Date Data With Gap In Values End of explanation """ # Interpolate missing values df.interpolate() """ Explanation: Interpolate Missing Values End of explanation """ # Forward-fill df.ffill() """ Explanation: Forward-fill Missing Values End of explanation """ # Back-fill df.bfill() """ Explanation: Backfill Missing Values End of explanation """ # Interpolate missing values df.interpolate(limit=1, limit_direction='forward') """ Explanation: Interpolate Missing Values But Only Up One Value End of explanation """
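For reference, the four calls above give the following concrete values on the same gap — a quick numeric recap of the Sales series built in this notebook:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, np.nan, 5.0])

assert s.interpolate().tolist() == [1.0, 2.0, 3.0, 4.0, 5.0]   # linear by default
assert s.ffill().tolist() == [1.0, 2.0, 2.0, 2.0, 5.0]         # repeat last known value
assert s.bfill().tolist() == [1.0, 2.0, 5.0, 5.0, 5.0]         # pull next known value back

limited = s.interpolate(limit=1, limit_direction='forward')
assert limited.iloc[2] == 3.0 and np.isnan(limited.iloc[3])    # only one NaN filled
```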
geektoni/shogun
doc/ipython-notebooks/distributions/KernelDensity.ipynb
bsd-3-clause
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
import os
import shogun as sg
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')

# generates samples from the distribution
def generate_samples(n_samples,mu1,sigma1,mu2,sigma2):
    samples1 = np.random.normal(mu1,sigma1,(1,int(n_samples/2)))
    samples2 = np.random.normal(mu2,sigma2,(1,int(n_samples/2)))
    samples = np.concatenate((samples1,samples2),1)
    return samples

# parameters of the distribution
mu1=4
sigma1=1
mu2=8
sigma2=2

# number of samples
n_samples = 200

samples=generate_samples(n_samples,mu1,sigma1,mu2,sigma2)

# pdf function for plotting
x = np.linspace(0,15,500)
y = 0.5*(stats.norm(mu1,sigma1).pdf(x)+stats.norm(mu2,sigma2).pdf(x))

# plot samples
plt.plot(samples[0,:],np.zeros(n_samples),'rx',label="Samples")

# plot actual pdf
plt.plot(x,y,'b--',label="Actual pdf")
plt.legend(numpoints=1)
plt.show()
"""
Explanation: Kernel Density Estimation
by Parijat Mazumdar (GitHub ID: <a href='https://github.com/mazumdarparijat'>mazumdarparijat</a>)
This notebook is on using the Shogun Machine Learning Toolbox for kernel density estimation (KDE). We start with a brief overview of KDE. Then we demonstrate the use of Shogun's $KernelDensity$ class on a toy example. Finally, we apply KDE to a real world example, thus demonstrating its prowess as a non-parametric statistical method.
Brief overview of Kernel Density Estimation
Kernel Density Estimation (KDE) is a non-parametric way of estimating the probability density function (pdf) of ANY distribution given a finite number of its samples. The pdf of a random variable X given finite samples ($x_i$s), as per the KDE formula, is given by:
$$pdf(x)=\frac{1}{nh} \Sigma_{i=1}^n K(\frac{||x-x_i||}{h})$$
In the above equation, K() is called the kernel - a symmetric function that integrates to 1. h is called the kernel bandwidth, which controls how smooth (or spread-out) the kernel is.
The most commonly used kernel is the normal distribution function. KDE is a computationally expensive method. Given $N_1$ query points (i.e. the points where we want to compute the pdf) and $N_2$ samples, computational complexity of KDE is $\mathcal{O}(N_1.N_2.D)$ where D is the dimension of the data. This computational load can be reduced by spatially segregating data points using data structures like KD-Tree and Ball-Tree. In single tree methods, only the sample points are structured in a tree whereas in dual tree methods both sample points and query points are structured in respective trees. Using these tree structures enables us to compute the density estimate for a bunch of points together at once thus reducing the number of required computations. This speed-up, however, results in reduced accuracy. Greater the speed-up, lower the accuracy. Therefore, in practice, the maximum amount of speed-up that can be afforded is usually controlled by error tolerance values. KDE on toy data Let us learn about KDE in Shogun by estimating a mixture of 2 one-dimensional gaussian distributions. $$pdf(x) = \frac{1}{2} [\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2)]$$ We start by plotting the actual distribution and generating the required samples (i.e. $x_i$s). End of explanation """ def get_kde_result(bandwidth,samples): # set model parameters kernel_type = sg.K_GAUSSIAN dist_metric = sg.D_EUCLIDEAN # other choice is D_MANHATTAN eval_mode = sg.EM_KDTREE_SINGLE # other choices are EM_BALLTREE_SINGLE, EM_KDTREE_DUAL and EM_BALLTREE_DUAL leaf_size = 1 # min number of samples to be present in leaves of the spatial tree abs_tol = 0 # absolute tolerance rel_tol = 0 # relative tolerance i.e. 
# accepted error as fraction of true density
    k=sg.KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol)

    # form Shogun features and train
    train_feats=sg.create_features(samples)
    k.train(train_feats)

    # get log density
    query_points = np.array([np.linspace(0,15,500)])
    query_feats = sg.create_features(query_points)
    log_pdf = k.get_log_density(query_feats)
    return query_points,log_pdf

query_points,log_pdf=get_kde_result(0.5,samples)
"""
Explanation: We have calculated the log of the pdf. Let us see how accurate it is by comparing it with the actual pdf.
End of explanation
"""
def plot_pdf(samples,query_points,log_pdf,title):
    plt.plot(samples,np.zeros((1,samples.size)),'rx')
    plt.plot(query_points[0,:],np.exp(log_pdf),'r',label="Estimated pdf")
    plt.plot(x,y,'b--',label="Actual pdf")
    plt.title(title)
    plt.legend()
    plt.show()

plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
"""
Explanation: We see that the estimated pdf resembles the actual pdf with reasonable accuracy.
This is a small demonstration of the fact that KDE can be used to estimate any arbitrary distribution given a finite number of its samples.
Effect of bandwidth
Kernel bandwidth is a very important controlling parameter of the kernel density estimate. We have already seen that for a bandwidth of 0.5, the estimated pdf almost coincides with the actual pdf. Let us see what happens when we decrease or increase the value of the kernel bandwidth, keeping the number of samples constant at 200.
End of explanation
"""
query_points,log_pdf=get_kde_result(0.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.1')

query_points,log_pdf=get_kde_result(0.2,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.2')

query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')

query_points,log_pdf=get_kde_result(1.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.1')

query_points,log_pdf=get_kde_result(1.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.5')
"""
Explanation: From the above plots, it can be inferred that the kernel bandwidth controls the extent of smoothness of the pdf function. A low value of the bandwidth parameter causes under-smoothing (which is the case with the first 2 plots from the top) and a high value causes over-smoothing (as is the case with the bottom 2 plots). The perfect value of the kernel bandwidth should be estimated using model-selection techniques, which is presently not supported by Shogun (to be updated soon).
Effect of number of samples
Here, we see the effect of the number of samples on the estimated pdf, fine-tuning the bandwidth in each case such that we get the most accurate pdf.
End of explanation
"""
samples=generate_samples(20,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.7,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=20, bandwidth=0.7')

samples=generate_samples(200,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')

samples=generate_samples(2000,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.4,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=2000, bandwidth=0.4')
"""
Explanation: Firstly, we see that the estimated pdf becomes more accurate with an increasing number of samples. By running the above snippet multiple times, we also notice that the variation in the shape of the estimated pdf, between 2 different runs of the above code snippet, is highest when the number of samples is 20 and lowest when the number of samples is 2000. Therefore, we can say that with an increase in the number of samples, the stability of the estimated pdf increases. Both results can be explained using the intuitive fact that a larger number of samples gives a better picture of the entire distribution. A formal proof of the same has been presented by L. Devroye in his book "Nonparametric Density Estimation: The $L_1$ View" [3]. It is theoretically proven that as the number of samples tends to $\infty$, the estimated pdf converges to the real pdf.
Classification using KDE In this section we see how KDE can be used for classification using a generative approach. Here, we try to classify the different varieties of Iris plant making use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of Iris plants: <ul><li>Iris Sensosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul> <br> The Iris dataset enlists 4 features that can be used to segregate these varieties, but for ease of analysis and visualization, we only use two of the most important features (ie. features with very high class correlations)[refer to <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>] namely <ul><li>petal length</li><li>petal width</li></ul> <br> As a first step, we plot the data. End of explanation """ import scipy.interpolate as interpolate def get_kde(samples): # set model parameters bandwidth = 0.4 kernel_type = sg.K_GAUSSIAN dist_metric = sg.D_EUCLIDEAN eval_mode = sg.EM_BALLTREE_DUAL leaf_size = 1 abs_tol = 0 rel_tol = 0 k=sg.KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol) # form Shogun features and train train_feats=sg.create_features(samples) k.train(train_feats) return k def density_estimate_grid(kdestimator): xmin,xmax,ymin,ymax=[0,8,-1,3] # Set up a regular grid of interpolation points x, y = np.linspace(xmin, xmax, 100), np.linspace(ymin, ymax, 100) x, y = np.meshgrid(x, y) # compute density estimate at each of the grid points query_feats=sg.create_features(np.array([x[0,:],y[0,:]])) z=np.array([kdestimator.get_log_density(query_feats)]) z=np.exp(z) for i in range(1,x.shape[0]): query_feats=sg.create_features(np.array([x[i,:],y[i,:]])) zi=np.exp(kdestimator.get_log_density(query_feats)) z=np.vstack((z,zi)) return (x,y,z) def plot_pdf(kdestimator,title): # compute interpolation points and corresponding kde values 
x,y,z=density_estimate_grid(kdestimator) # plot pdf plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()]) plt.title(title) plt.colorbar(shrink=0.5) plt.xlabel('petal length') plt.ylabel('petal width') plt.show() kde1=get_kde(obsmatrix[:,0:50]) plot_pdf(kde1,'pdf for Iris Sentosa') kde2=get_kde(obsmatrix[:,50:100]) plot_pdf(kde2,'pdf for Iris Versicolour') kde3=get_kde(obsmatrix[:,100:150]) plot_pdf(kde3,'pdf for Iris Virginica') kde=get_kde(obsmatrix[:,0:150]) plot_pdf(kde,'Combined pdf') """ Explanation: Next, let us use the samples to estimate the probability density functions of each category of plant. End of explanation """ # get 3 likelihoods for each test point in grid x,y,z1=density_estimate_grid(kde1) x,y,z2=density_estimate_grid(kde2) x,y,z3=density_estimate_grid(kde3) # classify using our decision rule z=[] for i in range(0,x.shape[0]): zj=[] for j in range(0,x.shape[1]): if ((z1[i,j]>z2[i,j]) and (z1[i,j]>z3[i,j])): zj.append(1) elif (z2[i,j]>z3[i,j]): zj.append(2) else: zj.append(0) z.append(zj) z=np.array(z) # plot results plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()]) plt.title("Classified regions") plt.xlabel('petal length') plt.ylabel('petal width') plot_samples(marker='x',plot_show=False) plt.show() """ Explanation: The above contour plots depict the pdf of the respective categories of Iris plant. These probability density functions can be used as generative models to estimate the likelihood of any test sample belonging to a particular category. We use these likelihoods for classification by forming a simple decision rule: a test sample is assigned the class for which its likelihood is maximum. With this in mind, let us try to segregate the entire 2-D space into 3 regions: <ul><li>Iris Sentosa (green)</li><li>Iris Versicolour (red)</li><li>Iris Virginica (blue)</li></ul> End of explanation """
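The maximum-likelihood decision rule used here does not depend on Shogun at all; below is a minimal NumPy-only version of the same idea in 1-D (the two class distributions, the bandwidth, and the query points are all invented for illustration):

```python
import numpy as np

def kde(samples, query, h=0.5):
    d = (query[:, None] - samples[None, :]) / h
    return (np.exp(-0.5 * d ** 2) / np.sqrt(2 * np.pi)).mean(axis=1) / h

rng = np.random.RandomState(1)
class_samples = [rng.normal(-2.0, 0.5, 100),   # class 0 clustered around -2
                 rng.normal(+2.0, 0.5, 100)]   # class 1 clustered around +2

query = np.array([-2.1, -0.2, 2.3])
likelihoods = np.vstack([kde(s, query) for s in class_samples])
labels = likelihoods.argmax(axis=0)   # assign the class with the largest density
print(labels)
```

The generative step (one KDE per class) and the decision step (argmax over class likelihoods) are the same two steps the Shogun code performs on the Iris data.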
amirziai/learning
deep-learning/fully-convolutional-networks.ipynb
mit
import numpy as np import tensorflow as tf from collections import Counter """ Explanation: Fully Convolutional Networks (FCN) Notes from Udacity's Self-Driving Car Nanodegree - Encoder extracts features that the decoder uses later Pieces: - Pre-train encoder on VGG/ResNet - Do a 1x1 convolution - Transposed convolutions to upsample Skip connections are added. If VGG is used then only 3rd and 4th pooling layers are used as skip connections. Too many skip connections can lead to an explosion of the model size. End of explanation """ # custom init with the seed set to 0 by default def custom_init(shape, dtype=tf.float32, partition_info=None, seed=0): return tf.random_normal(shape, dtype=dtype, seed=seed) # TODO: Use `tf.layers.conv2d` to reproduce the result of `tf.layers.dense`. # Set the `kernel_size` and `stride`. def conv_1x1(x, num_outputs): kernel_size = 1 stride = 1 return tf.layers.conv2d(x, num_outputs, kernel_size, stride, kernel_initializer=custom_init) num_outputs = 2 x = tf.constant(np.random.randn(1, 2, 2, 1), dtype=tf.float32) # `tf.layers.dense` flattens the input tensor if the rank > 2 and reshapes it back to the original rank # as the output. dense_out = tf.layers.dense(x, num_outputs, kernel_initializer=custom_init) conv_out = conv_1x1(x, num_outputs) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) a = sess.run(dense_out) b = sess.run(conv_out) print("Dense Output =", a) print("Conv 1x1 Output =", b) print("Same output? =", np.allclose(a, b, atol=1.e-5)) a.shape b.shape """ Explanation: 1- Replace Fully Connected (FC) with 1x1 convolutions End of explanation """ def upsample(x): """ Apply a two times upsample on x and return the result.
:x: 4-Rank Tensor :return: TF Operation """ # TODO: Use `tf.layers.conv2d_transpose` return tf.layers.conv2d_transpose(x, x.shape[3], kernel_size=(3, 3), strides=2, padding='SAME') x = tf.constant(np.random.randn(1, 4, 4, 3), dtype=tf.float32) conv = upsample(x) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) result = sess.run(conv) print('Input Shape: {}'.format(x.get_shape())) print('Output Shape: {}'.format(result.shape)) """ Explanation: 2- Upsampling through transposed convolution Reverse convolution in which forward and backward passes are swapped, aka deconvolution Differentiability retained and training exactly the same as before https://dspguru.com/dsp/faqs/multirate/interpolation/ https://github.com/vdumoulin/conv_arithmetic <img src="https://d17h27t6h515a5.cloudfront.net/topher/2017/October/59d8670c_transposed-conv/transposed-conv.png"> End of explanation """ truth = np.array( [[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3] ] ) prediction = np.array([ [0, 0, 0, 0], [1, 0, 0, 1], [1, 2, 2, 1], [3, 3, 0, 3] ]) def iou1(truth, pred): t = truth + 1 p = pred + 1 classes = np.unique(t) a = ((t == p) * t).flatten() tp = Counter(a[a > 0]) b = ((t != p) * t).flatten() fn = Counter(b[b > 0]) c = ((t != p) * p).flatten() fp = Counter(c[c > 0]) # default to 0 true positives so a fully misclassified class cannot produce a None division ious = { class_: tp.get(class_, 0) / count for class_, count in (tp + fp + fn).items() } print(ious) return sum(ious.values()) / len(ious) iou1(truth, prediction) """ Explanation: 3- Skip connection Retain information Use info from multiple resolutions Semantic Segmentation Bounding boxes for object detection, easier than segmentation YOLO and SSD which work well: High frames per second (FPS) Can detect cars, people, traffic signs, etc Semantic segmentation Pixel level Scene understanding Multiple decoders for different tasks (e.g.
segmentation, depth) Intersection over Union (IoU) Intersection => TP Union => classified T (TP + FP) + actually T (TP + FN) TensorFlow Implementation End of explanation """ def mean_iou(ground_truth, prediction, num_classes): # TODO: Use `tf.metrics.mean_iou` to compute the mean IoU. iou, iou_op = tf.metrics.mean_iou(ground_truth, prediction, num_classes) return iou, iou_op ground_truth = tf.constant([ [0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]], dtype=tf.float32) prediction = tf.constant([ [0, 0, 0, 0], [1, 0, 0, 1], [1, 2, 2, 1], [3, 3, 0, 3]], dtype=tf.float32) # TODO: use `mean_iou` to compute the mean IoU iou, iou_op = mean_iou(ground_truth, prediction, 4) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # need to initialize local variables for this to run `tf.metrics.mean_iou` sess.run(tf.local_variables_initializer()) sess.run(iou_op) # should be 0.53869 print("Mean IoU =", sess.run(iou)) """ Explanation: Tensorflow implementation End of explanation """
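The TP/FP/FN shorthand above can be made concrete with plain NumPy; re-computing the same 4x4 example by hand (our own code, not part of the notebook) reproduces the quoted mean IoU of about 0.53869:

```python
import numpy as np

truth = np.array([[0, 0, 0, 0],
                  [1, 1, 1, 1],
                  [2, 2, 2, 2],
                  [3, 3, 3, 3]])
pred = np.array([[0, 0, 0, 0],
                 [1, 0, 0, 1],
                 [1, 2, 2, 1],
                 [3, 3, 0, 3]])

ious = []
for c in range(4):
    tp = np.sum((truth == c) & (pred == c))   # intersection
    fp = np.sum((truth != c) & (pred == c))   # predicted c but wrong
    fn = np.sum((truth == c) & (pred != c))   # true c that was missed
    ious.append(tp / (tp + fp + fn))          # union = tp + fp + fn

mean_iou = np.mean(ious)
print(round(mean_iou, 5))
```

Per class, the intersection is the TP count and the union is TP + FP + FN, matching the shorthand in the notes.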
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
tutorial_part1/IIR Filter Design and C Headers.ipynb
bsd-2-clause
fs = 48000 f_pass = 5000 f_stop = 8000 b_but,a_but,sos_but = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'butter') b_cheb1,a_cheb1,sos_cheb1 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby1') b_cheb2,a_cheb2,sos_cheb2 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby2') b_elli,a_elli,sos_elli = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'ellip') """ Explanation: IIR Filter Design Floating point IIR filters as a cascade of second-order biquadratic sections is the objective here. We will also need a means to export the filter coefficients to header files for use in embedded systems designs. Header export functions for float32_t cascade of second-order sections (SOS) format are provided in the module coeff2header.py. The next step is to actually design some filters using functions found in scipy.signal. Specifically signal.iirdesign() serves as a design function for IIR filters from amplitude response requirements. The code module iir_design_helper.py contains lowpass, highpass, bandpass, and bandstop wrapper functions over iirdesign() to make the design process consistent with the FIR design process established in the module fir_design_helper.py. Understand also that iirdesign() itself wraps other lower level IIR filter design functions found in scipy.signal. In the end the key functions to be used for the design of IIR digital filters from classical analog prototypes have their doc strings shown below: ```python def IIR_lpf(f_pass, f_stop, Ripple_pass, Atten_stop, fs = 1.00, ftype = 'butter'): """ Design an IIR lowpass filter using scipy.signal.iirdesign. The filter order is determined based on f_pass Hz, f_stop Hz, and the desired stopband attenuation d_stop in dB, all relative to a sampling rate of fs Hz. 
Mark Wickert October 2016 """ b,a = signal.iirdesign(2*float(f_pass)/fs, 2*float(f_stop)/fs, Ripple_pass, Atten_stop, ftype = ftype, output='ba') sos = signal.iirdesign(2*float(f_pass)/fs, 2*float(f_stop)/fs, Ripple_pass, Atten_stop, ftype = ftype, output='sos') tag = 'IIR ' + ftype + ' order' print('%s = %d.' % (tag,len(a)-1)) return b, a, sos ``` IIR Design Based on the Bilinear Transformation There are multiple ways of designing IIR filters based on amplitude response requirements. When the desire is to have the filter approximation follow an analog prototype such as Butterworth, Chebyshev, etc., the standard approach is the bilinear transformation. The function signal.iirdesign() described above does exactly this. In the example below we consider lowpass amplitude response requirements and see how the filter order changes when we choose different analog prototypes. Example: Lowpass Design Comparison The lowpass amplitude response requirements given $f_s = 48$ kHz are: 1. $f_\text{pass} = 5$ kHz 2. $f_\text{stop} = 8$ kHz 3. Passband ripple of 0.5 dB 4. Stopband attenuation of 60 dB End of explanation """
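To make the bilinear transformation itself tangible, here is a hand-worked first-order example (the RC-style analog prototype and cutoff are our own illustrative choices, and prewarping is deliberately omitted to keep the algebra short; the notebook's designs use higher-order prototypes via iirdesign). Substituting $s = (2/T)(1 - z^{-1})/(1 + z^{-1})$ into $H(s) = \omega_c/(s + \omega_c)$ yields a digital filter with unity gain at DC and a null at Nyquist:

```python
import numpy as np

fs = 48000.0
fc = 5000.0
T = 1.0 / fs
wc = 2 * np.pi * fc          # analog cutoff in rad/s (no prewarping, for simplicity)
k = 2.0 / T

# H(s) = wc / (s + wc)  with  s -> (2/T)(1 - z^-1)/(1 + z^-1)
b = np.array([wc, wc]) / (k + wc)            # numerator: wc (1 + z^-1)
a = np.array([1.0, (wc - k) / (k + wc)])     # denominator

def gain(z):
    return abs((b[0] + b[1] / z) / (a[0] + a[1] / z))

print(gain(1.0))    # DC, z = e^{j0} = 1
print(gain(-1.0))   # Nyquist, z = e^{j*pi} = -1
```

The same substitution, applied section by section, is what scipy.signal performs internally when it maps the analog Butterworth, Chebyshev, or elliptic prototypes to the digital designs compared below.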
End of explanation """ iir_d.sos_zplane(sos_but) """ Explanation: Pole-zero Plot of Butterworth Design End of explanation """ # Write a C header file c2h.IIR_sos_header('IIR_elliptic_lpf_5_8.h',sos_elli) """ Explanation: Exporting Coefficients to Header Files Once a filter design is complete it can be exported as a C header file using IIR_sos_header() for floating-point cascade of second-order sections design. This function is available in coeff2header.py. End of explanation """ # Export the Chebyshev 1 Design c2h.IIR_sos_header('IIR_cheby1_lpf_5_8.h',sos_cheb1) """ Explanation: The header file, remez_8_14_bpf_f32.h written above takes the form: ```c //define a IIR SOS CMSIS-DSP coefficient array include <stdint.h> ifndef STAGES define STAGES 3 endif /********/ / IIR SOS Filter Coefficients / float32_t ba_coeff[15] = { //b0,b1,b2,a1,a2,... by stage +3.405534e-03, +2.879634e-03, +3.405534e-03, +1.555966e+00, -6.373943e-01, +1.000000e+00, -8.884023e-01, +1.000000e+00, +1.525341e+00, -7.895790e-01, +1.000000e+00, -1.232239e+00, +1.000000e+00, +1.530379e+00, -9.375340e-01 }; /********/ ``` As in the FIR design notebook, the design can be run on an ARM Cortex M4 micro controller using the Cypress FM4 $50 dev kit End of explanation """ f_AD,Mag_AD, Phase_AD = loadtxt('IIR_elliptic_5_8_48k.csv', delimiter=',',skiprows=6,unpack=True) iir_d.freqz_resp_cas_list([sos_elli],'dB',fs=48) ylim([-80,5]) plot(f_AD/1e3,Mag_AD) title(r'Third-Order Elliptic Lowpass: $f_{pass}=5$kHz and $f_{stop}=8$kHz') ylabel(r'Filter Gain (dB)') xlabel(r'Frequency in kHz') legend((r'Theory',r'AD Measured'),loc='upper right') grid(); """ Explanation: Comparing with Measured Data Here we use the Analog Discovery as a means to capture measured results with the original design. 
The Elliptic Design End of explanation """ f_AD,Mag_AD, Phase_AD = loadtxt('IIR_cheby1_5_8_48k.csv', delimiter=',',skiprows=6,unpack=True) iir_d.freqz_resp_cas_list([sos_cheb1],'dB',fs=48) ylim([-80,5]) plot(f_AD/1e3,Mag_AD) title(r'Fourth-Order Cheby1 Lowpass: $f_{pass}=5$kHz and $f_{stop}=8$kHz') ylabel(r'Filter Gain (dB)') xlabel(r'Frequency in kHz') legend((r'Theory',r'AD Measured'),loc='upper right') grid(); """ Explanation: The Chebyshev Type 1 Design End of explanation """ Image('images/IIR_LPF_Design.png',width='100%') """ Explanation: IIR Design Problem In this first problem you will implement a specific IIR design to meet certain amplitude response requirements. The filter topology will be a cascade of second-order sections that follows from the example given earlier in the this notebook. Design your filter to meet specific amplitude response requirements given in Figure 16, using an elliptic lowpass filter prototype. End of explanation """
nansencenter/nansat-lectures
notebooks/12 Nansat Use Case 03.ipynb
gpl-3.0
# download sample files !wget -P data -nc ftp://ftp.nersc.no/nansat/test_data/obpg_l2/A2015121113500.L2_LAC.NorthNorwegianSeas.hdf !wget -P data -nc ftp://ftp.nersc.no/nansat/test_data/obpg_l2/A2015122122000.L2_LAC.NorthNorwegianSeas.hdf import numpy as np import matplotlib.pyplot as plt from IPython.display import Image %matplotlib inline from nansat import * """ Explanation: Use Case 3: Colocate different data End of explanation """ n1 = Nansat('data/A2015121113500.L2_LAC.NorthNorwegianSeas.hdf') chlor_a1 = n1['chlor_a'] n2 = Nansat('data/A2015122122000.L2_LAC.NorthNorwegianSeas.hdf') chlor_a2 = n2['chlor_a'] """ Explanation: Open MODIS/Aqua files with chlorophyll in the North Sea and fetch data End of explanation """ plt.figure(figsize=(5,5)) plt.subplot(121) plt.imshow(chlor_a1, vmin=0, vmax=3) plt.subplot(122) plt.imshow(chlor_a2, vmin=0, vmax=3) plt.show() """ Explanation: Plot chlorophyll-a maps in swath projection End of explanation """ # define domain in longlat projection d = Domain('+proj=stere +lat_0=58 +lon_0=5 +no_defs', '-te -300000 -300000 300000 300000 -tr 3000 3000') # reproject first image and get matrix with reprojected chlorophyll n1.reproject(d) chlor_a1 = n1['chlor_a'] # reproject second image and get matrix with reprojected chlorophyll n2.reproject(d) chlor_a2 = n2['chlor_a'] # get mask of land and set values of land pixels to NAN (not-a-number) mask1 = n1.watermask()[1] chlor_a1[mask1 == 2] = np.nan chlor_a2[mask1 == 2] = np.nan # prepare landmask for plotting: land pixels=1, water pixels=NaN landmask = 1 - mask1.astype(float) landmask[landmask == 0] = np.nan plt.figure(figsize=(10,10)) plt.subplot(121) plt.imshow(chlor_a1, vmin=0, vmax=5) plt.imshow(landmask, cmap='gray') plt.subplot(122) plt.imshow(chlor_a2, vmin=0, vmax=5) plt.imshow(landmask, cmap='gray') plt.show() # replace negative values (clouds) by NAN chlor_a1[chlor_a1 < 0] = np.nan chlor_a2[chlor_a2 < 0] = np.nan # find difference chlor_diff = chlor_a2 - chlor_a1 # plot 
plt.figure(figsize=(5,5)) plt.imshow(chlor_diff, vmin=-0.1, vmax=2);plt.colorbar() plt.imshow(landmask, cmap='gray') plt.show() # get transect - vector of data from 2D matrix from known locations points = [[200, 75], [150, 150]] t1 = n1.get_transect(points, ['chlor_a'], lonlat=False) chl1 = t1['chlor_a'] lon1 = t1['lon'] lat1 = t1['lat'] t2 = n2.get_transect(points, ['chlor_a'], lonlat=False) chl2 = t2['chlor_a'] # replace negative values with NAN chl1 = np.array(chl1) chl2 = np.array(chl2) chl1[(chl1 < 0) + (chl1 > 5)] = np.nan chl2[(chl2 < 0) + (chl2 > 5)] = np.nan print (n1.time_coverage_start) # plot plt.plot(lon1, chl1, '-', label=n1.time_coverage_start) plt.plot(lon1, chl2, '-', label=n2.time_coverage_start) plt.legend() plt.xlabel('longitude') plt.ylabel('chlorphyll-a') plt.show() """ Explanation: Colocate data. Reproject both images onto the same Domain. End of explanation """
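The colocate-then-difference idea can be illustrated without Nansat or real satellite data. In the sketch below (all coordinates, field values, and the brute-force nearest-neighbor resampler are synthetic stand-ins for reprojection onto a common Domain), two data sets sampled at different locations are put on one grid and subtracted:

```python
import numpy as np

rng = np.random.RandomState(2)

def field_a(x, y): return x + y          # synthetic "day 1" field
def field_b(x, y): return x + y + 1.0    # synthetic "day 2": shifted by +1 everywhere

# two "swaths" sampled at different random locations in the unit square
pts_a = rng.rand(400, 2)
pts_b = rng.rand(400, 2)
val_a = field_a(pts_a[:, 0], pts_a[:, 1])
val_b = field_b(pts_b[:, 0], pts_b[:, 1])

# common target grid (the shared "Domain")
gx, gy = np.meshgrid(np.linspace(0.2, 0.8, 5), np.linspace(0.2, 0.8, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def regrid(points, values, grid):
    # nearest-neighbor resampling; O(N*M) is fine for a toy example
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return values[d2.argmin(axis=1)]

diff = regrid(pts_b, val_b, grid) - regrid(pts_a, val_a, grid)
print(diff.mean())   # should sit near the true offset of 1.0
```

Once both fields live on the same grid, differencing, masking, and transects all become plain array operations, which is exactly what the Nansat workflow above enables for real swath data.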
sebp/scikit-survival
doc/user_guide/boosting.ipynb
gpl-3.0
import numpy as np import matplotlib.pyplot as plt import pandas as pd %matplotlib inline from sklearn.model_selection import train_test_split from sksurv.datasets import load_breast_cancer from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis from sksurv.ensemble import GradientBoostingSurvivalAnalysis from sksurv.preprocessing import OneHotEncoder """ Explanation: Gradient Boosted Models Gradient Boosting does not refer to one particular model, but a versatile framework to optimize many loss functions. It follows the strength in numbers principle by combining the predictions of multiple base learners to obtain a powerful overall model. The base learners are often very simple models that are only slightly better than random guessing, which is why they are also referred to as weak learners. The predictions are combined in an additive manner, where the addition of each base model improves (or "boosts") the overall model. Therefore, the overall model $f$ is an additive model of the form: $$ \begin{equation} f(\mathbf{x}) = \sum_{m=1}^M \beta_m g(\mathbf{x}; {\theta}_m), \end{equation} $$ where $M > 0$ denotes the number of base learners, and $\beta_m \in \mathbb{R}$ is a weighting term. The function $g$ refers to a base learner and is parameterized by the vector ${\theta}$. Individual base learners differ in the configuration of their parameters ${\theta}$, which is indicated by a subscript $m$. A gradient boosted model is similar to a Random Survival Forest, in the sense that it relies on multiple base learners to produce an overall prediction, but differs in how those are combined. While a Random Survival Forest fits a set of Survival Trees independently and then averages their predictions, a gradient boosted model is constructed sequentially in a greedy stagewise fashion. Base Learners Depending on the loss function to be minimized and base learner used, different models arise. 
sksurv.ensemble.GradientBoostingSurvivalAnalysis implements gradient boosting with regression tree base learner, and sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis uses component-wise least squares as base learner. The former is very versatile and can account for complicated non-linear relationships between features and time to survival. When using component-wise least squares as base learner, the final model will be a linear model, but only a small subset of features will be selected, similar to the LASSO penalized Cox model. Losses Cox's Partial Likelihood The loss function can be specified via the loss argument; the default loss function is the partial likelihood loss of Cox's proportional hazards model (coxph). Therefore, the objective is to maximize the log partial likelihood function, but replacing the traditional linear model $\mathbf{x}^\top \beta$ with the additive model $f(\mathbf{x})$: $$ \begin{equation} \arg \max_{f} \quad \sum_{i=1}^n \delta_i \left[ f(\mathbf{x}_i) - \log \left( \sum_{j \in \mathcal{R}_i} \exp(f(\mathbf{x}_j)) \right) \right] . \end{equation} $$ End of explanation """
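The partial-likelihood objective above is straightforward to evaluate directly. This NumPy sketch (synthetic times, event indicators, and scores, not the breast cancer data used below) computes the bracketed summand over the events and confirms that scores which rank early events as higher risk achieve a larger value:

```python
import numpy as np

def cox_partial_loglik(time, event, f):
    # sum over events of f_i - log(sum over the risk set of exp(f_j))
    total = 0.0
    for i in range(len(time)):
        if event[i]:
            risk_set = time >= time[i]   # subjects still at risk at t_i
            total += f[i] - np.log(np.exp(f[risk_set]).sum())
    return total

time = np.array([1.0, 2.0, 3.0, 4.0])
event = np.array([True, True, False, True])
good_f = np.array([2.0, 1.0, 0.0, -1.0])   # earlier events get higher risk scores
bad_f = -good_f

print(cox_partial_loglik(time, event, good_f))
print(cox_partial_loglik(time, event, bad_f))
```

Boosting with the coxph loss searches for the additive model $f$ that drives this quantity up, one base learner at a time.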
End of explanation """ est_cph_tree = GradientBoostingSurvivalAnalysis( n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0 ) est_cph_tree.fit(X_train, y_train) cindex = est_cph_tree.score(X_test, y_test) print(round(cindex, 3)) """ Explanation: Next, we are using gradient boosting on Cox's partial likelihood with regression trees base learners, which we restrict to using only a single split (so-called stumps). End of explanation """ scores_cph_tree = {} est_cph_tree = GradientBoostingSurvivalAnalysis( learning_rate=1.0, max_depth=1, random_state=0 ) for i in range(1, 31): n_estimators = i * 5 est_cph_tree.set_params(n_estimators=n_estimators) est_cph_tree.fit(X_train, y_train) scores_cph_tree[n_estimators] = est_cph_tree.score(X_test, y_test) x, y = zip(*scores_cph_tree.items()) plt.plot(x, y) plt.xlabel("n_estimator") plt.ylabel("concordance index") plt.grid(True) """ Explanation: This model achieves a concordance index of 0.756 on the test data. Let's see how the test performance changes with the ensemble size (n_estimators). End of explanation """ scores_cph_ls = {} est_cph_ls = ComponentwiseGradientBoostingSurvivalAnalysis( learning_rate=1.0, random_state=0 ) for i in range(1, 31): n_estimators = i * 10 est_cph_ls.set_params(n_estimators=n_estimators) est_cph_ls.fit(X_train, y_train) scores_cph_ls[n_estimators] = est_cph_ls.score(X_test, y_test) x, y = zip(*scores_cph_ls.items()) plt.plot(x, y) plt.xlabel("n_estimator") plt.ylabel("concordance index") plt.grid(True) """ Explanation: We can see that the performance quickly improves, but also that the performance starts to decrease if the ensemble becomes too big. Let's repeat the analysis using component-wise least squares base learners. 
End of explanation """ coef = pd.Series(est_cph_ls.coef_, ["Intercept"] + Xt.columns.tolist()) print("Number of non-zero coefficients:", (coef != 0).sum()) coef_nz = coef[coef != 0] coef_order = coef_nz.abs().sort_values(ascending=False).index coef_nz.loc[coef_order] """ Explanation: The performance increase is much slower here and its maximum performance seems to be below that of the ensemble of tree-based learners. This is not surprising, because with component-wise least squares base learners the overall ensemble is a linear model, whereas with tree-based learners it will be a non-linear model. The coefficients of the model can be retrieved as follows: End of explanation """ est_aft_ls = ComponentwiseGradientBoostingSurvivalAnalysis( loss="ipcwls", n_estimators=300, learning_rate=1.0, random_state=0 ).fit(X_train, y_train) cindex = est_aft_ls.score(X_test, y_test) print(round(cindex, 3)) """ Explanation: Despite using hundreds of iterations, the resulting model is very parsimonious and easy to interpret. Accelerated Failure Time Model The Accelerated Failure Time (AFT) model is an alternative to Cox's proportional hazards model. The latter assumes that features only influence the hazard function via a constant multiplicative factor. In contrast, features in an AFT model can accelerate or decelerate the time to an event by a constant factor. The figure below depicts the predicted hazard functions of a proportional hazards model in blue and that of an AFT model in orange. We can see that the hazard remains constant for the proportional hazards model and varies for the AFT model. The objective function in an AFT model can be expressed as a weighted least squares problem with respect to the logarithm of the survival time: $$ \begin{equation} \arg \min_{f} \quad \frac{1}{n} \sum_{i=1}^n \omega_i (\log y_i - f(\mathbf{x}_i)) . 
\end{equation} $$ The weight $\omega_i$ associated with the $i$-th sample is the inverse probability of being censored after time $y_i$: $$ \begin{equation} \omega_i = \frac{\delta_i}{\hat{G}(y_i)} , \end{equation} $$ where $\hat{G}(\cdot)$ is an estimator of the censoring survivor function. Such a model can be fit with sksurv.ensemble.GradientBoostingSurvivalAnalysis or sksurv.ensemble.ComponentwiseGradientBoostingSurvivalAnalysis by specifying the loss="ipcwls" argument. End of explanation """ n_estimators = [i * 5 for i in range(1, 21)] estimators = { "no regularization": GradientBoostingSurvivalAnalysis( learning_rate=1.0, max_depth=1, random_state=0 ), "learning rate": GradientBoostingSurvivalAnalysis( learning_rate=0.1, max_depth=1, random_state=0 ), "dropout": GradientBoostingSurvivalAnalysis( learning_rate=1.0, dropout_rate=0.1, max_depth=1, random_state=0 ), "subsample": GradientBoostingSurvivalAnalysis( learning_rate=1.0, subsample=0.5, max_depth=1, random_state=0 ), } scores_reg = {k: [] for k in estimators.keys()} for n in n_estimators: for name, est in estimators.items(): est.set_params(n_estimators=n) est.fit(X_train, y_train) cindex = est.score(X_test, y_test) scores_reg[name].append(cindex) scores_reg = pd.DataFrame(scores_reg, index=n_estimators) ax = scores_reg.plot(xlabel="n_estimators", ylabel="concordance index") ax.grid(True) """ Explanation: Regularization The most important parameter in gradient boosting is the number of base learner to use (n_estimators argument). A higher number will lead to a more complex model. However, this can easily lead to overfitting on the training data. The easiest way would be to just use less base estimators, but there are three alternatives to combat overfitting: Use a learning_rate less than 1 to restrict the influence of individual base learners, similar to the Ridge penalty. Use a non-zero dropout_rate, which forces base learners to also account for some of the previously fitted base learners to be missing. 
Use subsample less than 1 such that each iteration only a portion of the training data is used. This is also known as stochastic gradient boosting. End of explanation """ class EarlyStoppingMonitor: def __init__(self, window_size, max_iter_without_improvement): self.window_size = window_size self.max_iter_without_improvement = max_iter_without_improvement self._best_step = -1 def __call__(self, iteration, estimator, args): # continue training for first self.window_size iterations if iteration < self.window_size: return False # compute average improvement in last self.window_size iterations. # oob_improvement_ is the different in negative log partial likelihood # between the previous and current iteration. start = iteration - self.window_size + 1 end = iteration + 1 improvement = np.mean(estimator.oob_improvement_[start:end]) if improvement > 1e-6: self._best_step = iteration return False # continue fitting # stop fitting if there was no improvement # in last max_iter_without_improvement iterations diff = iteration - self._best_step return diff >= self.max_iter_without_improvement est_early_stopping = GradientBoostingSurvivalAnalysis( n_estimators=1000, learning_rate=0.05, subsample=0.5, max_depth=1, random_state=0 ) monitor = EarlyStoppingMonitor(25, 50) est_early_stopping.fit(X_train, y_train, monitor=monitor) print("Fitted base learners:", est_early_stopping.n_estimators_) cindex = est_early_stopping.score(X_test, y_test) print("Performance on test set", round(cindex, 3)) """ Explanation: The plot reveals that using dropout or a learning rate are most effective in avoiding overfitting. Moreover, the learning rate and ensemble size are strongly connected, choosing smaller a learning rate suggests increasing n_estimators. Therefore, it is recommended to use a relatively small learning rate and select the number of estimators via early stopping. Note that we can also apply multiple types of regularization, such as regularization by learning rate and subsampling. 
Since not all training data is used, this allows using the left-out data to evaluate whether we should continue adding more base learners or stop training. End of explanation """ improvement = pd.Series( est_early_stopping.oob_improvement_, index=np.arange(1, 1 + len(est_early_stopping.oob_improvement_)) ) ax = improvement.plot(xlabel="iteration", ylabel="oob improvement") ax.axhline(0.0, linestyle="--", color="gray") cutoff = len(improvement) - monitor.max_iter_without_improvement ax.axvline(cutoff, linestyle="--", color="C3") _ = improvement.rolling(monitor.window_size).mean().plot(ax=ax, linestyle=":") """ Explanation: The monitor looks at the average improvement of the last 25 iterations, and if it was negative for the last 50 iterations, it will abort training. In this case, this occurred after 119 iterations. We can plot the improvement per base learner and the moving average. End of explanation """
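Stripped of the estimator plumbing, the monitor's rule reduces to a moving average with patience. The standalone function below is our own simplification (a smaller window and patience than the 25/50 used above, and a synthetic improvement sequence), but it follows the same best-step bookkeeping:

```python
def stop_iteration(improvements, window=5, patience=10, eps=1e-6):
    # return the iteration at which training would be aborted, or None
    best_step = -1
    for i in range(len(improvements)):
        if i < window:
            continue  # not enough history yet
        if sum(improvements[i - window + 1:i + 1]) / window > eps:
            best_step = i
        elif i - best_step >= patience:
            return i
    return None

improvements = [1.0] * 10 + [0.0] * 100   # progress stalls after 10 iterations
print(stop_iteration(improvements))
```

With a window of 5, the last window containing any positive improvement ends at iteration 13, so the patience of 10 runs out at iteration 23.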
phobson/statsmodels
examples/notebooks/regression_diagnostics.ipynb
bsd-3-clause
%matplotlib inline from __future__ import print_function from statsmodels.compat import lzip import statsmodels import numpy as np import pandas as pd import statsmodels.formula.api as smf import statsmodels.stats.api as sms import matplotlib.pyplot as plt # Load data url = 'http://vincentarelbundock.github.io/Rdatasets/csv/HistData/Guerry.csv' dat = pd.read_csv(url) # Fit regression model (using the natural log of one of the regressors) results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=dat).fit() # Inspect the results print(results.summary()) """ Explanation: Regression diagnostics This example file shows how to use a few of the statsmodels regression diagnostic tests in a real-life context. You can learn about more tests and find out more information about the tests here on the Regression Diagnostics page. Note that most of the tests described here only return a tuple of numbers, without any annotation. A full description of outputs is always included in the docstring and in the online statsmodels documentation. For presentation purposes, we use the zip(name,test) construct to pretty-print short descriptions in the examples below. Estimate a regression model End of explanation """ name = ['Jarque-Bera', 'Chi^2 two-tail prob.', 'Skew', 'Kurtosis'] test = sms.jarque_bera(results.resid) lzip(name, test) """ Explanation: Normality of the residuals Jarque-Bera test: End of explanation """ name = ['Chi^2', 'Two-tail probability'] test = sms.omni_normtest(results.resid) lzip(name, test) """ Explanation: Omni test: End of explanation """
from statsmodels.stats.outliers_influence import OLSInfluence test_class = OLSInfluence(results) test_class.dfbetas[:5,:] """ Explanation: Influence tests Once created, an object of class OLSInfluence holds attributes and methods that allow users to assess the influence of each observation. For example, we can compute and extract the first few rows of DFbetas by: End of explanation """ from statsmodels.graphics.regressionplots import plot_leverage_resid2 fig, ax = plt.subplots(figsize=(8,6)) fig = plot_leverage_resid2(results, ax = ax) """ Explanation: Explore other options by typing dir(test_class) Useful information on leverage can also be plotted: End of explanation """ np.linalg.cond(results.model.exog) """ Explanation: Other plotting options can be found on the Graphics page. Multicollinearity Condition number: End of explanation """ name = ['Lagrange multiplier statistic', 'p-value', 'f-value', 'f p-value'] test = sms.het_breushpagan(results.resid, results.model.exog) lzip(name, test) """ Explanation: Heteroskedasticity tests Breusch-Pagan test: End of explanation """ name = ['F statistic', 'p-value'] test = sms.het_goldfeldquandt(results.resid, results.model.exog) lzip(name, test) """ Explanation: Goldfeld-Quandt test End of explanation """ name = ['t value', 'p value'] test = sms.linear_harvey_collier(results) lzip(name, test) """ Explanation: Linearity Harvey-Collier multiplier test for the null hypothesis that the linear specification is correct: End of explanation """
mne-tools/mne-tools.github.io
0.14/_downloads/plot_read_and_write_raw_data.ipynb
bsd-3-clause
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # # License: BSD (3-clause) import mne from mne.datasets import sample print(__doc__) data_path = sample.data_path() fname = data_path + '/MEG/sample/sample_audvis_raw.fif' raw = mne.io.read_raw_fif(fname) # Set up pick list: MEG + STI 014 - bad channels want_meg = True want_eeg = False want_stim = False include = ['STI 014'] raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bad channels + 2 more picks = mne.pick_types(raw.info, meg=want_meg, eeg=want_eeg, stim=want_stim, include=include, exclude='bads') some_picks = picks[:5] # take 5 first start, stop = raw.time_as_index([0, 15]) # read the first 15s of data data, times = raw[some_picks, start:(stop + 1)] # save 150s of MEG data in FIF file raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks, overwrite=True) """ Explanation: Reading and writing raw files In this example, we read a raw file. Plot a segment of MEG data restricted to MEG channels. And save these data in a new raw file. End of explanation """ raw.plot() """ Explanation: Show MEG data End of explanation """
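The raw.time_as_index call above converts seconds into sample indices; conceptually it is little more than rounding by the sampling frequency. A minimal stand-in (a generic sfreq of 100 Hz, not the sample recording's actual rate, and ignoring any first-sample offset MNE may apply):

```python
import numpy as np

def time_as_index(times, sfreq):
    # convert times in seconds to sample indices at the given sampling rate
    return np.round(np.atleast_1d(times) * sfreq).astype(int)

sfreq = 100.0
start, stop = time_as_index([0, 15], sfreq)
print(start, stop)   # then slice channel data as data[:, start:stop + 1]
```

This also explains the `start:(stop + 1)` slice in the snippet above: the stop index itself must be included to cover the full 15 s.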
VokhmintcevKirill/ti-nic_competition2
Titanic_3.0.ipynb
mit
train.info() test.info() """ Explanation: While solving Titanic_2.0 we obtained a baseline of 0.76077; let's explore the data in depth and try to improve on that. There are only 2 continuous features, the rest are discrete. It may be possible to tune the feature encoding. End of explanation """ train.Survived.describe() plt.title(u'Number of passengers who died/survived on the Titanic', y= 1.1, size=15) sns.countplot(x='Survived', data = train) """ Explanation: Both samples have many missing values for the "Age" feature, values of the "Cabin" feature are almost entirely absent, and one value of the Fare feature is missing in the test set. Survived End of explanation """ train.Pclass.describe() plt.title(u'Survival rate by class', y= 1.1, size = 20 ) sns.barplot(x = 'Pclass', y = 'Survived', data = train) """ Explanation: The classes turned out to be imbalanced; take this into account when training the models. Pclass End of explanation """ plt.title(u'Number of people travelling in each class', y= 1.1, size=15) sns.countplot(x='Pclass', data = train) """ Explanation: a strong feature End of explanation """ sex_encoder = LabelEncoder() train['Sex_enc'] = sex_encoder.fit_transform(train.Sex) test['Sex_enc'] = sex_encoder.transform(test.Sex) plt.title(u'Number of men and women on board', y =1.1, size = 20) sns.countplot(x='Sex', data = train) plt.title(u'Survival rate by class and sex', y= 1.1, size = 20 ) sns.barplot(x = 'Pclass', y = 'Survived', data = train, hue = 'Sex') """ Explanation: a strong feature Sex End of explanation """ sns.countplot(x='SibSp', data = train) sns.countplot(x='Parch', data = train) """ Explanation: Class and sex are very strong features SibSp and Parch End of explanation """ for df in [train,test]: df['Fam_size'] = df.Parch + df.SibSp sns.countplot(x='Fam_size', data = train) fig = plt.figure(figsize = (10,10)) sns.barplot(x='Fam_size', y = 'Survived', hue = 'Sex',data = train) fig = plt.figure(figsize = (10,10)) sns.barplot(x='Fam_size', y =
'Survived', hue = 'Pclass',data = train) """ Explanation: Let's create a Family Size feature to add weight to the right tail of the distribution End of explanation """ train.Ticket.describe() for elem in train.Ticket: print(elem) #print(elem[0]) """ Explanation: A very strong feature. Childless people from 3rd class had practically no chance of surviving Ticket End of explanation """ train['letter'] = train.Ticket.apply(lambda x: x[0]) fig = plt.figure(figsize = (10,10)) plt.title(u'Distribution of passengers by class depending on the first letter of the ticket number', size = 20) sns.countplot(x='letter',hue='Pclass', data = train) fig = plt.figure(figsize = (10,10)) sns.barplot(x='letter',y = 'Survived', hue='Pclass', data = train) """ Explanation: The first letter of the ticket may indicate a series or some other information about the ticket seller, which could have affected how passengers were grouped on board End of explanation """ ticket = pd.concat((train.Ticket, test.Ticket), ignore_index = True) ticket_freq = ticket.value_counts() for df in (train,test): fticket_list = [] for elem in df.Ticket: if ticket_freq[str(elem)]>1: fticket_list.append(1) else: fticket_list.append(0) df['ticket_freq'] = fticket_list plt.title(u'Share of survivors depending on sex and whether the passenger holds a repeated ticket', size = 20) sns.barplot(x = 'ticket_freq', y = 'Survived', data = train, hue = 'Sex') plt.title(u'Share of survivors depending on class and whether the passenger holds a repeated ticket', size = 20) sns.barplot(x = 'ticket_freq', y = 'Survived', data = train, hue = 'Pclass') """ Explanation: The feature is rather weak. Some ticket numbers repeat, so let's split them into repeated and non-repeated ones End of explanation """ fig = plt.figure(figsize=(10,10)) plt.title(u'Fare distribution in the training set', y= 1.1, size = 20 ) sns.distplot(train.Fare) fig = plt.figure(figsize=(10,10)) plt.title(u'Fare distribution in the test set', y= 1.1, size = 20 ) sns.distplot(test.Fare.loc[test.Fare.isnull() == False]) """
Explanation: A feature of medium strength, since the confidence intervals overlap Fare End of explanation """ train.Fare.loc[train.Fare == 0] train.Survived.loc[train.Fare == 0] sns.barplot(x =train.Fare.loc[train.Fare == 0] , y = 'Survived', data = train, hue = 'Pclass') sns.barplot(x =train.Fare.loc[train.Fare == 0] , y = 'Survived', data = train, hue = 'Sex') """ Explanation: In both the training and test sets the distribution of the Fare feature is clearly not normal. Let's combine the values and try to transform them toward a normal distribution. While exploring the feature it turned out that there are passengers who did not pay for their ticket; let's examine them End of explanation """ for df in (train, test): df['free_ticket'] = df.Fare.apply(lambda x: 1 if x==0 else 0) # replace 0 values with 0.01, since the transformations below work poorly with 0 for df in (train, test): df.Fare = df.Fare.apply(lambda x:0.01 if x==0 else x) train['log_fare'] = train.Fare.apply(np.log10) sns.distplot(train.log_fare) sns.distplot(boxcox(train.Fare)[0]) shapiro(boxcox(train.Fare)[0]) train.Fare.loc[train.Fare <= 0] """ Explanation: Passengers with free tickets are men from 3rd class with a practically zero chance of survival. Let's create a free_ticket feature: 1 - free ticket, 0 - paid ticket End of explanation """ plt.title(u'Age distribution of passengers', y= 1.1, size = 20 ) sns.distplot(train.Age.loc[train.Age.isnull()== False]) train.Age.loc[train.Age.isnull()== False].describe() """ Explanation: We'll tackle Age at the end, since its gaps need to be filled in End of explanation """
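The engineered ticket_freq and free_ticket columns above both follow the same pattern — derive a binary flag from a condition on a column. A minimal pure-Python sketch of that pattern on toy values (not the real Titanic data):

```python
from collections import Counter

# Toy stand-ins for the Ticket and Fare columns.
tickets = ["A123", "B77", "A123", "C9", "B77", "B77"]
fares = [7.25, 0.0, 71.28, 0.0, 8.05, 512.33]

counts = Counter(tickets)
ticket_freq = [1 if counts[t] > 1 else 0 for t in tickets]
free_ticket = [1 if f == 0 else 0 for f in fares]

print(ticket_freq)  # [1, 1, 1, 0, 1, 1]
print(free_ticket)  # [0, 1, 0, 1, 0, 0]
```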
dahlend/Physics77Fall17
Workshop 3 - Practice Makes Perfect.ipynb
gpl-3.0
import matplotlib.pyplot as plt import numpy as np # Base Python range() doesn't allow decimal numbers # numpy improved and made their own: t = np.arange(0.0, 1., 0.01) y = t**3. plt.plot(100 * t, y) plt.xlabel('Time (% of semester)') plt.ylabel('Enjoyment of Fridays') plt.title('Happiness over Time') plt.show() """ Explanation: Workshop 3 - Practice Makes Perfect There is a sign-in sheet, sign in or you won't get credit for attendance today! Today: Today is about practicing what we were introduced to last Friday. First I will review what we did, then talk a little about whitespace, and packages. Github Repo for Workshops - Where you can download the notebooks used in this class - https://github.com/dahlend/Physics77Fall17 Goals for Today: You can download today's notebook from github.com/dahlend/Physics77Fall17 - 0-Review - 1-Whitespace - 2-Packages! - 3-Problems - Problem 0 - Problem 1 - Problem 2 - Problem 3 - Problem 4 0 - Review Reminder cheat sheet: Types of Variables (not complete list) |Type | Name | Example | | ----- | -------------- | -------:| |int() | Integer | 1337 | |float() | decimal number | -2345.12| |complex() | complex number | 7-1j | |string() | text | "Hello World"| |list() | list of things| ['c', 1, 3, 1j]| |bool() | boolean (True or False)| True | Comparing things |Operator| Name| |:------:| ----| | > | greater than| | < | less than| | == | equal| | >= | greater than or equal| | <= | less than or equal| | != | not equal | If Statements if (some condition): if the condition is true, run code which is indented here else: run this code if the condition is not met For Loops some_list = [1, 2, 'text', -51.2] for x in some_list: print(x) if type(x) == int: print("x is an integer!") 1 - Whitespace Whitespace is a name for the space character '&nbsp;&nbsp;' or a tab, which is written '\t'. Whitespace is very important in python (This is not always true with other languages), it is used to signify hierarchy. What does that mean?
Let's say we have an 'if' statement: if 6 < 5: print("Hi, I'm filler text") print("see any good movies lately?") print("ok, 6 is less than 5. That's not true!") print("If statement is over") Whitespace is how you tell python what lines of code are associated with the if statement, so in this case we should see only the output: "If statement is over" Python treats all code which is at the same number of spaces as being in the same sort of 'block', so above, the 3 print lines 'inside' the if statement will be run when the statement is true. Having to add space before lines like this happens any time you have a line of code that ends with a ':' Examples of this are: # If statements if True: print("Doing stuff") else: print("Doing other stuff") # For loops for x in some_list: print('x is equal to ', x) print('x + 5 = ', x + 5) # Defining your own function def my_function(x): print("my_function was given the variable x=", x) return x When you combine multiples of these together, you have to keep indenting! x = 6 for y in range(10): print("y = ", y) # This will run for every y in [0,...,9] if x > y: print(x) # this only runs if x > y, for every y in [0,...,9] else: print("y is bigger than x!") 2 - Packages (AKA Libraries, Modules, etc.) Python itself has a limited number of tools available to you. For example, let's say I want to know the result of some Bessel function. The average python user doesn't care about that, so it's not included in base python. But if you recall, on the first day I said, 'someone, somewhere, has done what I'm trying to do'. What we can do is use their solution. Libraries, like their namesakes, are collections of information, in this case they generally contain functions and tools that other people have built and made available to the world.
The main packages we will use in this class are: - Numpy (Numerical Python) - essential mathematical methods for number crunching - MatPlotLib - the standard plotting toolset used in python - Scipy (Scientific Python) - advanced mathematical tools built on numpy Let's take a look at matplotlib's website https://matplotlib.org/ All of these packages are EXTREMELY well documented, hopefully you will get comfortable looking at the documentation for these tools by the end of the semester. Example: End of explanation """ # Your Code Here """ Explanation: Breaking it down: I want access to numpy's functions, so I need to 'import' the package. But 'numpy' is annoying to type, so I'll name it np when I use it. import numpy as np This means all of numpy is available to me, but I have to tell python when I'm using it. np.arange(0, 1, 0.01) This says, there is a function called arange() inside numpy, so I'm telling python to use it. Periods here are used to signify something is INSIDE something else, IE: np.arange is saying look for a thing called 'arange' inside np (which is a shorthand for numpy) Links to documentation for some functions used above: np.arange plt.plot 3 - Problems Problem 0 Plot a sine wave using np.sin(), the example above is a good starting point! Hint: np.sin can accept a list of numbers and returns the sine of each of the numbers as another list. End of explanation """ # Your Code Here """ Explanation: Problem 1 We are going to plot a sawtooth wave. This is trickier, since there isn't a numpy function for it! You will have to construct a list of numbers which contains the correct y values. This list of values will have to be something like: y = [0, 1, 2, 3, 4, 5, 0, 1, 2 ...] In this case, we have the numbers from 0 to 5 being repeated over and over.
Goals for the sawtooth plot: - go from 0 to n-1 - in the example above, n=6, where n is how many numbers are being repeated - have a total of length numbers in the list y Steps to pull this off: 1) Start with an empty list, this can be done with y = [ ] We will then loop length times, adding the right value to the end of the list y 2) Make a for loop going from 0 to length, and call the iteration value i 3) Now we have to add the correct value to the end of the list y, for the first n steps of the loop this is easy, we just are adding i. Thinking this through: i = 0, we add 0 to the end of the list i = 1, we add 1 to the end of the list ... i = n, we add 0 to the end of the list i = n + 1, we add 1 to the end of the list i = n + 2, we add 2 to the end of the list ... i = 2*n, we add 0 to the end of the list i = 2*n+1, we add 1 to the end of the list Hint Remember the % operator from last week? 5 % 2 = 1 ( the remainder after division is 1) (3*n) % n = 0 $\qquad$ $\frac{3n}{n}$ is 3 remainder 0 (3*n + 1) % n = 1 $\qquad$ $\frac{3n+1}{n}$ is 3 remainder 1 4) Once we know the correct value from (3), we can add it to the list y with y.append(value_from_3) 5) Plot it! Lists can have values "appended" to them, in other words you can add more things to the list you have already made. End of explanation """ list_of_single_thing = ['hello'] 5 * list_of_single_thing # Look familiar? 4 * [0, 1, 2, 3, 4, 5] """ Explanation: Problem 2 Fun with lists! Lists can do all sorts of things, for example, we can repeat entries of a list many times: End of explanation """ n = 20 length = 100 # Your code here """ Explanation: Now redo problem 1 but using this, to get this to work you will have to build a list of the correct length. range() isn't actually a list, but you can turn it into one! list(range()) There are many ways to achieve the same goal when programming, some take less effort than others. 
End of explanation """ # Here is a list of 100 zeros x = 100*[0] """ Explanation: Problem 3 Now we will play with another aspect of lists, indexing. Given a list of numbers x, set every 5th number to 2. In order to do this, we need to have a way of accessing an element of the list. x[0] is the first element of the list x[1] is the second x[2] is the third ... x[n] is the n+1 th this is called 0-indexing, because we started counting at 0 (if you were wondering why i always start counting from 0 in this class, this is why) We can set a value of a list to something like so: x[7] = 2 now the 8th element of the list x is 2. So to solve this problem, I'm providing you a list x, containing 100 zeros. Set every 5th to a 2, and plot the result. Steps: 1) Make a for loop going from 0 to 100 in steps of 5 hint range(0, 100, 5) 2) for each step in the for loop set the x[i] number to 2 3) plot the result End of explanation """ x = np.log(np.arange(1, 100) ** 3) # Your code here """ Explanation: Problem 4 Tell me the average value, and standard deviation, of a list of numbers I provide: numpy has a function for this, google is your friend. End of explanation """
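The modulo hint from Problem 1 collapses the whole loop into one pattern; one possible sketch of a solution (by no means the only way):

```python
# One possible sawtooth construction for Problem 1: i % n cycles 0..n-1.
n = 20
length = 100

y = [i % n for i in range(length)]

print(len(y))        # 100
print(y[19], y[20])  # 19 0
```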
rafburzy/Python_EE
RL_and_RLC_circuit/RLC_circuit_current_v2.ipynb
bsd-3-clause
#importing all required modules #important otherwise pop-up window may not work %matplotlib inline import numpy as np import scipy as sp from scipy.integrate import odeint, ode, romb, cumtrapz import matplotlib as mpl import matplotlib.pyplot as plt from math import * import seaborn from IPython.display import Image #bokeh from bokeh.plotting import figure, output_file, output_notebook, show """ Explanation: Solving for current in R-L-C circuit End of explanation """ # RMS value of voltage u = 230 #time vector t = np.linspace(0,0.4, 1000) #frequency & angular frequency f = 50 omega = 2 * pi * f #Resistance (values to consider: 5 and 10 Ohms) R = 5 #Inductance L = 0.1 XL = 2*pi*f*L #Capacitance (worth considering: 0.01 - two-inertia case, or 0.001 - oscillatory) C = 0.01 XC = 1/(omega*C) #Phase angle phi=atan((XL-XC)/R) #closing angle [rad] alpha = 0 XL, XC """ Explanation: RLC circuit is governed by the following formulas: <img src="formula_1.png"> To put the last equation in order: <img src="formula_2.png"> This will be a starting point for the analysis, which includes two cases: RLC circuit fed with a dc voltage RLC circuit fed with an ac voltage First we need to define auxiliary variables End of explanation """ ua = [u for k in t] #definition of the function dp/dt def di(y,t): #x = i, p = di/dt x, p = y[0], y[1] dx = p dp = 1/L*(-R*p-(1/C)*x) return [dx, dp] #initial state #initial capacitor voltage uc0 = 0 y0 = [0.0, 1/L*(u-uc0)] y0 I = odeint(di, y0, t) ia = I[:,0] # Capacitor voltage definition: duc = ia/C uc = cumtrapz(duc, dx=0.4/1000, initial=0) # after integration the t and uc vectors had different lengths, so append one item uc = np.append(uc, uc[999]) fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(8,8)) ax[0].plot(t,ia, label="Current") ax[0].set_ylabel("Current [A]") ax[0].set_xlabel("Time [s]") ax[0].set_title("Current in R-L-C circuit during switch-on") ax[0].legend() ax[1].plot(t,ua, label="Supply voltage", color="green") ax[1].plot(t,uc, label="Capacitor
voltage", color="orange") ax[1].set_ylabel("Voltage [V]") ax[1].set_xlabel("Time [s]") ax[1].set_title("Supply voltage") ax[1].legend() fig.tight_layout() #checking damping factor: if below 1 - underdamped, if above 1 - overdamped damp = (R/2)*sqrt(C/L) damp """ Explanation: RLC circuit fed with dc voltage For a dc voltage case it has a constant value, so its derivative over time is equal to zero: <img src="formula_4.png"> Consequently our equation becomes: <img src="formula_5.png"> Voltage is given as: End of explanation """ ub = [sqrt(2)*u*sin(omega*k + alpha) for k in t] # definition of the function dp/dt def di(y,t): #x = i, p = di/dt x, p = y[0], y[1] dx = p dp = 1/L*(omega*sqrt(2)*u*cos(omega*t + alpha)-R*p-(1/C)*x) return [dx, dp] #initial state #initial capacitor voltage uc0 = 0 y0 = [0.0, 1/L*(ua[0]-uc0)] I = odeint(di, y0, t) ib = I[:,0] #Capacitor voltage derivative duc2 = ib/C uc2 = cumtrapz(duc2, dx=0.4/1000, initial=0) fig, ax = plt.subplots(nrows=2, ncols=1, figsize=(8,8)) ax[0].plot(t,ib, label="Current") ax[0].set_ylabel("Current [A]") ax[0].set_xlabel("Time [s]") ax[0].set_title("Current in R-L-C circuit during switch-on") ax[0].legend() ax[1].plot(t,ub, label="Line voltage", color="green") ax[1].plot(t,uc2, label="Capacitor voltage", color="orange") ax[1].set_ylabel("Voltage [V]") ax[1].set_xlabel("Time [s]") ax[1].set_title("Supply voltage") ax[1].legend() fig.tight_layout() #checking the amplitude value in steady state Im = sqrt(2)*u/(sqrt(R**2+(XL-XC)**2)) Im """ Explanation: RLC Circuit with sinusoidal voltage Now voltage and its derivative over time (not equal to zero in this case) are given as: <img src="formula_3.png"> End of explanation """
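The damping check and the steady-state amplitude used above have closed forms that can be verified independently; a quick sketch with the same component values (R = 5 Ohm, L = 0.1 H, C = 0.01 F, 230 V RMS, 50 Hz):

```python
from math import pi, sqrt

R, L, C, u, f = 5.0, 0.1, 0.01, 230.0, 50.0
omega = 2 * pi * f
XL = omega * L
XC = 1 / (omega * C)

damp = (R / 2) * sqrt(C / L)                    # damping ratio, < 1 -> underdamped
omega_d = sqrt(1 / (L * C) - (R / (2 * L))**2)  # damped ring frequency [rad/s]
Im = sqrt(2) * u / sqrt(R**2 + (XL - XC)**2)    # steady-state current amplitude [A]

print(round(damp, 3), round(omega_d, 2), round(Im, 2))  # roughly 0.79, 19.4 rad/s, 10.3 A
```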
eford/rebound
ipython_examples/FourierSpectrum.ipynb
gpl-3.0
import rebound import numpy as np sim = rebound.Simulation() sim.units = ('AU', 'yr', 'Msun') sim.add("Sun") sim.add("Jupiter") sim.add("Saturn") """ Explanation: Fourier analysis & resonances A great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database: End of explanation """ sim.integrator = "whfast" sim.dt = 1. # in years. About 10% of Jupiter's period sim.move_to_com() """ Explanation: Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\%$ of Jupiter's orbital period. End of explanation """ Nout = 100000 tmax = 3.e5 Nplanets = 2 x = np.zeros((Nplanets,Nout)) ecc = np.zeros((Nplanets,Nout)) longitude = np.zeros((Nplanets,Nout)) varpi = np.zeros((Nplanets,Nout)) times = np.linspace(0.,tmax,Nout) ps = sim.particles for i,time in enumerate(times): sim.integrate(time) # note we used above the default exact_finish_time = 1, which changes the timestep near the outputs to match # the output times we want. This is what we want for a Fourier spectrum, but technically breaks WHFast's # symplectic nature. Not a big deal here. os = sim.calculate_orbits() for j in range(Nplanets): x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0 ecc[j][i] = os[j].e longitude[j][i] = os[j].l varpi[j][i] = os[j].Omega + os[j].omega """ Explanation: The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies. 
Now let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\sim 10$ yrs) in the Fourier spectrum. End of explanation """ %matplotlib inline labels = ["Jupiter", "Saturn"] import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) plt.plot(times,ecc[0],label=labels[0]) plt.plot(times,ecc[1],label=labels[1]) ax.set_xlabel("Time (yrs)") ax.set_ylabel("Eccentricity") plt.legend(); """ Explanation: Let's see what the eccentricity evolution looks like with matplotlib: End of explanation """ from scipy import signal Npts = 3000 logPmin = np.log10(10.) logPmax = np.log10(1.e5) Ps = np.logspace(logPmin,logPmax,Npts) ws = np.asarray([2*np.pi/P for P in Ps]) periodogram = signal.lombscargle(times,x[0],ws) fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([10**logPmin,10**logPmax]) ax.set_ylim([0,0.15]) ax.set_xlabel("Period (yrs)") ax.set_ylabel("Power") """ Explanation: Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps). Let's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. 
From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position. End of explanation """ fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(Ps,np.sqrt(4*periodogram/Nout)) ax.set_xscale('log') ax.set_xlim([600,1600]) ax.set_ylim([0,0.003]) ax.set_xlabel("Period (yrs)") ax.set_ylabel("Power") """ Explanation: We pick out the obvious signal in the eccentricity plot with a period of $\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\sim 2\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). Additionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. But wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range: End of explanation """ def zeroTo360(val): while val < 0: val += 2*np.pi while val > 2*np.pi: val -= 2*np.pi return val*180/np.pi """ Explanation: This is the right timescale to be due to resonant perturbations between giant planets ($\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. 
Let's check whether this is in fact responsible for the peak. In this case, we have that the mean longitude of Jupiter $\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\phi_{5:2} = 5\lambda_S - 2\lambda_J - 3\varpi_J$, where $\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates. To see a clear trend, we have to shift each value of $\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees: End of explanation """ phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)") ax.set_ylabel(r"$\phi_{5:2}$") """ Explanation: Now we construct $\phi_{5:2}$ and plot it over the first 5000 yrs. End of explanation """ phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)] fig = plt.figure(figsize=(12,5)) ax = plt.subplot(111) ax.plot(times,phi2) ax.set_xlim([0,5.e3]) ax.set_ylim([0,360.]) ax.set_xlabel("time (yrs)") ax.set_ylabel(r"$\phi_{2:1}$") """ Explanation: We see that the resonant angle $\phi_{5:2}$ circulates, but with a long period of $\approx 900$ yrs (compared to the orbital periods of $\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup! This resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. 
As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\phi_{2:1} = 2\lambda_S - \lambda_J - \varpi_J$, End of explanation """
vzg100/Post-Translational-Modification-Prediction
.ipynb_checkpoints/Phosphorylation Sequence Tests -isolation_forest -dbptm+ELM-checkpoint.ipynb
mit
from pred import Predictor from pred import sequence_vector from pred import chemical_vector """ Explanation: Template for test End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: try: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_s_filtered.csv") y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0) y.supervised_training("isolation_forest") y.benchmark("Data/Benchmarks/phos.csv", "S") del y except: print("failed") try: print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_s_filtered.csv") x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1) x.supervised_training("isolation_forest") x.benchmark("Data/Benchmarks/phos.csv", "S") del x except: print("failed") """ Explanation: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation. Included is N Phosphorylation however no benchmarks are available, yet. Training data is from phospho.elm and benchmarks are from dbptm. 
Note: SMOTEEN seems to preform best End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: try: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_Y_filtered.csv") y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0) y.supervised_training("isolation_forest") y.benchmark("Data/Benchmarks/phos.csv", "Y") del y except: print("failed") try: print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_Y_filtered.csv") x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1) x.supervised_training("isolation_forest") x.benchmark("Data/Benchmarks/phos.csv", "Y") del x except: print("failed") """ Explanation: Y Phosphorylation End of explanation """ par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"] for i in par: try: print("y", i) y = Predictor() y.load_data(file="Data/Training/clean_t_filtered.csv") y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0) y.supervised_training("isolation_forest") y.benchmark("Data/Benchmarks/phos.csv", "T") del y except: print("failed") try: print("x", i) x = Predictor() x.load_data(file="Data/Training/clean_t_filtered.csv") x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1) x.supervised_training("isolation_forest") x.benchmark("Data/Benchmarks/phos.csv", "T") del x except: print("failed") """ Explanation: T Phosphorylation End of explanation """
mitdbg/modeldb
client/workflows/demos/census-end-to-end-s3-example.ipynb
mit
# restart your notebook if prompted on Colab try: import verta except ImportError: !pip install verta """ Explanation: Logistic Regression with Grid Search (scikit-learn) <a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-end-to-end-s3-example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> End of explanation """ HOST = "app.verta.ai" PROJECT_NAME = "Census Income Classification - S3 Data" EXPERIMENT_NAME = "Logistic Regression" # import os # os.environ['VERTA_EMAIL'] = '' # os.environ['VERTA_DEV_KEY'] = '' """ Explanation: This example builds on our basic census income classification example by incorporating S3 data versioning. End of explanation """ from __future__ import print_function import warnings from sklearn.exceptions import ConvergenceWarning warnings.filterwarnings("ignore", category=ConvergenceWarning) warnings.filterwarnings("ignore", category=FutureWarning) import itertools import os import time import six import numpy as np import pandas as pd import sklearn from sklearn import model_selection from sklearn import linear_model from sklearn import metrics try: import wget except ImportError: !pip install wget # you may need pip3 import wget """ Explanation: Imports End of explanation """ from verta import Client from verta.utils import ModelAPI client = Client(HOST) proj = client.set_project(PROJECT_NAME) expt = client.set_experiment(EXPERIMENT_NAME) """ Explanation: Log Workflow This section demonstrates logging model metadata and training artifacts to ModelDB. 
Instantiate Client End of explanation """ from verta.dataset import S3 dataset = client.set_dataset(name="Census Income S3") version = dataset.create_version(S3("s3://verta-starter")) DATASET_PATH = "./" train_data_filename = DATASET_PATH + "census-train.csv" test_data_filename = DATASET_PATH + "census-test.csv" def download_starter_dataset(bucket_name): if not os.path.exists(DATASET_PATH + "census-train.csv"): train_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-train.csv" if not os.path.isfile(train_data_filename): wget.download(train_data_url) if not os.path.exists(DATASET_PATH + "census-test.csv"): test_data_url = "http://s3.amazonaws.com/" + bucket_name + "/census-test.csv" if not os.path.isfile(test_data_filename): wget.download(test_data_url) download_starter_dataset("verta-starter") df_train = pd.read_csv(train_data_filename) X_train = df_train.iloc[:,:-1] y_train = df_train.iloc[:, -1] df_train.head() """ Explanation: <h2 style="color:blue">Prepare Data</h2> End of explanation """ hyperparam_candidates = { 'C': [1e-6, 1e-4], 'solver': ['lbfgs'], 'max_iter': [15, 28], } hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values)) for values in itertools.product(*hyperparam_candidates.values())] """ Explanation: Prepare Hyperparameters End of explanation """ def run_experiment(hyperparams): # create object to track experiment run run = client.set_experiment_run() # create validation split (X_val_train, X_val_test, y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train, test_size=0.2, shuffle=True) # log hyperparameters run.log_hyperparameters(hyperparams) print(hyperparams, end=' ') # create and train model model = linear_model.LogisticRegression(**hyperparams) model.fit(X_train, y_train) # calculate and log validation accuracy val_acc = model.score(X_val_test, y_val_test) run.log_metric("val_acc", val_acc) print("Validation accuracy: {:.4f}".format(val_acc)) # create deployment artifacts model_api = 
ModelAPI(X_train, y_train) requirements = ["scikit-learn"] # save and log model run.log_model(model, model_api=model_api) run.log_requirements(requirements) # log dataset snapshot as version run.log_dataset_version("train", version) for hyperparams in hyperparam_sets: run_experiment(hyperparams) """ Explanation: Train Models End of explanation """ best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0] print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc"))) best_hyperparams = best_run.get_hyperparameters() print("Hyperparameters: {}".format(best_hyperparams)) """ Explanation: Revisit Workflow This section demonstrates querying and retrieving runs via the Client. Retrieve Best Run End of explanation """ model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams) model.fit(X_train, y_train) """ Explanation: Train on Full Dataset End of explanation """ train_acc = model.score(X_train, y_train) print("Training accuracy: {:.4f}".format(train_acc)) """ Explanation: Calculate Accuracy on Full Training Set End of explanation """ model_id = 'YOUR_MODEL_ID' run = client.set_experiment_run(id=model_id) """ Explanation: Deployment and Live Predictions This section demonstrates model deployment and predictions, if supported by your version of ModelDB. End of explanation """ df_test = pd.read_csv(test_data_filename) X_test = df_test.iloc[:,:-1] """ Explanation: Prepare "Live" Data End of explanation """ run.deploy(wait=True) run """ Explanation: Deploy Model End of explanation """ deployed_model = run.get_deployed_model() for x in itertools.cycle(X_test.values.tolist()): print(deployed_model.predict([x])) time.sleep(.5) """ Explanation: Query Deployed Model End of explanation """
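The hyperparameter-grid expansion used by `run_experiment` above (`dict(zip(...))` over `itertools.product`) can be restated as a standalone sketch with the same candidate values, showing exactly which config dictionaries the training loop receives:

```python
import itertools

# Same candidate grid as in the notebook: 2 values of C x 1 solver x 2 values of max_iter
hyperparam_candidates = {
    'C': [1e-6, 1e-4],
    'solver': ['lbfgs'],
    'max_iter': [15, 28],
}

# itertools.product yields every combination of the value lists, in key order;
# zip pairs each combination back with the keys to form one config dict
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
                   for values in itertools.product(*hyperparam_candidates.values())]

print(len(hyperparam_sets))  # 4
print(hyperparam_sets[0])    # {'C': 1e-06, 'solver': 'lbfgs', 'max_iter': 15}
```

Because dicts preserve insertion order in Python 3.7+, the `keys()` and `values()` views line up, so each generated dict pairs every key with the right candidate value.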
tuanavu/coursera-university-of-washington
machine_learning/3_classification/assigment/week2/module-3-linear-classifier-learning-assignment-blank.ipynb
mit
import graphlab """ Explanation: Implementing logistic regression from scratch The goal of this notebook is to implement your own logistic regression classifier. You will: Extract features from Amazon product reviews. Convert an SFrame into a NumPy array. Implement the link function for logistic regression. Write a function to compute the derivative of the log likelihood function with respect to a single coefficient. Implement gradient ascent. Given a set of coefficients, predict sentiments. Compute classification accuracy for the logistic regression model. Let's get started! Fire up GraphLab Create Make sure you have the latest version of GraphLab Create. Upgrade by pip install graphlab-create --upgrade See this page for detailed instructions on upgrading. End of explanation """ products = graphlab.SFrame('amazon_baby_subset.gl/') """ Explanation: Load review dataset For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews. End of explanation """ products['sentiment'] """ Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment. End of explanation """ products.head(10)['name'] print '# of positive reviews =', len(products[products['sentiment']==1]) print '# of negative reviews =', len(products[products['sentiment']==-1]) """ Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews. 
End of explanation
"""
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
    important_words = json.load(f)
important_words = [str(s) for s in important_words]

print important_words
"""
Explanation: Note: For this assignment, we eliminated class imbalance by choosing a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of the 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
"""
def remove_punctuation(text):
    import string
    return text.translate(None, string.punctuation)

products['review_clean'] = products['review'].apply(remove_punctuation)
"""
Explanation: Now, we will perform 2 simple data transformations:

Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)

We start with Step 1, which can be done as follows:
End of explanation
"""
for word in important_words:
    products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
"""
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words, and the number of occurrences of a given word is counted. 
End of explanation """ products['perfect'] """ Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews. End of explanation """ import numpy as np """ Explanation: Now, write some code to compute the number of product reviews that contain the word perfect. Hint: * First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1. * Sum the number of 1s in the column contains_perfect. Quiz Question. How many reviews contain the word perfect? Convert SFrame to NumPy array As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices. First, make sure you can perform the following import. End of explanation """ def get_numpy_data(data_sframe, features, label): data_sframe['intercept'] = 1 features = ['intercept'] + features features_sframe = data_sframe[features] feature_matrix = features_sframe.to_numpy() label_sarray = data_sframe[label] label_array = label_sarray.to_numpy() return(feature_matrix, label_array) """ Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term. End of explanation """ # Warning: This may take a few minutes... feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment') """ Explanation: Let us convert the data into NumPy arrays. End of explanation """ feature_matrix.shape """ Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? 
(If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in an acceptable amount of time. In the interest of time, please refrain from running the get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']
End of explanation
"""
sentiment
"""
Explanation: Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
Now, let us see what the sentiment column looks like:
End of explanation
"""
'''
produces probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
    # Take dot product of feature_matrix and coefficients
    # YOUR CODE HERE
    ...

    # Compute P(y_i = +1 | x_i, w) using the link function
    # YOUR CODE HERE
    predictions = ...

    # return predictions
    return predictions
"""
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
"""
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])

correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.)
] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )

print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions           =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
"""
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature\_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature\_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right] =
\left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
"""
def feature_derivative(errors, feature):
    # Compute the dot product of errors and feature
    derivative = ...
# Return the derivative return derivative """ Explanation: Compute derivative of log likelihood with respect to a single coefficient Recall from lecture: $$ \frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) $$ We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments: * errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$. * feature vector containing $h_j(\mathbf{x}_i)$ for all $i$. Complete the following code block: End of explanation """ def compute_log_likelihood(feature_matrix, sentiment, coefficients): indicator = (sentiment==+1) scores = np.dot(feature_matrix, coefficients) logexp = np.log(1. + np.exp(-scores)) # Simple check to prevent overflow mask = np.isinf(logexp) logexp[mask] = -scores[mask] lp = np.sum((indicator-1)*scores - logexp) return lp """ Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm. The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation): $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$ We provide a function to compute the log likelihood for the entire dataset. 
End of explanation """ dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]]) dummy_coefficients = np.array([1., 3., -1.]) dummy_sentiment = np.array([-1, 1]) correct_indicators = np.array( [ -1==+1, 1==+1 ] ) correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] ) correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] ) correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] ) correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] ) print 'The following outputs must match ' print '------------------------------------------------' print 'correct_log_likelihood =', correct_ll print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients) """ Explanation: Checkpoint Just to make sure we are on the same page, run the following code block and check that the outputs match. End of explanation """ from math import sqrt def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter): coefficients = np.array(initial_coefficients) # make sure it's a numpy array for itr in xrange(max_iter): # Predict P(y_i = +1|x_i,w) using your predict_probability() function # YOUR CODE HERE predictions = ... # Compute indicator value for (y_i = +1) indicator = (sentiment==+1) # Compute the errors as indicator - predictions errors = indicator - predictions for j in xrange(len(coefficients)): # loop over each coefficient # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]. # Compute the derivative for coefficients[j]. Save it in a variable called derivative # YOUR CODE HERE derivative = ... # add the step size times the derivative to the current coefficient ## YOUR CODE HERE ... 
        # Checking whether log likelihood is increasing
        if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
        or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
            lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
            print 'iteration %*d: log likelihood of observed labels = %.8f' % \
                (int(np.ceil(np.log10(max_iter))), itr, lp)
    return coefficients
"""
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
"""
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
                                   step_size=1e-7, max_iter=301)
"""
Explanation: Now, let us run the logistic regression solver.
End of explanation
"""
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
"""
Explanation: Quiz question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0 \\
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows:
End of explanation
"""
num_mistakes = ... # YOUR CODE HERE
accuracy = ...
# YOUR CODE HERE

print "-----------------------------------------------------"
print '# Reviews   correctly classified =', len(products) - num_mistakes
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total                  =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
"""
Explanation: Quiz question: How many reviews were predicted to have positive sentiment?
Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
End of explanation
"""
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
"""
Explanation: Quiz question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in the Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e., (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
"""
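The overflow guard inside the provided compute_log_likelihood can be exercised on its own: for a very negative score, np.exp(-score) overflows and np.log(1. + np.exp(-score)) comes out as inf, and the mask then substitutes -score, which is the correct limit since $\ln(1 + e^{-s}) \to -s$ as $s \to -\infty$. A small standalone check (the example scores here are made up):

```python
import numpy as np

scores = np.array([-1000., 0., 1000.])

# exp(1000) overflows to inf; silence the overflow warning for this demonstration
with np.errstate(over='ignore'):
    logexp = np.log(1. + np.exp(-scores))

# Same guard as in compute_log_likelihood: replace inf entries with -score
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]

print(logexp)  # [1000., ln(2) ~ 0.6931, 0.]
```

Without the guard, a single extreme score would turn the whole log likelihood into inf, making it useless for monitoring convergence.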
dwhswenson/contact_map
examples/custom_plotting.ipynb
lgpl-2.1
%matplotlib inline
import matplotlib.pyplot as plt
import mdtraj as md

traj = md.load("5550217/kras.xtc", top="5550217/kras.pdb")

from contact_map import ContactFrequency

traj_contacts = ContactFrequency(traj)
frame_contacts = ContactFrequency(traj[0])
diff = traj_contacts - frame_contacts
"""
Explanation: Customizing contact map plots
The main plotting methods use matplotlib. If you're already familiar with matplotlib, you already have a good background. If not, what you learn about Contact Map Explorer will transfer to other projects.
End of explanation
"""
# Make a subplot with 1 row but three columns and a bigger figsize
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(15, 4))

# Flatten the axes array for easy iteration
axs = axs.flatten()

# Make a list of the contact objects we want to plot, in this case: frame, trajectory,
# difference
contacts = [frame_contacts, traj_contacts, diff]

# Now loop and make the plot
for ax, contact in zip(axs, contacts):
    contact.residue_contacts.plot_axes(ax=ax)
"""
Explanation: Putting multiple plots in one figure
You may want to plot multiple maps in one figure. This can easily be done by using the plot_axes method, which acts just like plot except that it requires that you give it a Matplotlib Axes object to plot into.
End of explanation
"""
fig, axs = plt.subplots(1, 2, figsize=(10, 4))
traj_contacts.residue_contacts.plot_axes(ax=axs[0]);
diff.residue_contacts.plot_axes(ax=axs[1]);
"""
Explanation: Changing the color map
You can use any Matplotlib color map; see the Matplotlib documentation for details. In general, the color schemes that you are likely to use with Contact Map Explorer are either "diverging" or "sequential." Diverging color maps are useful when looking at contact differences. The default color map is seismic, which is a diverging color map. 
Contact Map Explorer has some extra tricks to make it so that the same color map works for either contact frequencies or contact differences: if given a diverging map, and if there are no negative values in the data, Contact Map Explorer will only use the upper half of the contact map. End of explanation """ fig, axs = plt.subplots(1, 2, figsize=(10, 4)) traj_contacts.residue_contacts.plot_axes(ax=axs[0], cmap='PRGn'); diff.residue_contacts.plot_axes(ax=axs[1], cmap='PRGn'); """ Explanation: Note that you're using the same color map in both, but when there's no negative data, we only use the upper half. We can do the same with any diverging color map; next we'll use the purple-green diverging map PRGn. End of explanation """ traj_contacts.residue_contacts.plot(cmap="Blues"); """ Explanation: For most of matplotlib's built-in color maps, Contact Map Explorer knows whether to treat the color map as diverging or sequential. The "Blues" color map is sequential: End of explanation """ fig, ax = traj_contacts.residue_contacts.plot() ax.set_xlim(0, 166) ax.set_ylim(166, 167) # also available as plt.xlim(0, 166); plt.ylim(166, 167) """ Explanation: More advanced aspects of color maps, particularly dealing with custom color maps, are discussed in Advanced matplotlib tricks. Common matplotlib customizations In this section, we'll show several matplotlib tricks that are frequently useful. None of this is specific to Contact Map Explorer; these are all just directly using matplotlib to achieve common goals. Overall, the limits here are the limits of matplotlib -- which is to say not many, but it isn't always obvious how to do it. Be aware that each contact is plotted as a 1-by-1 rectangle with lower limit at the index of the residue. That is to say, a contact between residue 0 on the x-axis and residue 15 on the y-axis is a box between lower-left coordinate (0, 15) and upper-right coordinate (1, 16). Especially important: plotted residue locations are based on residue index. 
This means:

The first residue plots between 0 and 1. This can cause confusion when thinking about PDB sequences, which start at residue 1.
If there is a gap in residue sequences, this gap does not appear in the plot. For example, if you had a system that only included PDB residue numbers 1, 2, 7, 8, these would show up on the plot with lower edges at 0, 1, 2, 3, respectively. This does not apply to atoms/residues that are not included in the query or haystack of your system.

The plotting is based on the numbering in the topology from the MDTraj trajectory you load into Contact Map Explorer.
A quick conceptual introduction to matplotlib
The main objects you may deal with in matplotlib are Figures (fig) and Axes (ax). A Figure is the whole plot; this may include multiple "subplots" with different contact maps (see examples above). An Axes is the rectangular region you're plotting in. The color bar is actually a second Axes in our Figure.
Our main plotting function follows the behavior of other matplotlib plotting functions in returning the Figure and the main Axes, i.e., the Axes object with the contact map. That Axes object has many methods that allow you to further customize the plot. Many of those functions can also be accessed through the matplotlib.pyplot interface, which unifies the Figure and Axes. The pyplot interface was designed to be easy for beginners to use, but it is much less flexible than directly interacting with the Figure and Axes.
Changing the range of the plot
End of explanation
"""
fig, ax = traj_contacts.residue_contacts.plot()
ax.set_xlabel('Residue Index')
ax.set_ylabel('Residue Index');
# also available as plt.xlabel; plt.ylabel
"""
Explanation: Customizing the axes labels and tick marks
It's easy to label your axes. By default, we do not label axes because the shift by one (due to PDB residue numbering starting at 1) and the possibility of gaps in the residue sequence could lead to misinterpretation. 
End of explanation
"""
fig, ax = traj_contacts.residue_contacts.plot()
ticklocations = [0, 20, 40, 60, 80, 100, 120, 140, 166.5]
ticklabels = ['0', '20', '40', '60', '80', '100', '120', '140', 'GTP']
for axis in [ax.xaxis, ax.yaxis]:
    axis.set_ticks(ticklocations)
    axis.set_ticklabels(ticklabels)

ax.set_xlim(0, 168)
ax.set_ylim(0, 168);
"""
Explanation: Matplotlib allows a great deal of customization in how the axes are presented. Here I'll make it so that GTP is labeled as "GTP" instead of as a number.
End of explanation
"""
fig, ax = traj_contacts.residue_contacts.plot()
# color='k' is black; lw=0.5 sets line width
ax.axvline(166, color='k', lw=0.5)
ax.axhline(166, color='k', lw=0.5)
# also available as plt.axvline, plt.axhline
"""
Explanation: Adding lines to guide the eye
Let's say we wanted to add a line to separate the protein (residues 0 to 165) from the GTP (residue 166) and the ions. We can do this with matplotlib's axvline and axhline to create vertical and horizontal lines, respectively.
End of explanation
"""
fig, ax = traj_contacts.residue_contacts.plot()
# Scale the labels of the colorbar.
fig.axes[1].yaxis.set_tick_params(labelsize=20)
"""
Explanation: Changing attributes of the colorbar
The ax we return is the same as fig.axes[0]. To get to the colorbar, use fig.axes[1].
End of explanation
"""
%%time
large_cutoff = ContactFrequency(trajectory=traj[::10], cutoff=1.5)
%%time
large_cutoff.residue_contacts.plot();
%%time
import matplotlib
import copy
cmap = copy.copy(plt.get_cmap('seismic'))
norm = matplotlib.colors.Normalize(vmin=-1, vmax=1)
plot = plt.pcolor(large_cutoff.residue_contacts.df, cmap=cmap, vmin=-1, vmax=1)
plot.cmap.set_under(cmap(norm(0)));
"""
Explanation: Performance when plotting
While residue_contacts.plot() is obviously a very easy way to make a plot, you can always convert the contact data to another format and then plot using other tools. For more on various methods to export data, see the notebook on integrations with other tools. 
Sometimes different plotting methods will be faster than the built-in version. For example, when the contact matrix is relatively dense, as with the example with a larger cutoff, it can be faster to go by way of exporting to pandas and plotting the DataFrame using matplotlib.pyplot.pcolor. End of explanation """
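The residue-index bookkeeping described in this notebook can be made concrete with a small helper (this is only an illustration, not part of the Contact Map Explorer API). It maps PDB residue numbers to the 0-based plot indices at which each residue's 1-by-1 box has its lower edge, reproducing the 1, 2, 7, 8 to 0, 1, 2, 3 example from the text:

```python
def pdb_to_plot_index(pdb_numbers):
    """Map each PDB residue number to the 0-based index used on the plot axes.

    Gaps in the PDB numbering collapse: each residue that is present simply
    takes the next index, so its box plots with a lower edge at that index.
    """
    return {pdb: idx for idx, pdb in enumerate(sorted(set(pdb_numbers)))}

print(pdb_to_plot_index([1, 2, 7, 8]))  # {1: 0, 2: 1, 7: 2, 8: 3}
```

A lookup like this is handy when translating features seen on the contact map back to residue numbers in the PDB file.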
davek44/Basset
tutorials/prepare_compendium.ipynb
mit
!cd ../data; preprocess_features.py -y -m 200 -s 600 -o er -c genomes/human.hg19.genome sample_beds.txt """ Explanation: In this tutorial, we'll walk through downloading and preprocessing the compendium of ENCODE and Epigenomics Roadmap data. This part won't be very iPython tutorial-ly... First cd in the terminal over to the data directory and run the script get_dnase.sh. That will download all of the BED files from ENCODE and Epigenomics Roadmap. Read the script to see where I'm getting those files from. Perhaps there will be more in the future, and you'll want to manipulate the links. Once that has finished, we need to merge all of the BED files into one BED and an activity table. I typically use the -y option to avoid the Y chromosome, since I don't know which samples sequenced male or female cells. I'll use my default of extending the sequences to 600 bp, and merging sites that overlap by more than 200 bp. But you might want to edit these. End of explanation """ !bedtools getfasta -fi ../data/genomes/hg19.fa -bed ../data/er.bed -s -fo ../data/er.fa """ Explanation: To convert the sequences to the format needed by Torch, we'll first convert to FASTA. End of explanation """ !seq_hdf5.py -c -t 71886 -v 70000 ../data/er.fa ../data/er_act.txt ../data/er.h5 """ Explanation: Finally, we convert to HDF5 for Torch and set aside some data for validation and testing. -r permutes the sequences. -c informs the script we're providing raw counts. -v specifies the size of the validation set. -t specifies the size of the test set. End of explanation """
Small-Bodies-Node/pds4-python
notebooks/spectrum-example-hyakutake.ipynb
bsd-3-clause
from urllib.request import urlretrieve # to download the data from pds4_tools import pds4_read # to read and inspect the data and metadata import matplotlib.pyplot as plt # for plotting # for plotting in Jupyter notebooks %matplotlib notebook # Download data from PDS SBN label_fn, headers = urlretrieve('https://pdssbn.astro.umd.edu/holdings/pds4-gbo-kpno:hyakutake_spectra-v1.0/data/offset_0_arcsec.xml', filename='offset_0_arcsec.xml') table_fn, headers = urlretrieve('https://pdssbn.astro.umd.edu/holdings/pds4-gbo-kpno:hyakutake_spectra-v1.0/data/offset_0_arcsec.tab', filename='offset_0_arcsec.tab') # Read in the label (meta-data) and data. If the data file is saved with the correct file name, # then reading in the label with pds4_read will also read in the data. data = pds4_read(label_fn) # The data is a table named "Spectrum". Print out a summary of the data, including field (i.e., # column) names. print() data['Spectrum'].info() # Plot the spectrum, including automatic labeling of the axis units. fig, ax = plt.subplots() ax.plot(data['Spectrum']['Wavelength'], data['Spectrum']['Flux Density']) labels = plt.setp(ax, xlabel='Wavelength ({})'.format(data['Spectrum']['Wavelength'].meta_data['unit']), ylabel='Flux density ({})'.format(data['Spectrum']['Flux Density'].meta_data['unit'])) plt.show() plt.savefig('spectrum-example-hyakutake.png') """ Explanation: Examine a spectrum of comet C/1996 B2 (Hyakutake) For this exercise, we will examine a spectrum of comet C/1996 B2 (Hyakutake) taken from Kitt Peak. The data are archived at the PDS Small Bodies Node: "Spectra of C/1996 B2 (Hyakutake) for Multiple Offsets from Photocenter" by A'Hearn et al. (2015), urn:nasa:pds:gbo-kpno:hyakutake_spectra::1.0. The data are available at: https://pdssbn.astro.umd.edu/holdings/pds4-gbo-kpno:hyakutake_spectra-v1.0/SUPPORT/dataset.html. The data set consists of a series of spectra with high-spectral resolving power between 3040 to 4500 Å at multiple offsets from the nucleus. 
The data are text-based tables with PDS4 labels. The following code is written for Python 3. The short-short version: Download, read, inspect, and plot the spectrum End of explanation """ from urllib.request import urlretrieve label_fn, headers = urlretrieve('https://pdssbn.astro.umd.edu/holdings/pds4-gbo-kpno:hyakutake_spectra-v1.0/data/offset_0_arcsec.xml', filename='offset_0_arcsec.xml') table_fn, headers = urlretrieve('https://pdssbn.astro.umd.edu/holdings/pds4-gbo-kpno:hyakutake_spectra-v1.0/data/offset_0_arcsec.tab', filename='offset_0_arcsec.tab') print('Downloaded label and table:', label_fn, table_fn) """ Explanation: Detailed example with PDS4 label inspection Download offset = 0 data from SBN End of explanation """ from pds4_tools import pds4_read data = pds4_read(label_fn) """ Explanation: Use PDS4 Python tools to read in the label and data End of explanation """ for e in data.label.findall('Observation_Area/*'): print(e.tag) # note, this text description is presented exactly as it is formatted in the label print(data.label.find('Observation_Area/comment').text) print(data.label.find('Observation_Area/Time_Coordinates/start_date_time').text) print(data.label.find('Observation_Area/Time_Coordinates/stop_date_time').text) """ Explanation: Inspect the label: The &lt;Observation_Area&gt; class The <Observation_Area> class describes the overall parameters of observational data (target, observing instrument, UTC, etc.). It is required in observational products. Below we print: * the immediate children of the &lt;Observation_Area&gt; tag, * the start and stop times of the data set, and * the names of the observing system components used to create the data. End of explanation """ names = data.label.findall('Observation_Area/Observing_System/Observing_System_Component/name') for names in names: print('*', names.text) """ Explanation: The &lt;Observing_System&gt; class documents the significant pieces of the observing equipment. 
It is used, for example, to associate instruments, spacecraft, or telescopes with the product.  A user unfamiliar with the source of the data would discover these components via:
End of explanation
"""
data.info()
# The data is a table named "Spectrum".  Print out a summary of the data, including field (i.e.,
# column) names.
import textwrap

data['Spectrum'].info()

print()
print('Field descriptions:')
for k in ['Wavelength', 'Flux Density']:
    desc = data['Spectrum'][k].meta_data['description']

    # remove leading and trailing whitespace, tabs and newlines
    desc = desc.strip()
    desc = desc.replace('\n', ' ').replace('\t', '')

    # re-wrap the description with an indent and "bullet"
    print(textwrap.fill(desc, initial_indent='* ', subsequent_indent='  '))
"""
Explanation: Inspect the data structure
The file format can be discovered via the archive documentation or the label.  In this case, it is a fixed-width text-based table.  However, this knowledge is not critical beforehand if one is using the PDS4 tools.  Product metadata is read in by the module, and can be inspected to help the user understand the data structure:
End of explanation
"""
import matplotlib.pyplot as plt

# to display plots inline in Jupyter notebook:
%matplotlib notebook

# Create a new figure and axis
fig, ax = plt.subplots()

# Plot the spectrum
ax.plot(data['Spectrum']['Wavelength'], data['Spectrum']['Flux Density'])

# Note the automatic labeling of the axis units via metadata
labels = plt.setp(ax, xlabel='Wavelength ({})'.format(data['Spectrum']['Wavelength'].meta_data['unit']),
                  ylabel='Flux density ({})'.format(data['Spectrum']['Flux Density'].meta_data['unit']))
plt.show()
"""
Explanation: Plot the spectrum with matplotlib
End of explanation
"""
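Because a PDS4 label is ordinary XML, the style of path lookups used above with pds4_tools can also be sketched with nothing but Python's standard library. The label below is a tiny hand-made stand-in with placeholder values — real PDS4 labels carry XML namespaces, which pds4_tools resolves for you — so treat this purely as a structural illustration:

```python
import xml.etree.ElementTree as ET

# A hand-made stand-in label with placeholder values -- not actual archive metadata.
label_xml = '''
<Product_Observational>
  <Observation_Area>
    <Time_Coordinates>
      <start_date_time>1996-01-01T00:00:00Z</start_date_time>
      <stop_date_time>1996-01-01T00:05:00Z</stop_date_time>
    </Time_Coordinates>
    <Observing_System>
      <Observing_System_Component><name>Example Telescope</name></Observing_System_Component>
      <Observing_System_Component><name>Example Spectrograph</name></Observing_System_Component>
    </Observing_System>
  </Observation_Area>
</Product_Observational>
'''

root = ET.fromstring(label_xml)

# The same style of path lookups as data.label.find / findall above:
start = root.find('Observation_Area/Time_Coordinates/start_date_time').text
component_names = [n.text for n in root.findall(
    'Observation_Area/Observing_System/Observing_System_Component/name')]

print(start)             # 1996-01-01T00:00:00Z
print(component_names)   # ['Example Telescope', 'Example Spectrograph']
```

For real labels, pds4_tools remains the right tool, since it also validates the structures and attaches the table data; the point here is only that the label itself is inspectable XML.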
mathnathan/notebooks
dissertation/.ipynb_checkpoints/tests_for_colloquium-checkpoint.ipynb
mit
p = GMM([1.0], np.array([[0.5,0.05]])) num_samples = 1000 beg = 0.0 end = 1.0 t = np.linspace(beg,end,num_samples) num_neurons = len(p.pis) colors = [np.random.rand(num_neurons,) for i in range(num_neurons)] p_y = p(t) p_max = p_y.max() np.random.seed(110) num_neurons = 1 network = Net(1,1,num_neurons, bias=0.0006, decay=[0.05], kernels=[[1,1]], locs=[[0,0]], sleep_cycle=2000) samples, labels = p.sample(10000) ys = [] lbls = [] colors = [np.random.rand(3,) for i in range(num_neurons)] def f(i=0): x = np.array(samples[i]) l = labels[i] y = network(x.reshape(1,1,1)) ys.append(y) c = 'b' if l else 'g' lbls.append(c) fig, ax = plt.subplots(figsize=(15,5)) ax.plot(t, p_y/p_max, c='r', lw=3, label='$p(x)$') ax.plot([x,x],[0,p_max],label="$x\sim p(x)$", lw=4) y = network(t.reshape(num_samples,1,1),update=0) for j,yi in enumerate(y): yj_max = y[j].max() ax.plot(t, y[j]/yj_max, c=colors[j], lw=3, label="$q(x)$") ax.set_ylim(0.,1.5) ax.set_xlim(beg,end) plt.savefig('for_colloquium/fig%03i.png'%(i)) plt.show() interactive_plot = interactive(f, i=(0, 9999)) output = interactive_plot.children[-1] output.layout.height = '450px' interactive_plot [n.weights for n in list(network.neurons.items())[0][1]] [np.sqrt(n.bias) for n in list(network.neurons.items())[0][1]] [n.pi for n in list(network.neurons.items())[0][1]] """ Explanation: This entire theory is built on the idea that everything is normalized as input into the brain. i.e. all values are between 0 and 1. This is necessary because the learning rule has an adaptive learning rate that is $\sigma^4$. 
If everything is normalized, the probability of $\sigma^2$ being greater than 1 is very low
End of explanation
"""
def s(x):
    return (1/(1+np.exp(-10*(x-0.25))))
x = np.linspace(0,1,100)
plt.plot(x,s(x))
plt.show()
"""
Explanation: I can assume $q(x)$ has two forms
$$q(x) = \frac{1}{\sqrt{2 \pi \sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
or
$$q(x) = \exp\left(-\frac{(x-\mu)^2}{\sigma^2}\right)$$
When I assume the second form and remove the extra $\sigma$ term from the learning equations, it no longer converges smoothly. However, if I add an 'astrocyte' to normalize all of them periodically by averaging over the output, it works again. Perhaps astrocytes 'normalizing' the neurons is the biological mechanism for keeping the output roughly normal.
End of explanation
"""
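The normalization claim above can in fact be made sharper: data confined to $[0, 1]$ has variance at most $1/4$, since Popoviciu's inequality bounds the variance of $[a, b]$-valued data by $(b-a)^2/4$. So for normalized inputs, $\sigma^2 > 1$ is not merely improbable but impossible. A quick standalone numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform draws, plus an extreme bimodal case with mass piled near 0 and 1
# (close to the worst case for variance on [0, 1]):
for x in (rng.random(10_000), rng.beta(0.1, 0.1, 10_000)):
    print(f"sample variance: {x.var():.4f}")
    assert x.var() <= 0.25  # Popoviciu bound for [0, 1]-valued data
```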
prk327/CoAca
6_Grouping_and_Summarising.ipynb
gpl-3.0
# Loading libraries and files
import numpy as np
import pandas as pd

market_df = pd.read_csv("../global_sales_data/market_fact.csv")
customer_df = pd.read_csv("../global_sales_data/cust_dimen.csv")
product_df = pd.read_csv("../global_sales_data/prod_dimen.csv")
shipping_df = pd.read_csv("../global_sales_data/shipping_dimen.csv")
orders_df = pd.read_csv("../global_sales_data/orders_dimen.csv")
"""
Explanation: Grouping and Summarising Dataframes
Grouping and aggregation are some of the most frequently used operations in data analysis, especially while doing exploratory data analysis (EDA), where comparing summary statistics across groups of data is common.
For example, in the retail sales data we are working with, you may want to compare the average sales of various regions, or compare the total profit of two customer segments.
Grouping analysis can be thought of as having three parts:
1. Splitting the data into groups (e.g. groups of customer segments, product categories, etc.)
2. Applying a function to each group (e.g. mean or total sales of each customer segment)
3. Combining the results into a data structure showing the summary statistics
Let's work through some examples.
End of explanation
"""
# Merging the dataframes one by one
df_1 = pd.merge(market_df, customer_df, how='inner', on='Cust_id')
df_2 = pd.merge(df_1, product_df, how='inner', on='Prod_id')
df_3 = pd.merge(df_2, shipping_df, how='inner', on='Ship_id')
master_df = pd.merge(df_3, orders_df, how='inner', on='Ord_id')

master_df.head()
"""
Explanation: Say you want to understand how well or poorly the business is doing in various customer segments, regions, product categories etc. Specifically, you want to identify areas of business where you are incurring heavy losses, and want to take action accordingly. To do that, we will answer questions such as:
* Which customer segments are the least profitable?
* Which product categories and sub-categories are the least profitable?
* Customers in which geographic region cause the most losses? * Etc. First, we will merge all the dataframes, so we have all the data in one master_df. End of explanation """ # Which customer segments are the least profitable? # Step 1. Grouping: First, we will group the dataframe by customer segments df_by_segment = master_df.groupby('Customer_Segment') df_by_segment """ Explanation: Step 1. Grouping using df.groupby() Typically, you group the data using a categorical variable, such as customer segments, product categories, etc. This creates as many subsets of the data as there are levels in the categorical variable. For example, in this case, we will group the data along Customer_Segment. End of explanation """ # Step 2. Applying a function # We can choose aggregate functions such as sum, mean, median, etc. df_by_segment['Profit'].sum() """ Explanation: Note that df.groupby returns a DataFrameGroupBy object. Step 2. Applying a Function After grouping, you apply a function to a numeric variable, such as mean(Sales), sum(Profit), etc. End of explanation """ # Alternatively df_by_segment.Profit.sum() """ Explanation: Notice that we have indexed the Profit column in the DataFrameGroupBy object exactly as we index a normal column in a dataframe. Alternatively, you could also use df_by_segment.Profit. End of explanation """ # For better readability, you may want to sort the summarised series: df_by_segment.Profit.sum().sort_values(ascending = False) """ Explanation: So this tells us that profits are the least in the CONSUMER segment, and highest in the CORPORATE segment. End of explanation """ # Converting to a df pd.DataFrame(df_by_segment['Profit'].sum()) # Let's go through some more examples # E.g.: Which product categories are the least profitable? # 1. Group by product category by_product_cat = master_df.groupby('Product_Category') # 2. This time, let's compare average profits # Apply mean() on Profit by_product_cat['Profit'].mean() """ Explanation: Step 3. 
Combining the results into a Data Structure You can optionally show the results as a dataframe. End of explanation """ # E.g.: Which product categories and sub-categories are the least profitable? # 1. Group by category and sub-category by_product_cat_subcat = master_df.groupby(['Product_Category', 'Product_Sub_Category']) by_product_cat_subcat['Profit'].mean() """ Explanation: FURNITURE is the least profitable, TECHNOLOGY the most. Let's see which product sub-cetgories within FURNITURE are less profitable. End of explanation """ # Recall the df.describe() method? # To apply multiple functions simultaneously, you can use the describe() function on the grouped df object by_product_cat['Profit'].describe() # Some other summary functions to apply on groups by_product_cat['Profit'].count() by_product_cat['Profit'].min() # E.g. Customers in which geographic region are the least profitable? master_df.groupby('Region').Profit.mean() # Note that the resulting object is a Series, thus you can perform vectorised computations on them # E.g. Calculate the Sales across each region as a percentage of total Sales # You can divide the entire series by a number (total sales) easily (master_df.groupby('Region').Sales.sum() / sum(master_df['Sales']))*100 """ Explanation: Thus, within FURNITURE, TABLES are the least profitable, followed by BOOKCASES. End of explanation """
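Besides describe(), several aggregates can also be requested in a single pass with .agg, following the same split-apply-combine pattern discussed above. A self-contained sketch on toy data (the column names here are illustrative, not part of the sales schema):

```python
import pandas as pd

toy = pd.DataFrame({
    'segment': ['CONSUMER', 'CORPORATE', 'CONSUMER', 'CORPORATE'],
    'profit':  [10, 40, -5, 25],
})

# Split by segment, apply three functions at once, combine into a dataframe:
summary = toy.groupby('segment')['profit'].agg(['count', 'sum', 'mean'])
print(summary)
# CONSUMER:  count 2, sum  5, mean  2.5
# CORPORATE: count 2, sum 65, mean 32.5
```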
tensorflow/docs-l10n
site/zh-cn/federated/tutorials/simulations.ipynb
apache-2.0
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated-nightly
!pip install --quiet --upgrade nest-asyncio

import nest_asyncio
nest_asyncio.apply()
import collections
import time

import tensorflow as tf
import tensorflow_federated as tff

source, _ = tff.simulation.datasets.emnist.load_data()


def map_fn(example):
  return collections.OrderedDict(
      x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])


def client_data(n):
  ds = source.create_tf_dataset_for_client(source.client_ids[n])
  return ds.repeat(10).shuffle(500).batch(20).map(map_fn)


train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec


def model_fn():
  model = tf.keras.models.Sequential([
      tf.keras.layers.InputLayer(input_shape=(784,)),
      tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
      tf.keras.layers.Softmax(),
  ])
  return tff.learning.from_keras_model(
      model,
      input_spec=element_spec,
      loss=tf.keras.losses.SparseCategoricalCrossentropy(),
      metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])


trainer = tff.learning.build_federated_averaging_process(
    model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))


def evaluate(num_rounds=10):
  state = trainer.initialize()
  for _ in range(num_rounds):
    t1 = time.time()
    state, metrics = trainer.next(state, train_data)
    t2 = time.time()
    print('metrics {m}, round time {t:.2f} seconds'.format(
        m=metrics, t=t2 - t1))
"""
Explanation: High-performance simulations with TFF
This tutorial will describe how to set up high-performance simulations with TFF in a variety of common scenarios.
TODO(b/134543154): Populate the content, some of the things to cover here:
- using GPUs in a single-machine setup,
- multi-machine setup on GCP/GKE, with and without TPUs,
- interfacing MapReduce-like backends,
- current limitations and when/how they will be relaxed.
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/federated/tutorials/simulations"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/simulations.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/simulations.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/federated/tutorials/simulations.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>

Before we begin
First, make sure your notebook is connected to a backend that has the relevant components (including the gRPC dependencies for multi-machine scenarios) compiled.
Now, let's begin by loading the MNIST example from the TFF website and declaring a Python function that will run a small experiment loop over a group of 10 clients.
End of explanation
"""
evaluate()
"""
Explanation: Single-machine simulations
Now enabled by default.
End of explanation
"""
DS-100/sp17-materials
sp17/disc/disc12/disc12.ipynb
gpl-3.0
import numpy as np import matplotlib %matplotlib inline import matplotlib.pyplot as plt import ds100 """ Explanation: K-means End of explanation """ np.random.seed(13337) c1 = np.random.randn(25, 2) c2 = np.array([2, 8]) + np.random.randn(25, 2) c3 = np.array([8, 4]) + np.random.randn(25, 2) x1 = np.vstack((c1, c2, c3)) g1 = np.repeat([0, 1, 2], 25) ds100.scatter2d_grouped(x1, g1) """ Explanation: Vanilla Example The setting: three groups, completely separated with the variables given, same number of points per group, and same variance in each group. A classic example for K-means. End of explanation """ example1 = ds100.kmeans(x1, 3) example1.run() example1.plot() """ Explanation: Let's just run this algorithm as a black box for the moment to see how reasonably it performs. End of explanation """ manual_centers = np.array([[0,1], [1,1], [2,2]]) example2 = ds100.kmeans(x1, k = 3, centers = manual_centers) example2.plot(colored=False) """ Explanation: Dang, looks pretty good up to label permutation. With such promising results, we should pry under the hood a little. We discover that K-means can be described as follows: Provide initial coordinates for K cluster centers Update cluster assignments Update cluster centers Repeat 2 and 3 until satisfied Initialize k Cluster Centers This seems fairly innocuous. There seems to be a couple of ways for basic k-means to start off: Randomly pick k data points. The kmeans object included in the ds100 module does this by default. Manually enter k cluster centers. This can be specified with the centers argument when instantiating the kmeans object. What do you think? When would you pick one method over another? Notation: For the remainder of the discussion, we'll refer to the K clusters as $C_1, C_2, ..., C_K$ and their specific coordinates as $c_1, c_2, ..., c_K$. Update Cluster Assignments How do we measure "closeness"? With k-means, we use the square euclidean distance. 
Formally, for any two points $x$ and $c$, each a vector with $p$ coordinates (for the $p$ features), we can write this "dissimilarity" as:
$$d(x, c) = \lVert x-c \rVert_2^2 = \sum_{j=1}^p (x_j - c_j)^2$$
With this measure, assign each of the $n$ data points, $x_i$, $i \in \{1, 2, 3, ..., n\}$, to the cluster $C_k$ that is closest to it.
It turns out that this "rule" isn't well-defined. When is there ambiguity? How do you propose we fix this?
Update Cluster Centers
Now that the cluster assignments have changed, we need to find their centers. This is a straightforward calculation. For each cluster, we just take the average of all the points assigned to that cluster:
$$c_k = \frac{1}{|C_k|}\sum_{i \in C_k} x_i$$
where $|C_k|$ is the size of the cluster.
Repeat until satisfied
Sounds good, but when should we be satisfied? And does our gratification come in a single lifetime?
Based on the two update steps above, what stopping criteria would you suggest?
Exploration: Does initialization matter?
We're in the same setting as before, but we've manually entered some initial cluster centers. They look pretty bad, but maybe k-means can salvage the situation.
End of explanation
"""
example2._update_clusters()
example2.show_clusters()
example2.plot()
"""
Explanation: We'll run the algorithm one step at a time to see what happens. First, let's assign clusters.
End of explanation
"""
example2._update_centers()
example2.show_centers()
example2.plot()
"""
Explanation: Now to update the cluster centers. It seems like at least one center is moving in a reasonable direction.
End of explanation
"""
example2._update_clusters()
example2.show_clusters()
example2.plot()
"""
Explanation: Continuing with a second cluster assignment... Doesn't look like much has changed.
End of explanation """ example2 = ds100.kmeans(x1, k = 3, centers = manual_centers) example2.run() example2.summary() example2.plot() """ Explanation: Letting k-means run on its own reveals that we could have run this for one more step, but it still stops in a pretty bad place. So indeed, k-means can fail to find a global optimum if it is seeded with a bad start. End of explanation """ x3 = np.genfromtxt('example3.csv', delimiter=',') x3, g3 = x3[:,:2], x3[:,2] plt.scatter(x3[:,0], x3[:,1], color=plt.cm.Dark2(.5)) """ Explanation: Why k-means? Despite its shortcomings, we should talk about its advantages. Good intro to clustering. Easy to explain, easy to implement. It's fast. Sometimes you only want rough groups. It's simple --- only one easy-to-understand parameter to choose. Can be modified to be more robust. Can you spot things that could be changed? With that said, let's build some more intuition behind what the algorithm is doing. Remember from lecture that the k-means objective function can be written as: $$argmin_{C_1,...,C_K} \sum_{k=1}^K\sum_{i \in C_k} d(x_i, c_k) = argmin_{C_1,...,C_K} \sum_{k=1}^K\sum_{i \in C_k} \lVert x_i - c_k \rVert_2^2$$ In words: find the cluster assignments such that the sum of squares within clusters is minimized. Imagine drawing squares at each data point where one vertex is on the data point and the other is on its cluster center. Add up the area of all those squares. That is what we're trying to minimize by shuffling the data around to different clusters. Implicit Preference 1 Consider the following data. Do you see any "natural" groupings? End of explanation """ ds100.scatter2d_grouped(x3, g3) """ Explanation: Most people would pick out the following pattern. Doesn't seem too unreasonable. End of explanation """ example3 = ds100.kmeans(x3, 2) example3.run() example3.plot() """ Explanation: It turns out that k-means will pick something completely different. That pokeball though... 
End of explanation """ r = np.sqrt(x3[:,0]**2 + x3[:,1]**2) theta = np.arctan(x3[:,0] / x3[:,1]) x3_xformed = np.hstack((r[:, np.newaxis], theta[:, np.newaxis])) example3xf = ds100.kmeans(x3_xformed, 2) example3xf.run() example3xf.plot() """ Explanation: So what's happening here? Remember k-mean's objective function: minimize the sum of squares within clusters. Placing both centers at the origin and assigning the "natural" clusters would produce one "tight" cluster with small squares, but this is heavily overshadowed by the large squares resulting from the data points on the outer ring. In other words, k-means prefers clusters that are "separate balls of points". Aside: This particular situation can actually be salvaged with k-means if we want to recover the "natural" clusters by transforming the data to polar coordinates. End of explanation """ ds100.scatter2d_grouped(x3, example3xf.clusters) """ Explanation: Transforming back to cartesian coordinates: End of explanation """ c1 = 0.5 * np.random.randn(25, 2) c2 = np.array([10, 10]) + 3*np.random.randn(475, 2) x4 = np.vstack((c1, c2)) g4 = np.repeat([0, 1], [25, 475]) ds100.scatter2d_grouped(x4, g4) """ Explanation: Implicit Preference 2 Consider the data below. There are two groups of different sizes in two different senses. The smaller group has both smaller variability and is less numerous. The larger of the two groups is more diffuse and populated. What do you think happens when we run k-means and why? End of explanation """ example4 = ds100.kmeans(x4, 2) example4.run() example4.plot() """ Explanation: Oi, it looks like it split up the larger group. Again this is all due to the nature of the objective function. k-means, in its quest for tightness, will happily split big clouds to minimize the sum of squares. 
End of explanation """ smart_centers = [[0, 0], [10, 10]] example4 = ds100.kmeans(x4, 2, centers = smart_centers) example4.run() example4.plot() """ Explanation: Even with the true centers of the data generating process chosen, we still observe the k-means really wants to leech points off the large cluster. End of explanation """ c1 = 0.5 * np.random.randn(250, 2) c2 = np.array([10, 10]) + 3*np.random.randn(250, 2) x5 = np.vstack((c1, c2)) g5 = np.repeat([0, 1], [250, 250]) ds100.scatter2d_grouped(x5, g5) example5 = ds100.kmeans(x5, 2) example5.run() example5.plot() """ Explanation: It's worth noting that this is mitigated if the different clusters are of the same size. The inertial mass of the data keeps the cluster center from moving too far away. Notice the outlier point that does get swallowed up in the orbit of the bottom-left cloud though. End of explanation """ c1 = np.random.multivariate_normal([-1.5,0], [[.5,0],[0,4]], 100) c2 = np.random.multivariate_normal([1.5,0], [[.5,0],[0,4]], 100) c3 = np.random.multivariate_normal([0, 6], [[4,0],[0,.5]], 100) x6 = np.vstack((c1, c2, c3)) g6 = np.repeat([0, 1, 2], 100) ds100.scatter2d_grouped(x6, g6) """ Explanation: Implicit Preference 3 Let's take a look at this data. Qualitatively, what are some properties of the groups? End of explanation """ example6 = ds100.kmeans(x6, 3) example6.run() example6.plot() example6 = ds100.kmeans(x6, 3) example6.run() example6.plot() example6 = ds100.kmeans(x6, 3) example6.run() example6.plot() """ Explanation: There are two groups with more variability in the vertical direction than the horizontal and one group where the opposite is true. Is this an issue for k-means? If so, what do you think is the root cause? End of explanation """
Kaggle/learntools
notebooks/deep_learning/raw/ex4_transfer_learning.ipynb
apache-2.0
# Set up code checking from learntools.core import binder binder.bind(globals()) from learntools.deep_learning.exercise_4 import * print("Setup Complete") """ Explanation: Exercise Introduction The cameraman who shot our deep learning videos mentioned a problem that we can solve with deep learning. He offers a service that scans photographs to store them digitally. He uses a machine that quickly scans many photos. But depending on the orientation of the original photo, many images are digitized sideways. He fixes these manually, looking at each photo to determine which ones to rotate. In this exercise, you will build a model that distinguishes which photos are sideways and which are upright, so an app could automatically rotate each image if necessary. If you were going to sell this service commercially, you might use a large dataset to train the model. But you'll have great success with even a small dataset. You'll work with a small dataset of dog pictures, half of which are rotated sideways. Specifying and compiling the model look the same as in the example you've seen. But you'll need to make some changes to fit the model. Run the following cell to set up automatic feedback. End of explanation """ from tensorflow.keras.applications import ResNet50 from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D num_classes = ____ resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5' my_new_model = Sequential() my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path)) my_new_model.add(Dense(num_classes, activation='softmax')) # Indicate whether the first layer should be trained/changed or not. 
my_new_model.layers[0].trainable = ____ # Check your answer step_1.check() #%%RM_IF(PROD)%% num_classes = 2 resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5' my_new_model = Sequential() my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path)) my_new_model.add(Dense(num_classes, activation='softmax')) # Indicate whether the first layer should be trained/changed or not. my_new_model.layers[0].trainable = False step_1.assert_check_passed() # step_1.hint() # step_1.solution() """ Explanation: 1) Specify the Model Since this is your first time, we'll provide some starter code for you to modify. You will probably copy and modify code the first few times you work on your own projects. There are some important parts left blank in the following code. Fill in the blanks (marked with ____) and run the cell End of explanation """ my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy']) """ Explanation: 2) Compile the Model You now compile the model with the following line. Run this cell. End of explanation """ # Check your answer (Run this code cell to receive credit!) step_2.solution() """ Explanation: That ran nearly instantaneously. Deep learning models have a reputation for being computationally demanding. Why did that run so quickly? After thinking about this, check your answer by uncommenting the cell below. End of explanation """ # Check your answer (Run this code cell to receive credit!) step_3.solution() """ Explanation: 3) Review the Compile Step You provided three arguments in the compile step. - optimizer - loss - metrics Which arguments could affect the accuracy of the predictions that come out of the model? After you have your answer, run the cell below to see the solution. 
End of explanation """ from tensorflow.keras.applications.resnet50 import preprocess_input from tensorflow.keras.preprocessing.image import ImageDataGenerator image_size = 224 data_generator = ImageDataGenerator(preprocess_input) train_generator = data_generator.flow_from_directory( directory=____, target_size=(image_size, image_size), batch_size=10, class_mode='categorical') validation_generator = data_generator.flow_from_directory( directory=____, target_size=(image_size, image_size), class_mode='categorical') # fit_stats below saves some statistics describing how model fitting went # the key role of the following line is how it changes my_new_model by fitting to data fit_stats = my_new_model.fit_generator(train_generator, steps_per_epoch=____, validation_data=____, validation_steps=1) # Check your answer step_4.check() # step_4.solution() #%%RM_IF(PROD)%% from tensorflow.keras.applications.resnet50 import preprocess_input from tensorflow.keras.preprocessing.image import ImageDataGenerator image_size = 224 data_generator = ImageDataGenerator(preprocess_input) train_generator = data_generator.flow_from_directory( directory='../input/dogs-gone-sideways/images/train', target_size=(image_size, image_size), batch_size=10, class_mode='categorical') validation_generator = data_generator.flow_from_directory( directory='../input/dogs-gone-sideways/images/val', target_size=(image_size, image_size), class_mode='categorical') # fit_stats below saves some statistics describing how model fitting went # the key role of the following line is how it changes my_new_model by fitting to data fit_stats = my_new_model.fit_generator(train_generator, steps_per_epoch=22, validation_data=validation_generator, validation_steps=1) step_4.assert_check_passed() """ Explanation: 4) Fit Model Your training data is in the directory ../input/dogs-gone-sideways/images/train. The validation data is in ../input/dogs-gone-sideways/images/val. 
Use that information when setting up train_generator and validation_generator. You have 220 images of training data and 217 of validation data. For the training generator, we set a batch size of 10. Figure out the appropriate value of steps_per_epoch in your fit_generator call. Fill in all the blanks (again marked as ____). Then run the cell of code. Watch as your model trains the weights and the accuracy improves. End of explanation """
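For reference, the steps_per_epoch arithmetic in the exercise above is just the number of training images divided by the batch size — each step consumes one batch, and one epoch should see every image once:

```python
import math

num_train_images = 220   # from the exercise text
batch_size = 10          # set on train_generator above

steps_per_epoch = math.ceil(num_train_images / batch_size)
print(steps_per_epoch)   # 22, matching the value used in the solution cell
```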
cxhernandez/msmbuilder
examples/advanced/hmm-and-msm.ipynb
lgpl-2.1
from __future__ import print_function
import os
%matplotlib inline
from matplotlib.pyplot import *
from msmbuilder.featurizer import SuperposeFeaturizer
from msmbuilder.example_datasets import AlanineDipeptide
from msmbuilder.hmm import GaussianHMM
from msmbuilder.cluster import KCenters
from msmbuilder.msm import MarkovStateModel
"""
Explanation: This example builds HMM and MSMs on the alanine_dipeptide dataset using varying lag times and numbers of states, and compares the relaxation timescales
End of explanation
"""
print(AlanineDipeptide.description())

dataset = AlanineDipeptide().get()
trajectories = dataset.trajectories
topology = trajectories[0].topology

indices = [atom.index for atom in topology.atoms
           if atom.element.symbol in ['C', 'O', 'N']]
featurizer = SuperposeFeaturizer(indices, trajectories[0][0])
sequences = featurizer.transform(trajectories)
"""
Explanation: First: load and "featurize"
Featurization refers to the process of converting the conformational snapshots from your MD trajectories into vectors in some space $\mathbb{R}^N$ that can be manipulated and modeled by subsequent analyses. The Gaussian HMM, for instance, uses Gaussian emission distributions, so it models the trajectory as a time-dependent mixture of multivariate Gaussians. In general, the featurization is somewhat of an art.
For this example, we're using MSMBuilder's SuperposeFeaturizer, which superposes each snapshot onto a reference frame (trajectories[0][0] in this example), and then measures the distance from each atom to its position in the reference conformation as the 'feature'
End of explanation
"""
lag_times = [1, 10, 20, 30, 40]
hmm_ts0 = {}
hmm_ts1 = {}
n_states = [3, 5]

for n in n_states:
    hmm_ts0[n] = []
    hmm_ts1[n] = []
    for lag_time in lag_times:
        strided_data = [s[i::lag_time] for s in sequences for i in range(lag_time)]
        hmm = GaussianHMM(n_states=n, n_init=1).fit(strided_data)
        timescales = hmm.timescales_ * lag_time
        hmm_ts0[n].append(timescales[0])
        hmm_ts1[n].append(timescales[1])
        print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales))
    print()

figure(figsize=(14,3))

for i, n in enumerate(n_states):
    subplot(1,len(n_states),1+i)
    plot(lag_times, hmm_ts0[n])
    plot(lag_times, hmm_ts1[n])
    if i == 0:
        ylabel('Relaxation Timescale')
    xlabel('Lag Time')
    title('%d states' % n)

show()
msmts0, msmts1 = {}, {}
lag_times = [1, 10, 20, 30, 40]
n_states = [4, 8, 16, 32, 64]

for n in n_states:
    msmts0[n] = []
    msmts1[n] = []
    for lag_time in lag_times:
        assignments = KCenters(n_clusters=n).fit_predict(sequences)
        msm = MarkovStateModel(lag_time=lag_time, verbose=False).fit(assignments)
        timescales = msm.timescales_
        msmts0[n].append(timescales[0])
        msmts1[n].append(timescales[1])
        print('n_states=%d\tlag_time=%d\ttimescales=%s' % (n, lag_time, timescales[0:2]))
    print()

figure(figsize=(14,3))

for i, n in enumerate(n_states):
    subplot(1,len(n_states),1+i)
    plot(lag_times, msmts0[n])
    plot(lag_times, msmts1[n])
    if i == 0:
        ylabel('Relaxation Timescale')
    xlabel('Lag Time')
    title('%d states' % n)

show()
"""
Explanation: Now sequences is our featurized data.
End of explanation
"""
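The striding expression in the HMM loop above — [s[i::lag_time] for s in sequences for i in range(lag_time)] — splits each trajectory into lag_time interleaved, subsampled trajectories, so that consecutive elements of every new sequence are lag_time frames apart while no frames are thrown away. A toy illustration:

```python
seq = list(range(10))   # stand-in for one featurized trajectory
lag_time = 3

strided = [seq[i::lag_time] for i in range(lag_time)]
print(strided)   # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]

# Every original frame appears in exactly one strided sequence:
assert sorted(frame for s in strided for frame in s) == seq
```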
getsmarter/bda
module_2/M2_NB3_CollectYourOwnData.ipynb
mit
import pandas as pd import numpy as np import matplotlib import os import bandicoot as bc from IPython.display import IFrame %matplotlib inline matplotlib.rcParams['figure.figsize'] = (10, 8) """ Explanation: <div align="right">Python 3.6 Jupyter Notebook</div> Collect your own data Your completion of the notebook exercises will be graded based on your ability to do the following: Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data? Notebook objectives By the end of this notebook, you will be expected to: Manually perform exploratory data analysis on call data; Leverage the Bandicoot module to automate analysis; and Know resources for building your own Funf applications. List of exercises Exercise 1: Using Bandicoot for analysis. Exercise 2: Interpreting calls of zero duration in call records. Notebook introduction This notebook introduces two tools that will be discussed in detail in upcoming video content. You can complete the exercise using the sample dataset or generate your own using the instructions below, if you have access to an Android device. To demonstrate the different lengths of time it takes to gain insights when performing an analysis, you will start to explore the provided dataset (or your own) through a manual analysis cycle, before switching to automated analysis using the Bandicoot framework. You will be introduced to this tool in more detail in Module 5. <div class="alert alert-warning"> <b>Note</b>:<br> It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears. </div> Load libraries and set options End of explanation """ # Load the dataset. 
# You can change the filename to reflect the name of your generated dataset, # if you downloaded the application in the previous step. The default filename # is "metadata.csv". calls = pd.read_csv("data/metadata_sample.csv",parse_dates=['datetime'], index_col=['datetime']) """ Explanation: 1. Collect your own data This notebook begins with an example dataset, on which you will perform similar activities to those demonstrated in Section 1 of Module 2's Notebook 2. You are welcome to share your dataset with fellow students, in cases where they do not have access to android devices, if you are comfortable to do so. Building applications is a separate topic, and you will begin with using another open source project from MIT to process your data in a format that can be utilized for analysis. Bandicoot is an open-source Python toolbox, which analyzes mobile phone metadata. This section demonstrates how it can be used to collect your own data. Additional examples, as well as how Bandicoot is used to analyze mobile phone data, will be demonstrated in Module 5. Important: The demonstration below requires the use of an Android phone. If you do not have access to an Android phone, a file, named "metadata_sample.csv", in the "data" directory under "module_2", has been provided that you can use for your analysis. Bandicoot is not available on Apple phones due to restrictions in the operating system. If you have an Android phone, you can export your own metadata by following these steps: 1. Go to http://bandicoot.mit.edu/android and install the application on your phone; 2. Export your data, email it to yourself, and upload the CSV file to the "data" directory in "module_2" on your virtual analysis environment. 3. You can then complete the example using your own dataset. Note: You can upload files from the directory view in your Jupyter notebook. Ensure that you select the file and then click "upload" to start the upload process. 
1.1 Loading the data First, load the supplied CSV file using additional options in the Pandas read_csv function. It is possible to set the index, and instruct the function to parse the datetime column when loading the file. You can read more about the function in the Pandas documentation. End of explanation """ calls.head(5) """ Explanation: Review the data. End of explanation """ # Add a column where the week is derived from the datetime column. calls['week'] = calls.index.map(lambda observation_timestamp: observation_timestamp.week) # Display the head of the new dataset. calls.head(5) """ Explanation: 1.2 Adding derived features Similarly to the previous notebook, you can add derived features to your dataset, as demonstrated in the example below. End of explanation """ calls.interaction.unique() """ Explanation: 1.3 Display the list of interaction types. This function can be useful when working with large or dirty datasets. End of explanation """ vis = calls.hist() """ Explanation: 1.4 Visualizing your data You can make use of the default options, as demonstrated below, to get a quick overview of possible data visualizations. Alternatively, you can start performing a manual analysis on the data set (demonstrated in the previous notebook). End of explanation """ # Load the input file. U = bc.read_csv("data/metadata_sample", "") # Export the visualization to a new directory, "viz". bc.visualization.export(U, "viz") """ Explanation: 1.5 Manual analysis vs. using Bandicoot While libraries such as Pandas are great at general data wrangling and analysis, for bespoke applications, this method of analysis can be somewhat tedious. In many cases you have to define what it is that you would like to visualize, and then manually complete the steps. This is where Bandicoot comes in. Using a module that has been created specifically to look at a certain type of data (in this example, mobile phone data) can save you a significant amount of time. 
More content about Bandicoot will be provided in Module 5. However, the following section will give you an idea of how powerful these tools are, when used correctly. 1.5.1 Load the input file End of explanation """
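To make the "manual analysis" point above concrete, here is a minimal sketch of the kind of aggregation you would write by hand in Pandas — a per-week breakdown of interactions by type. It assumes the Bandicoot export schema used earlier (an `interaction` column with a parsed datetime index); the records below are synthetic and purely illustrative.

```python
import pandas as pd

# Synthetic stand-in for the "calls" DataFrame loaded from metadata_sample.csv.
records = pd.DataFrame(
    {"interaction": ["call", "text", "call", "call", "text"]},
    index=pd.to_datetime(
        ["2016-01-04", "2016-01-05", "2016-01-12", "2016-01-13", "2016-01-14"]
    ),
)
# Derive the ISO week number, as done in section 1.2 above.
records["week"] = [ts.isocalendar()[1] for ts in records.index]

# One groupby per question you want answered -- the "manual" analysis cycle.
weekly_counts = records.groupby(["week", "interaction"]).size()
print(weekly_counts)
```

Each new question (durations per contact, calls per day, and so on) needs another such step written by hand, which is exactly the tedium a purpose-built toolbox like Bandicoot removes.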
tien-le/kaggle-titanic
Applying Machine Learning Techniques-Regression.ipynb
gpl-3.0
import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns import random """ Explanation: Applying Machine Learning Techniques-Regression Homepage: https://github.com/tien-le/kaggle-titanic Updating later ... End of explanation """ #Training Corpus trn_corpus_after_preprocessing = pd.read_csv("output/trn_corpus_after_preprocessing.csv") #Testing Corpus tst_corpus_after_preprocessing = pd.read_csv("output/tst_corpus_after_preprocessing.csv") #tst_corpus_after_preprocessing[tst_corpus_after_preprocessing["Fare"].isnull()] trn_corpus_after_preprocessing.info() print("-"*36) tst_corpus_after_preprocessing.info() """ Explanation: Load Corpus After Preprocessing ... End of explanation """ trn_corpus_after_preprocessing.columns list_of_non_preditor_variables = ['Survived','PassengerId'] #Method 1 #x_train = trn_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived'] #y_train = trn_corpus_after_preprocessing.ix[:,"Survived"] #Method 2 x_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy() y_train = trn_corpus_after_preprocessing['Survived'].copy() #y_train = trn_corpus_after_preprocessing.iloc[:,-1] #y_train = trn_corpus_after_preprocessing[trn_corpus_after_preprocessing.columns[-1]] #x_train #y_train x_train.columns # check the types of the features and response #print(type(x_train)) #print(type(x_test)) #Method 1 #x_test = tst_corpus_after_preprocessing.ix[:, trn_corpus_after_preprocessing.columns != 'Survived'] #y_test = tst_corpus_after_preprocessing.ix[:,"Survived"] #Method 2 x_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns.difference(list_of_non_preditor_variables)].copy() y_test = tst_corpus_after_preprocessing['Survived'].copy() #y_test = tst_corpus_after_preprocessing.iloc[:,-1] #y_test = tst_corpus_after_preprocessing[tst_corpus_after_preprocessing.columns[-1]] #x_test #y_test 
# display the first 5 rows x_train.head() # display the last 5 rows x_train.tail() # check the shape of the DataFrame (rows, columns) x_train.shape """ Explanation: Basic & Advanced machine learning tools Agenda What is machine learning? What are the two main categories of machine learning? What are some examples of machine learning? How does machine learning "work"? What is machine learning? One definition: "Machine learning is the semi-automated extraction of knowledge from data" Knowledge from data: Starts with a question that might be answerable using data Automated extraction: A computer provides the insight Semi-automated: Requires many smart decisions by a human What are the two main categories of machine learning? Supervised learning: Making predictions using data Example: Is a given email "spam" or "ham"? There is an outcome we are trying to predict Unsupervised learning: Extracting structure from data Example: Segment grocery store shoppers into clusters that exhibit similar behaviors There is no "right answer" How does machine learning "work"? High-level steps of supervised learning: First, train a machine learning model using labeled data "Labeled data" has been labeled with the outcome "Machine learning model" learns the relationship between the attributes of the data and its outcome Then, make predictions on new data for which the label is unknown The primary goal of supervised learning is to build a model that "generalizes": It accurately predicts the future rather than the past! Questions about machine learning How do I choose which attributes of my data to include in the model? How do I choose which model to use? How do I optimize this model for best performance? How do I ensure that I'm building a model that will generalize to unseen data? Can I estimate how well my model is likely to perform on unseen data? 
Benefits and drawbacks of scikit-learn Benefits: Consistent interface to machine learning models Provides many tuning parameters but with sensible defaults Exceptional documentation Rich set of functionality for companion tasks Active community for development and support Potential drawbacks: Harder (than R) to get started with machine learning Less emphasis (than R) on model interpretability Further reading: Ben Lorica: Six reasons why I recommend scikit-learn scikit-learn authors: API design for machine learning software Data School: Should you teach Python or R for data science? Types of supervised learning Classification: Predict a categorical response Regression: Predict an ordered/continuous response Note that each value we are predicting is the response (also known as: target, outcome, label, dependent variable) Model evaluation metrics Regression problems: Mean Absolute Error, Mean Squared Error, Root Mean Squared Error Classification problems: Classification accuracy Load Corpus End of explanation """ from sklearn import tree clf = tree.DecisionTreeClassifier() clf = clf.fit(x_train, y_train) #Once trained, we can export the tree in Graphviz format using the export_graphviz exporter. #Below is an example export of a tree trained on the entire iris dataset: with open("output/titanic.dot", 'w') as f: f = tree.export_graphviz(clf, out_file=f) #Then we can use Graphviz’s dot tool to create a PDF file (or any other supported file type): #dot -Tpdf titanic.dot -o titanic.pdf. 
import os os.unlink('output/titanic.dot') #Alternatively, if we have Python module pydotplus installed, we can generate a PDF file #(or any other supported file type) directly in Python: import pydotplus dot_data = tree.export_graphviz(clf, out_file=None) graph = pydotplus.graph_from_dot_data(dot_data) graph.write_pdf("output/titanic.pdf") #The export_graphviz exporter also supports a variety of aesthetic options, #including coloring nodes by their class (or value for regression) #and using explicit variable and class names if desired. #IPython notebooks can also render these plots inline using the Image() function: """from IPython.display import Image dot_data = tree.export_graphviz(clf, out_file=None, feature_names= list(x_train.columns[1:]), #iris.feature_names, class_names= ["Survived"], #iris.target_names, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png())""" print("accuracy score: ", clf.score(x_test,y_test)) """ Explanation: What are the features? - AgeClass: - AgeClassSquared: - AgeSquared: - ... What is the response? - Survived: 1-Yes, 0-No What else do we know? - Because the response variable is discrete, this is a Classification problem. - Each observation (row) is a single passenger. Note that if the response variable is continuous, this is a regression problem. 
Decision Trees Classification End of explanation """ #After being fitted, the model can then be used to predict the class of samples: y_pred_class = clf.predict(x_test); #Alternatively, the probability of each class can be predicted, #which is the fraction of training samples of the same class in a leaf: clf.predict_proba(x_test); # calculate accuracy from sklearn import metrics print(metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Classification accuracy: percentage of correct predictions End of explanation """ # examine the class distribution of the testing set (using a Pandas Series method) y_test.value_counts() # calculate the percentage of ones y_test.mean() # calculate the percentage of zeros 1 - y_test.mean() # calculate null accuracy (for binary classification problems coded as 0/1) max(y_test.mean(), 1 - y_test.mean()) # calculate null accuracy (for multi-class classification problems) y_test.value_counts().head(1) / len(y_test) """ Explanation: Null accuracy: accuracy that could be achieved by always predicting the most frequent class End of explanation """ # print the first 25 true and predicted responses from __future__ import print_function print('True:', y_test.values[0:25]) print('Pred:', y_pred_class[0:25]) """ Explanation: Comparing the true and predicted response values End of explanation """ # IMPORTANT: first argument is true values, second argument is predicted values print(metrics.confusion_matrix(y_test, y_pred_class)) """ Explanation: Conclusion: ??? 
Classification accuracy is the easiest classification metric to understand But, it does not tell you the underlying distribution of response values And, it does not tell you what "types" of errors your classifier is making Confusion matrix Table that describes the performance of a classification model End of explanation """ # save confusion matrix and slice into four pieces confusion = metrics.confusion_matrix(y_test, y_pred_class) TP = confusion[1, 1] TN = confusion[0, 0] FP = confusion[0, 1] FN = confusion[1, 0] print(TP, TN, FP, FN) """ Explanation: Basic terminology True Positives (TP): we correctly predicted that the passenger survived True Negatives (TN): we correctly predicted that the passenger did not survive False Positives (FP): we incorrectly predicted that the passenger survived (a "Type I error") False Negatives (FN): we incorrectly predicted that the passenger did not survive (a "Type II error") End of explanation """ print((TP + TN) / float(TP + TN + FP + FN)) print(metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Metrics computed from a confusion matrix Classification Accuracy: Overall, how often is the classifier correct? End of explanation """ print((FP + FN) / float(TP + TN + FP + FN)) print(1 - metrics.accuracy_score(y_test, y_pred_class)) """ Explanation: Classification Error: Overall, how often is the classifier incorrect? Also known as "Misclassification Rate" End of explanation """ print(TN / float(TN + FP)) """ Explanation: Specificity: When the actual value is negative, how often is the prediction correct? How "specific" (or "selective") is the classifier in predicting positive instances? End of explanation """ print(FP / float(TN + FP)) """ Explanation: False Positive Rate: When the actual value is negative, how often is the prediction incorrect? 
End of explanation """ print(TP / float(TP + FP)) print(metrics.precision_score(y_test, y_pred_class)) print("Precision: ", metrics.precision_score(y_test, y_pred_class)) print("Recall: ", metrics.recall_score(y_test, y_pred_class)) print("F1 score: ", metrics.f1_score(y_test, y_pred_class)) """ Explanation: Precision: When a positive value is predicted, how often is the prediction correct? How "precise" is the classifier when predicting positive instances? End of explanation """ from sklearn import svm model = svm.LinearSVC() model.fit(x_train, y_train) acc_score = model.score(x_test, y_test) print("Accuracy score: ", acc_score) y_pred_class = model.predict(x_test) from sklearn import metrics confusion_matrix = metrics.confusion_matrix(y_test, y_pred_class) print(confusion_matrix) """ Explanation: Many other metrics can be computed: F1 score, Matthews correlation coefficient, etc. Conclusion: Confusion matrix gives you a more complete picture of how your classifier is performing Also allows you to compute various classification metrics, and these metrics can guide your model selection Which metrics should you focus on? Choice of metric depends on your business objective Spam filter (positive class is "spam"): Optimize for precision or specificity because false negatives (spam goes to the inbox) are more acceptable than false positives (non-spam is caught by the spam filter) Fraudulent transaction detector (positive class is "fraud"): Optimize for sensitivity because false positives (normal transactions that are flagged as possible fraud) are more acceptable than false negatives (fraudulent transactions that are not detected) Support Vector Machine (SVM) Linear Support Vector Classification. Similar to SVC with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples. 
Ref: http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html#sklearn.svm.LinearSVC End of explanation """ from sklearn.neighbors import KNeighborsClassifier from sklearn.svm import SVC from sklearn.gaussian_process import GaussianProcessClassifier from sklearn.gaussian_process.kernels import RBF from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.neural_network import MLPClassifier from sklearn.naive_bayes import GaussianNB from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.datasets import make_classification from sklearn.preprocessing import StandardScaler from matplotlib.colors import ListedColormap #classifiers #x_train #sns.pairplot(x_train) x_train_scaled = StandardScaler().fit_transform(x_train) x_test_scaled = StandardScaler().fit_transform(x_test) x_train_scaled[0] len(x_train_scaled[0]) df_x_train_scaled = pd.DataFrame(columns=x_train.columns, data=x_train_scaled) df_x_train_scaled.head() #sns.pairplot(df_x_train_scaled) names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree", "Random Forest", "Neural Net", "AdaBoost", "Naive Bayes", "QDA", "Gaussian Process"] classifiers = [ KNeighborsClassifier(3), SVC(kernel="linear", C=0.025), SVC(gamma=2, C=1), DecisionTreeClassifier(max_depth=5), RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1), MLPClassifier(alpha=1), AdaBoostClassifier(), GaussianNB(), QuadraticDiscriminantAnalysis() #, GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True), # Take too long... 
] # iterate over classifiers for name, model in zip(names, classifiers): model.fit(x_train_scaled, y_train) acc_score = model.score(x_test_scaled, y_test) print(name, " - accuracy score: ", acc_score) #end for """ Explanation: Classifier comparison http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html A comparison of a several classifiers in scikit-learn on synthetic datasets. The point of this example is to illustrate the nature of decision boundaries of different classifiers. This should be taken with a grain of salt, as the intuition conveyed by these examples does not necessarily carry over to real datasets. Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers. The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set. End of explanation """ from sklearn import tree clf = tree.DecisionTreeRegressor() clf = clf.fit(x_train, y_train) clf.score(x_test,y_test) #clf.predict(x_test) """ Explanation: Decision Tree Regressor Ref: http://scikit-learn.org/stable/modules/tree.html Decision trees can also be applied to regression problems, using the DecisionTreeRegressor class. 
As in the classification setting, the fit method will take as argument arrays X and y, only that in this case y is expected to have floating point values instead of integer values: End of explanation """ from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(x_train, y_train) r_squared = model.score(x_test, y_test) print("R-squared: %.4f" %r_squared) """ Explanation: Random Forests Naive Bayes Simple Linear Regression Recall that Simple Linear Regression is given by the following equation: $y = \alpha + \beta x$ Our goal is to solve the values $\alpha$ and $\beta$ that minimize the cost function. $$\beta = \frac{cov(x,y)}{var(x)}$$ where $var(x)$ measures how far the values of $x$ are spread out, and $cov(x,y)$ measures how much $x$ and $y$ vary together. Note that: * Variance is zero if all of the values are identical * A SMALL variance indicates that the numbers are NEAR the mean of the set * A LARGE variance indicates that the numbers are FAR from the mean of the set $$var(x) = \frac{\sum\limits_{i=1}^{n}{\left( x_i - \overline{x} \right)^2}}{n-1}$$ $$cov(x,y) = \frac{\sum\limits_{i=1}^{n}{\left( x_i - \overline{x} \right)\left( y_i - \overline{y} \right)}}{n-1}$$ Having solved $\beta$, we can estimate $\alpha$ using the following formula: $$\alpha = \overline{y} - \beta \overline{x}$$ Evaluating the Model Using r-squared - which measures how well the observed values of the response variables are predicted by the model. In the case of simple linear regression, r-squared is equal to the square of Pearson's r. In this method, r-squared must be a positive number between zero and one. In other methods, r-squared can return a negative number if the model performs extremely poorly. 
End of explanation """ from sklearn.linear_model import LinearRegression model = LinearRegression() model.fit(x_train, y_train) predictions = model.predict(x_test) #for i in range(predictions.size): # print("Predicted: %.2f, Target: %.2f" %(predictions[i], y_test[i])) r_squared = model.score(x_test, y_test) print("R-squared: %.4f" %r_squared) """ Explanation: Multiple Linear Regression Formally, multiple linear regression is the following model: $$y = \alpha+\beta_1x_1+\beta_2x_2+...+\beta_nx_n$$ or $$Y = X\beta$$ where $Y$ denotes a column vector of the values of the response variables for training, $\beta$ denotes a column vector of the values of the model's parameters, $X$ is called the design matrix, an $m \times n$ dimensional matrix of the values of the features. We can solve $\beta$ as follows: $$\beta = \left( X^TX \right)^{-1}X^TY$$ Note that - code python: python from numpy import dot, transpose from numpy.linalg import inv beta = dot(inv(dot(transpose(X),X)), dot(transpose(X), Y)) End of explanation """ import numpy as np import matplotlib.pyplot as plt from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures model = LinearRegression() model.fit(x_train, y_train) xx = np.linspace(0, 26, 100) #yy = np.linspace(0, 26, 100) #yy = model.predict(xx.reshape(xx.shape[0],1)) #plt.plot(xx, yy) quadratic_featurizer = PolynomialFeatures(degree=2) x_train_quadratic = quadratic_featurizer.fit_transform(x_train) x_test_quadratic = quadratic_featurizer.transform(x_test) x_train.head() model_quadratic = LinearRegression() model_quadratic.fit(x_train_quadratic, y_train) predictions = model_quadratic.predict(x_test_quadratic) r_squared = model_quadratic.score(x_test_quadratic, y_test) print("R-squared: %.4f" %r_squared) """ Explanation: Polynomial Regression Quadratic Regression, regression with a second order polynomial, is given by the following formula: $$y = \alpha +\beta_1x^1+\beta_2x^2$$ End of explanation """
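The normal equation quoted above is easiest to trust after checking it numerically. A small self-contained sketch, recovering known coefficients from noise-free data (the design matrix and coefficients below are made up for illustration):

```python
import numpy as np
from numpy.linalg import inv

# Design matrix with a leading column of ones, so the intercept alpha
# is absorbed into beta as its first component.
X = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 3.0, 3.0],
              [1.0, 4.0, 2.0],
              [1.0, 5.0, 4.0]])
true_beta = np.array([1.0, 2.0, 3.0])  # intercept, beta_1, beta_2
Y = X @ true_beta                      # exact responses, no noise

beta = inv(X.T @ X) @ (X.T @ Y)        # beta = (X^T X)^{-1} X^T Y
print(np.round(beta, 6))               # recovers [1. 2. 3.]
```

With noise-free data the normal equation reproduces the generating coefficients exactly (up to floating-point error); with noisy data it returns the least-squares estimate instead.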
vinhqdang/my_mooc
coursera/advanced_machine_learning_spec/4_nlp/natural-language-processing-master/week1/week1-MultilabelClassification.ipynb
mit
import sys sys.path.append("..") from download_utils import download_week1_resources download_week1_resources() """ Explanation: Predict tags on StackOverflow with linear models In this assignment you will learn how to predict tags for posts from StackOverflow. To solve this task you will use the multilabel classification approach. Libraries In this task you will need the following libraries: - Numpy — a package for scientific computing. - Pandas — a library providing high-performance, easy-to-use data structures and data analysis tools for Python - scikit-learn — a tool for data mining and data analysis. - NLTK — a platform to work with natural language. Data The following cell will download all data required for this assignment into the folder week1/data. End of explanation """ from grader import Grader grader = Grader() """ Explanation: Grading We will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to platform only after running the submitting function in the last part of this assignment. If you want to make a partial submission, you can run that cell any time you want. End of explanation """ import nltk nltk.download('stopwords') from nltk.corpus import stopwords """ Explanation: Text preprocessing For this and most of the following assignments you will need to use a list of stop words. It can be downloaded from nltk: End of explanation """ from ast import literal_eval import pandas as pd import numpy as np def read_data(filename): data = pd.read_csv(filename, sep='\t') data['tags'] = data['tags'].apply(literal_eval) return data train = read_data('data/train.tsv') validation = read_data('data/validation.tsv') test = pd.read_csv('data/test.tsv', sep='\t') train.head() validation.iloc[4]['title'] """ Explanation: In this task you will deal with a dataset of post titles from StackOverflow. You are provided a split into 3 sets: train, validation and test. 
All corpora (except for test) contain titles of the posts and corresponding tags (100 tags are available). The test set is provided for Coursera's grading and doesn't contain answers. Load the corpora using pandas and look at the data: End of explanation """ X_train, y_train = train['title'].values, train['tags'].values X_val, y_val = validation['title'].values, validation['tags'].values X_test = test['title'].values """ Explanation: As you can see, the title column contains titles of the posts and the tags column contains the tags. It could be noticed that a number of tags for a post is not fixed and could be as many as necessary. For a more comfortable usage, initialize X_train, X_val, X_test, y_train, y_val. End of explanation """ import re REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]') BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]') STOPWORDS = set(stopwords.words('english')) def text_prepare(text): """ text: a string return: modified initial string """ text = text.lower() text = REPLACE_BY_SPACE_RE.sub(' ', text) text = BAD_SYMBOLS_RE.sub('', text) text = ' '.join([c for c in text.split() if c not in STOPWORDS]) return text def test_text_prepare(): examples = ["SQL Server - any equivalent of Excel's CHOOSE function?", "How to free c++ memory vector<int> * arr?"] answers = ["sql server equivalent excels choose function", "free c++ memory vectorint arr"] for ex, ans in zip(examples, answers): if text_prepare(ex) != ans: # print (text_prepare(ex)) return "Wrong answer for the case: '%s'" % ex return 'Basic tests are passed.' print(test_text_prepare()) """ Explanation: One of the most known difficulties when working with natural data is that it's unstructured. For example, if you use it "as is" and extract tokens just by splitting the titles by whitespaces, you will see that there are many "weird" tokens like 3.5?, "Flip, etc. To prevent the problems, it's usually useful to prepare the data somehow. 
In this task you'll write a function, which will also be used in the other assignments. Task 1 (TextPrepare). Implement the function text_prepare following the instructions. After that, run the function test_text_prepare to test it on tiny cases and submit it to Coursera. End of explanation """ prepared_questions = [] for line in open('data/text_prepare_tests.tsv', encoding='utf-8'): line = text_prepare(line.strip()) prepared_questions.append(line) text_prepare_results = '\n'.join(prepared_questions) grader.submit_tag('TextPrepare', text_prepare_results) """ Explanation: Run your implementation for questions from file text_prepare_tests.tsv to earn the points. End of explanation """ X_train = [text_prepare(x) for x in X_train] X_val = [text_prepare(x) for x in X_val] X_test = [text_prepare(x) for x in X_test] X_train[:3] y_train[:3] """ Explanation: Now we can preprocess the titles using the function text_prepare, making sure that the headers don't have bad symbols: End of explanation """ # Dictionary of all tags from train corpus with their counts. tags_counts = {} # Dictionary of all words from train corpus with their counts. 
words_counts = {} ###################################### ######### YOUR CODE HERE ############# ###################################### for question in X_train: words = question.split () for word in words: if word in words_counts.keys(): words_counts[word] = words_counts[word] + 1 else: words_counts[word] = 1 for tags in y_train: for tag in tags: if tag in tags_counts.keys(): tags_counts [tag] = tags_counts [tag] + 1 else: tags_counts [tag] = 1 most_common_tags = sorted(tags_counts.items(), key=lambda x: x[1], reverse=True)[:3] most_common_words = sorted(words_counts.items(), key=lambda x: x[1], reverse=True)[:3] grader.submit_tag('WordsTagsCount', '%s\n%s' % (','.join(tag for tag, _ in most_common_tags), ','.join(word for word, _ in most_common_words))) """ Explanation: For each tag and for each word calculate how many times they occur in the train corpus. Task 2 (WordsTagsCount). Find 3 most popular tags and 3 most popular words in the train data and submit the results to earn the points. 
End of explanation """ DICT_SIZE = 5000 common_words = sorted(words_counts.items(), key=lambda x: x[1], reverse=True)[:DICT_SIZE] WORDS_TO_INDEX = {word: i for i, (word, _) in enumerate(common_words)} INDEX_TO_WORDS = {i: word for word, i in WORDS_TO_INDEX.items()} ALL_WORDS = WORDS_TO_INDEX.keys() def my_bag_of_words(text, words_to_index, dict_size): """ text: a string dict_size: size of the dictionary return a vector which is a bag-of-words representation of 'text' """ result_vector = np.zeros(dict_size) ###################################### ######### YOUR CODE HERE ############# ###################################### for word in text.split(): if word in words_to_index: result_vector[words_to_index[word]] += 1 return result_vector def test_my_bag_of_words(): words_to_index = {'hi': 0, 'you': 1, 'me': 2, 'are': 3} examples = ['hi how are you'] answers = [[1, 1, 0, 1]] for ex, ans in zip(examples, answers): if (my_bag_of_words(ex, words_to_index, 4) != ans).any(): print (my_bag_of_words(ex, words_to_index, 4)) return "Wrong answer for the case: '%s'" % ex return 'Basic tests are passed.' print(test_my_bag_of_words()) """ Explanation: Transforming text to a vector Machine Learning algorithms work with numeric data and we cannot use the provided text data "as is". There are many ways to transform text data to numeric vectors. In this task you will try to use two of them. Bag of words One of the well-known approaches is a bag-of-words representation. To create this transformation, follow the steps: 1. Find N most popular words in train corpus and numerate them. Now we have a dictionary of the most popular words. 2. For each title in the corpora create a zero vector with the dimension equal to N. 3. For each text in the corpora iterate over words which are in the dictionary and increase by 1 the corresponding coordinate. Let's try to do it for a toy example. 
Imagine that we have N = 4 and the list of the most popular words is ['hi', 'you', 'me', 'are'] Then we need to numerate them, for example, like this: {'hi': 0, 'you': 1, 'me': 2, 'are': 3} And we have the text, which we want to transform to the vector: 'hi how are you' For this text we create a corresponding zero vector [0, 0, 0, 0] And iterate over all words, and if the word is in the dictionary, we increase the value of the corresponding position in the vector: 'hi': [1, 0, 0, 0] 'how': [1, 0, 0, 0] # word 'how' is not in our dictionary 'are': [1, 0, 0, 1] 'you': [1, 1, 0, 1] The resulting vector will be [1, 1, 0, 1] Implement the described encoding in the function my_bag_of_words with the size of the dictionary equal to 5000. To find the most common words use train data. You can test your code using the function test_my_bag_of_words. End of explanation """ from scipy import sparse as sp_sparse X_train_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_train]) X_val_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_val]) X_test_mybag = sp_sparse.vstack([sp_sparse.csr_matrix(my_bag_of_words(text, WORDS_TO_INDEX, DICT_SIZE)) for text in X_test]) print('X_train shape ', X_train_mybag.shape) print('X_val shape ', X_val_mybag.shape) print('X_test shape ', X_test_mybag.shape) """ Explanation: Now apply the implemented function to all samples (this might take up to a minute): End of explanation """ row = X_train_mybag[10].toarray()[0] non_zero_elements_count = np.count_nonzero(row) grader.submit_tag('BagOfWords', str(non_zero_elements_count)) """ Explanation: As you might notice, we transform the data to sparse representation, to store the useful information efficiently. There are many types of such representations, however sklearn algorithms can work only with csr matrix, so we will use this one. Task 3 (BagOfWords). 
For the 10th row in X_train_mybag find how many non-zero elements it has.
End of explanation
"""
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_features(X_train, X_val, X_test):
    """
        X_train, X_val, X_test — samples
        return TF-IDF representation of each sample and vocabulary
    """
    # Create TF-IDF vectorizer with a proper parameters choice
    # Fit the vectorizer on the train set
    # Transform the train, test, and val sets and return the result
    tfidf_vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5, max_df=0.9).fit(X_train)
    ######################################
    ######### YOUR CODE HERE #############
    ######################################
    # The vectorizer must be fit only once, on the train set, and then used to
    # transform all three sets; refitting on val/test (and discarding the
    # results, as before) would leak information and return untransformed text.
    X_train = tfidf_vectorizer.transform(X_train)
    X_val = tfidf_vectorizer.transform(X_val)
    X_test = tfidf_vectorizer.transform(X_test)
    return X_train, X_val, X_test, tfidf_vectorizer.vocabulary_
"""
Explanation: TF-IDF
The second approach extends the bag-of-words framework by taking into account the total frequencies of words in the corpora. It helps to penalize too frequent words and provides a better feature space.
Implement the function tfidf_features using the class TfidfVectorizer from scikit-learn. Use the train corpus to train a vectorizer. Don't forget to take a look at the arguments that you can pass to it. We suggest that you filter out too rare words (occurring in fewer than 5 titles) and too frequent words (occurring in more than 90% of the titles). Also, use bigrams along with unigrams in your vocabulary.
End of explanation
"""
X_train_tfidf, X_val_tfidf, X_test_tfidf, tfidf_vocab = tfidf_features(X_train, X_val, X_test)
tfidf_reversed_vocab = {i:word for word,i in tfidf_vocab.items()}
######### YOUR CODE HERE #############
try:
    print(tfidf_reversed_vocab['c++'])
except Exception as e:
    print('no keyword')
"""
Explanation: Once you have done text preprocessing, always have a look at the results. Be very careful at this step, because the performance of future models will drastically depend on it.
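Tokenization is one of the first things to inspect. As a small standalone illustration (not part of the assignment code), compare what sklearn's documented default token_pattern keeps with what the '(\S+)' alternative used later in this notebook keeps; the title string here is hypothetical:

```python
import re

# A hypothetical title containing tokens that matter for tag prediction:
title = "how to compare c++ and c# performance"

# TfidfVectorizer's default token_pattern requires two or more word
# characters, so one-letter tokens and punctuation-based tokens vanish:
default_tokens = re.findall(r"(?u)\b\w\w+\b", title)

# The '(\S+)' pattern splits on whitespace only, so 'c++' and 'c#' survive:
custom_tokens = re.findall(r"(\S+)", title)

print(default_tokens)
print(custom_tokens)
```

With the default pattern, 'c++' and 'c#' never even reach the vocabulary, which is exactly the failure mode inspected below.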
In this case, check whether you have c++ or c# in your vocabulary, as they are obviously important tokens in our tags prediction task:
End of explanation
"""
######### YOUR CODE HERE #############
# Re-run the transformation after adding token_pattern='(\S+)' to the
# TfidfVectorizer constructor in tfidf_features above, then check again:
X_train_tfidf, X_val_tfidf, X_test_tfidf, tfidf_vocab = tfidf_features(X_train, X_val, X_test)
tfidf_reversed_vocab = {i: word for word, i in tfidf_vocab.items()}
print('c++' in tfidf_vocab)
print('c#' in tfidf_vocab)
"""
Explanation: If you can't find them, we need to understand how it happened that we lost them. It happened during the built-in tokenization of TfidfVectorizer. Luckily, we can influence this process. Get back to the function above and use the '(\S+)' regexp as the token_pattern in the constructor of the vectorizer. Now, use this transformation for the data and check again.
End of explanation
"""
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer(classes=sorted(tags_counts.keys()))
y_train = mlb.fit_transform(y_train)
y_val = mlb.transform(y_val)
"""
Explanation: MultiLabel classifier
As we have noticed before, in this task each example can have multiple tags. To deal with this kind of prediction, we need to transform the labels into a binary form and the prediction will be a mask of 0s and 1s. For this purpose it is convenient to use MultiLabelBinarizer from sklearn.
End of explanation
"""
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression, RidgeClassifier

def train_classifier(X_train, y_train):
    """
      X_train, y_train — training data
      return: trained classifier
    """
    # Create and fit LogisticRegression wrapped into OneVsRestClassifier.
    ######################################
    ######### YOUR CODE HERE #############
    ######################################
    classifier = OneVsRestClassifier(LogisticRegression())
    classifier.fit(X_train, y_train)
    return classifier
"""
Explanation: Implement the function train_classifier for training a classifier. In this task we suggest using the One-vs-Rest approach, which is implemented in the OneVsRestClassifier class. In this approach k classifiers (= number of tags) are trained. As a basic classifier, use LogisticRegression. It is one of the simplest methods, but often it performs well enough in text classification tasks.
It might take some time, because the number of classifiers to train is large.
End of explanation
"""
classifier_mybag = train_classifier(X_train_mybag, y_train)
classifier_tfidf = train_classifier(X_train_tfidf, y_train)
"""
Explanation: Train the classifiers for different data transformations: bag-of-words and tf-idf.
End of explanation
"""
y_val_predicted_labels_mybag = classifier_mybag.predict(X_val_mybag)
y_val_predicted_scores_mybag = classifier_mybag.decision_function(X_val_mybag)
y_val_predicted_labels_tfidf = classifier_tfidf.predict(X_val_tfidf)
y_val_predicted_scores_tfidf = classifier_tfidf.decision_function(X_val_tfidf)
"""
Explanation: Now you can create predictions for the data. You will need two types of predictions: labels and scores.
End of explanation
"""
y_val_pred_inversed = mlb.inverse_transform(y_val_predicted_labels_tfidf)
y_val_inversed = mlb.inverse_transform(y_val)
for i in range(3):
    print('Title:\t{}\nTrue labels:\t{}\nPredicted labels:\t{}\n\n'.format(
        X_val[i],
        ','.join(y_val_inversed[i]),
        ','.join(y_val_pred_inversed[i])
    ))
"""
Explanation: Now take a look at how the classifier that uses TF-IDF works for a few examples:
End of explanation
"""
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import average_precision_score
from sklearn.metrics import recall_score
"""
Explanation: Now we would need to compare the results of different predictions, e.g. to see whether the TF-IDF transformation helps or to try different regularization techniques in logistic regression. For all these experiments, we need to set up an evaluation procedure.
Evaluation
To evaluate the results we will use several classification metrics:
- Accuracy
- F1-score
- Area under ROC-curve
- Area under precision-recall curve
Make sure you are familiar with all of them. How would you expect things to work in the multi-label scenario?
Read about micro/macro/weighted averaging following the sklearn links provided above.
End of explanation
"""
def print_evaluation_scores(y_val, predicted):
    ######################################
    ######### YOUR CODE HERE #############
    ######################################
    print('Accuracy: {}'.format(accuracy_score(y_val, predicted)))
    for average in ['macro', 'micro', 'weighted']:
        print('F1-score ({}): {}'.format(average, f1_score(y_val, predicted, average=average)))
        print('Average precision ({}): {}'.format(average, average_precision_score(y_val, predicted, average=average)))

print('Bag-of-words')
print_evaluation_scores(y_val, y_val_predicted_labels_mybag)
print('Tfidf')
print_evaluation_scores(y_val, y_val_predicted_labels_tfidf)
"""
Explanation: Implement the function print_evaluation_scores which calculates and prints to stdout:
- accuracy
- F1-score macro/micro/weighted
- Precision macro/micro/weighted
End of explanation
"""
from metrics import roc_auc
%matplotlib inline
n_classes = len(tags_counts)
roc_auc(y_val, y_val_predicted_scores_mybag, n_classes)
n_classes = len(tags_counts)
roc_auc(y_val, y_val_predicted_scores_tfidf, n_classes)
"""
Explanation: You might also want to plot some generalization of the ROC curve for the case of multi-label classification. The provided function roc_auc can do it for you. The input parameters of this function are:
- true labels
- decision function scores
- number of classes
End of explanation
"""
######################################
######### YOUR CODE HERE #############
######################################
"""
Explanation: Task 4 (MultilabelClassification).
Once we have the evaluation set up, we suggest that you experiment a bit with training your classifiers. We will use the weighted F1-score as an evaluation metric. Our recommendation:
- compare the quality of the bag-of-words and TF-IDF approaches and choose one of them.
- for the chosen one, try L1 and L2-regularization techniques in Logistic Regression with different coefficients (e.g. C equal to 0.1, 1, 10, 100).
You could also try other improvements of the preprocessing / model, if you want.
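One possible way to run the recommended comparison is sketched below. This is my own helper, not the official solution: pick_best_C is an invented name, and it defaults to L2 regularization (trying L1 additionally requires solver='liblinear' in recent scikit-learn versions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

def pick_best_C(X_train, y_train, X_val, y_val, Cs=(0.1, 1, 10, 100)):
    """Train one One-vs-Rest model per value of C and keep the one with
    the best weighted F1 on the validation set."""
    best_C, best_f1 = None, -1.0
    for C in Cs:
        clf = OneVsRestClassifier(LogisticRegression(C=C))
        clf.fit(X_train, y_train)
        f1 = f1_score(y_val, clf.predict(X_val), average='weighted')
        if f1 > best_f1:
            best_C, best_f1 = C, f1
    return best_C, best_f1
```

You would call it once for the bag-of-words matrices and once for the TF-IDF matrices, then keep whichever combination gives the higher weighted F1.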
End of explanation
"""
test_predictions = classifier_tfidf.predict(X_test_tfidf) ######### YOUR CODE HERE #############
test_pred_inversed = mlb.inverse_transform(test_predictions)

test_predictions_for_submission = '\n'.join('%i\t%s' % (i, ','.join(row)) for i, row in enumerate(test_pred_inversed))
grader.submit_tag('MultilabelClassification', test_predictions_for_submission)
"""
Explanation: When you are happy with the quality, create predictions for the test set, which you will submit to Coursera.
End of explanation
"""
def print_words_for_tag(classifier, tag, tags_classes, index_to_words, all_words):
    """
        classifier: trained classifier
        tag: particular tag
        tags_classes: a list of classes names from MultiLabelBinarizer
        index_to_words: index_to_words transformation
        all_words: all words in the dictionary

        return nothing, just print top 5 positive and top 5 negative words for current tag
    """
    print('Tag:\t{}'.format(tag))
    # Extract an estimator from the classifier for the given tag.
    # Extract feature coefficients from the estimator.
    ######################################
    ######### YOUR CODE HERE #############
    ######################################
    tag_id = list(tags_classes).index(tag)
    coefficients = classifier.estimators_[tag_id].coef_[0]
    sorted_indices = coefficients.argsort()
    top_positive_words = [index_to_words[i] for i in sorted_indices[-5:]] # top-5 words sorted by the coefficients.
    top_negative_words = [index_to_words[i] for i in sorted_indices[:5]] # bottom-5 words sorted by the coefficients.
    print('Top positive words:\t{}'.format(', '.join(top_positive_words)))
    print('Top negative words:\t{}\n'.format(', '.join(top_negative_words)))

print_words_for_tag(classifier_tfidf, 'c', mlb.classes, tfidf_reversed_vocab, ALL_WORDS)
print_words_for_tag(classifier_tfidf, 'c++', mlb.classes, tfidf_reversed_vocab, ALL_WORDS)
print_words_for_tag(classifier_tfidf, 'linux', mlb.classes, tfidf_reversed_vocab, ALL_WORDS)
"""
Explanation: Analysis of the most important features
Finally, it is usually a good idea to look at the features (words or n-grams) that are used with the largest weights in your logistic regression model. Implement the function print_words_for_tag to find them.
Get back to the sklearn documentation on OneVsRestClassifier and LogisticRegression if needed.
End of explanation
"""
grader.status()
STUDENT_EMAIL = '' # EMAIL
STUDENT_TOKEN = '' # TOKEN
grader.status()
"""
Explanation: Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment page. <b>Note:</b> The token expires 30 minutes after generation.
End of explanation
"""
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
"""
Explanation: If you want to submit these answers, run the cell below
End of explanation
"""
google/starthinker
colabs/trends_places_to_sheets_via_value.ipynb
apache-2.0
!pip install git+https://github.com/google/starthinker """ Explanation: Trends Places To Sheets Via Values Move using hard coded WOEID values. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation """ from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) """ Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. 
If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation """ FIELDS = { 'auth_write':'service', # Credentials used for writing data. 'secret':'', 'key':'', 'places_dataset':'', 'places_query':'', 'places_legacy':False, 'destination_sheet':'', 'destination_tab':'', } print("Parameters Set To: %s" % FIELDS) """ Explanation: 3. Enter Trends Places To Sheets Via Values Recipe Parameters Provide Twitter Credentials. Provide a comma delimited list of WOEIDs. Specify Sheet url and tab to write API call results to. Writes: WOEID, Name, Url, Promoted_Content, Query, Tweet_Volume Note Twitter API is rate limited to 15 requests per 15 minutes. So keep WOEID lists short. Modify the values below for your use case, can be done multiple times, then click play. End of explanation """ from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'twitter':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'secret':{'field':{'name':'secret','kind':'string','order':1,'default':''}}, 'key':{'field':{'name':'key','kind':'string','order':2,'default':''}}, 'trends':{ 'places':{ 'single_cell':True, 'bigquery':{ 'dataset':{'field':{'name':'places_dataset','kind':'string','order':3,'default':''}}, 'query':{'field':{'name':'places_query','kind':'string','order':4,'default':''}}, 'legacy':{'field':{'name':'places_legacy','kind':'boolean','order':5,'default':False}} } } }, 'out':{ 'sheets':{ 'sheet':{'field':{'name':'destination_sheet','kind':'string','order':6,'default':''}}, 'tab':{'field':{'name':'destination_tab','kind':'string','order':7,'default':''}}, 'range':'A1' } } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) """ Explanation: 4. 
Execute Trends Places To Sheets Via Values
This does NOT need to be modified unless you are changing the recipe; just click play.
End of explanation
"""
p-chambers/Python_OOP_Workshop
soln/02-Classes_pt2.ipynb
mit
from IPython.core.display import HTML def css_styling(): sheet = '../css/custom.css' styles = open(sheet, "r").read() return HTML(styles) css_styling() """ Explanation: Python OOP 2: Inheritance and Magic Methods The purpose of this exercise is to test your new found knowledge of inheritance using the classical example of shapes. You have been given the base class (AbstractShape) which has some common functions for certain derived shapes, such as a triangle and a rectangle. End of explanation """ import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d.art3d import Poly3DCollection %matplotlib inline class AbstractShape(object): """A class that shares common methods between rectangles and triangles: Note that some methods raise an error - what we are doing here is actually creating 'abstract methods' which helps achieve a consistent API through all derived classes => This is Polymorphism! See example 07-PredatorPrey for a more robust way of defining abstract base classes """ def __init__(self, base, height, center): # Add all args as attributes- there are quicker/better ways # of doing this, but this is fine self.base = base self.height = height self.center = center # Call some methods: This will do nothing unless the methods are # defined in the base classes! 
        self.vertices = self.get_vertices()
        self.area = self.get_area()

    def plot(self, ax):
        # First point must be repeated for a closed plot
        x = np.hstack([self.vertices[:, 0], self.vertices[0, 0]])
        y = np.hstack([self.vertices[:, 1], self.vertices[0, 1]])
        ax.plot(x, y, '-')

    def get_vertices(self):
        raise NotImplementedError('Base class method should not be called directly')

    def get_area(self):
        raise NotImplementedError('Base class method should not be called directly')

    # Magic methods extension
    def __str__(self):
        return "Shape object - base={}, height={}, Area={}".format(self.base, self.height, self.area)

    def __lt__(self, shape):
        return self.area < shape.area

# Your classes here
class Rectangle(AbstractShape):
    def __init__(self, base=1, height=1, center=(0., 0.)):
        super().__init__(base, height, center)

    def get_vertices(self):
        pts = np.ones([4, 2]) * self.center
        xshift = self.base / 2.
        yshift = self.height / 2.
        pts[0,:] += np.array([-xshift, -yshift])
        pts[1,:] += np.array([xshift, -yshift])
        pts[2,:] += np.array([xshift, yshift])
        pts[3,:] += np.array([-xshift, yshift])
        return pts

    def get_area(self):
        return self.base * self.height

class Triangle(AbstractShape):
    def __init__(self, base=1, height=1, center=(0., 0.)):
        """Obtain the vertices of a triangle (isosceles) given its base, height
        and the coordinates of the base line mid point"""
        super().__init__(base, height, center)

    def get_vertices(self):
        pts = np.ones([3, 2]) * self.center
        pts[0,:] += np.array([-self.base/2., 0])
        pts[1,:] += np.array([self.base/2., 0])
        pts[2,:] += np.array([0, self.height])
        return pts

    def get_area(self):
        return 0.5 * self.base * self.height

# Extension
class Cuboid(Rectangle):
    def __init__(self, base, height, depth, center):
        self.depth = depth
        super().__init__(base, height, center)

    def get_vertices(self):
        base2d = super().get_vertices()
        midplane = np.zeros([4,3])
        midplane[:,:-1] = base2d
        zshift = np.array([0, 0, self.depth/2.])
        lower_plane = midplane - zshift
        upper_plane = midplane + zshift
        return np.vstack([lower_plane, upper_plane])

    def plot(self, ax):
        ax.scatter(self.vertices[:,0], self.vertices[:,1], self.vertices[:,2])

# Helper functions:
def init_figure():
    fig = plt.figure()
    ax = fig.add_subplot(111)
    plt.axis('equal')
    return fig, ax

def init_3dfigure():
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    return fig, ax

# Test code for the rectangle class:
fig, ax = init_figure()
for i in range(1,5):
    rect = Rectangle(base=i, height=i, center=(0.,0.))
    rect.plot(ax)

# Test code for the triangle class:
fig, ax = init_figure()
for height in range(1,5):
    tri = Triangle(base=4, height=height, center=(0.,0.))
    tri.plot(ax)

#Tests for the cuboid extension:
fig, ax = init_3dfigure()
cube = Cuboid(base=2, height=2, center=(0.,0.), depth=2)
cube.plot(ax)
"""
Explanation: Tasks
Create a Rectangle class which derives from AbstractShape, with methods
__init__(self, base, height, center), which ONLY passes all arguments to the base class __init__ via super().__init__(base, height, center)
get_vertices(self), which calculates and returns an array of vertices. vertices should contain the vertex (x,y) points of a rectangle centered at the center point self.center: numpy array with shape (4,2)
Note that self.vertices is stored in the base class using a call to this method!
If you're struggling, the contents of this function are provided at the end of this notebook
get_area(self), which calculates and returns the area of self (use attributes, not inputs)
Points to note:
The code for testing this class has been provided for you
Try to understand the order in which the initialisation functions are called. Which methods are being called (from which classes) in the base initialiser?
Repeat 1.
above for a Triangle class Class layout is identical, but get_vertices and get_area different The center point of the triangle should be the center point of the base line If you're struggling, the contents of this function are provided at the end of this notebook You have also been given a Cuboid class which has inherited a plot method, but this will plot only the 2D square Override the plot(self, ax) method to scatter self.vertices (x, y, z) on the input axes Note that the input axes given in the test code are already 3D axes - you do not need to implement this See the end of this notebook for the magic methods extension End of explanation """ # Test code for __str__ square = Rectangle(4, 4, center=(0.,0.)) print(square) # Did this do what you expected? # Test code for iterator bigsquare = Rectangle(8, 8, (0.,0.)) square < bigsquare """ Explanation: Magic Methods Extension Override the __str__ magic method in the AbstractShape base class so that printing gives information about the area, base and height store in any shape instance Override the 'less than' magic method in the AbstractShape base class so that evaluating shape1 &lt; shape2 evaluates whether the area of shape 1 is less than shape 2 End of explanation """
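Going one step beyond the __lt__ extension above, Python's functools.total_ordering decorator can fill in the remaining comparison operators for you. The sketch below uses a simplified stand-in class (ComparableShape is illustrative, not the workshop's AbstractShape) so that it is self-contained:

```python
from functools import total_ordering

@total_ordering
class ComparableShape(object):
    """Minimal stand-in for AbstractShape: only the area matters for ordering.

    Given __eq__ and __lt__, total_ordering derives <=, > and >= automatically.
    """
    def __init__(self, area):
        self.area = area

    def __eq__(self, other):
        return self.area == other.area

    def __lt__(self, other):
        return self.area < other.area
```

With only two methods written by hand, expressions like shape1 >= shape2 now work, which keeps the class definition short while giving a full comparison API.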
as595/AllOfYourBases
Nuclear/PoissonMLRealData.ipynb
gpl-3.0
def gauss_fn(p0, x): amp,mu,sigma,gamma = p0 model = SkewedGaussianModel() #amp*=sigma*np.sqrt(2*np.pi) # set initial parameter values params = model.make_params(amplitude=amp, center=mu, sigma=sigma, gamma=gamma) ymod = model.eval(params=params,x=x) return ymod def lnlike(p0, x, y): # get model for these parameters: ymod = gauss_fn(p0,x) # Poisson loglikelihood: ll = np.sum(ymod[np.where(ymeas!=0)]*np.log(ymeas[np.where(ymeas!=0)])) - np.sum(ymeas) - np.sum(ymod[np.where(ymod!=0)]*np.log(ymod[np.where(ymod!=0)]) - ymod[np.where(ymod!=0)]) return ll # use maxvalue to guess amplitude: a0 = np.max(ymeas) # use position of maxvalue to guess mean: m0 = xvals[np.argmax(ymeas)] # just guess width: s0 = 10. # just guess skew: g0 = -2. # adjust the amplitude for the normalisation factor: a0*=s0*np.sqrt(2*np.pi) print a0,m0,s0,g0 p0 = np.array([a0,m0,s0,g0]) bnds = ((-np.infty,np.infty), (-np.infty,np.infty), (0.1,np.infty), (-10., 10.)) nll = lambda *args: -lnlike(*args) result = op.minimize(nll, p0, bounds=bnds, args=(xvals, ymeas)) p1 = result["x"] print p1 yfit = gauss_fn(p1,xvals) pl.subplot(111) pl.scatter(xvals,ymeas) pl.plot(xvals,yfit,c='r') pl.show() print lnlike(p1, xvals, ymeas) res = (yfit - ymeas)/np.sqrt(ymeas) pl.plot(res) pl.ylabel(r"$\sigma$") """ Explanation: https://lmfit.github.io/lmfit-py/builtin_models.html#skewedgaussianmodel End of explanation """ ndim, nwalkers = 4, 100 pos = [result["x"] + 1e-4*np.random.randn(ndim) for i in range(nwalkers)] sampler = emcee.EnsembleSampler(nwalkers, ndim, lnlike, args=(xvals, ymeas)) p0 = sampler.run_mcmc(pos, 500) samples = sampler.chain[:, 50:, :].reshape((-1, ndim)) import corner fig = corner.corner(samples, labels=["$A$", "$\mu$", "$\sigma$","$\gamma$"], truths=[a0, m0, s0, g0]) fig.savefig("triangle.png") """ Explanation: These residuals look a bit weird to me. I'm wondering if the data are actually well represented by a skewed Gaussian. End of explanation """
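One thing worth double-checking before blaming the skewed Gaussian is the likelihood itself: in the lnlike above, the first term multiplies the model by the log of the data, whereas the textbook Poisson log-likelihood multiplies the data by the log of the model. For comparison, here is a small standalone sketch of the standard form, dropping the constant log(y!) term (poisson_loglike is my own name, not from this notebook):

```python
import numpy as np

def poisson_loglike(ymod, ymeas):
    """Poisson log-likelihood of measured counts ymeas under model rates ymod,
    up to the constant -log(ymeas!) term: sum(ymeas * log(ymod) - ymod)."""
    ymod = np.asarray(ymod, dtype=float)
    ymeas = np.asarray(ymeas, dtype=float)
    mask = ymod > 0  # guard against log(0) from empty model bins
    return np.sum(ymeas[mask] * np.log(ymod[mask]) - ymod[mask])
```

Swapping this form into the minimizer and the emcee sampler would be one way to test whether the odd-looking residuals come from the model or from the statistic.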
gsorianob/fiuba-python
Clase 04 - Excepciones, funciones lambda, búsquedas y ordenamientos.ipynb
apache-2.0
lista_de_numeros = [1, 6, 3, 9, 5, 2]
lista_ordenada = sorted(lista_de_numeros)
print lista_ordenada
"""
Explanation: <!-- 27/10 Sorting and searching. Exceptions. Anonymous functions. (Pablo o Andres) -->
Sorting lists
Lists can easily be sorted using the sorted function:
End of explanation
"""
lista_de_numeros = [1, 6, 3, 9, 5, 2]
print sorted(lista_de_numeros, reverse=True)
"""
Explanation: But, how do we sort it from largest to smallest? <br>
Simple, we interrogate the function a little:
```Python
print sorted.__doc__
sorted(iterable, cmp=None, key=None, reverse=False) --> new sorted list
```
Then, just passing the reverse parameter as True should be enough:
End of explanation
"""
import random

def crear_alumnos(cantidad_de_alumnos=5):
    nombres = ['Javier', 'Pablo', 'Ramiro', 'Lucas', 'Carlos']
    apellidos = ['Saviola', 'Aimar', 'Funes Mori', 'Alario', 'Sanchez']
    alumnos = []
    for i in range(cantidad_de_alumnos):
        a = {
            'nombre': '{}, {}'.format(
                random.choice(apellidos),
                random.choice(nombres)),
            'padron': random.randint(90000, 100000),
            'nota': random.randint(4, 10)
        }
        alumnos.append(a)
    return alumnos

def imprimir_curso(lista):
    for idx, x in enumerate(lista, 1):
        print '  {pos:2}. {padron} - {nombre}: {nota}'.format(
            pos=idx, **x)

def obtener_padron(alumno):
    return alumno['padron']

def ordenar_por_padron(alumno1, alumno2):
    if alumno1['padron'] < alumno2['padron']:
        return -1
    elif alumno2['padron'] < alumno1['padron']:
        return 1
    else:
        return 0

curso = crear_alumnos()
print 'La lista tiene los alumnos:'
imprimir_curso(curso)
lista_ordenada = sorted(curso, key=obtener_padron)
print 'Y la lista ordenada por padrón:'
imprimir_curso(lista_ordenada)
otra_lista_ordenada = sorted(curso, cmp=ordenar_por_padron)
print 'Y la lista ordenada por padrón:'
imprimir_curso(otra_lista_ordenada)
"""
Explanation: And what if what I want to sort is a list of records?
<br>
We can pass it a function that knows how to compare those records, or one that knows how to return the piece of information it needs to compare.
End of explanation
"""
lista = [11, 4, 6, 1, 3, 5, 7]
if 3 in lista:
    print '3 esta en la lista'
else:
    print '3 no esta en la lista'
if 15 in lista:
    print '15 esta en la lista'
else:
    print '15 no esta en la lista'
"""
Explanation: Searching in lists
To know whether an element is in a list, using the in operator is enough:
End of explanation
"""
lista = [11, 4, 6, 1, 3, 5, 7]
if 3 not in lista:
    print '3 NO esta en la lista'
else:
    print '3 SI esta en la lista'
"""
Explanation: It is also very easy to know whether an element is not in the list:
End of explanation
"""
lista = [11, 4, 6, 1, 3, 5, 7]
pos = lista.index(3)
print 'El 3 se encuentra en la posición', pos
pos = lista.index(15)
print 'El 15 se encuentra en la posición', pos
"""
Explanation: On the other hand, if what we want to know is where the number 3 is located in the list:
End of explanation
"""
help("lambda")
mi_funcion = lambda x, y: x+y
resultado = mi_funcion(1,2)
print resultado
"""
Explanation: Anonymous functions
Until now, we gave a name to every function we created at the moment of creating it, but when we have to create functions that are only one line long and are not used in many places, we can use lambda functions:
End of explanation
"""
curso = crear_alumnos(15)
print 'Curso original'
imprimir_curso(curso)
lista_ordenada = sorted(curso, key=lambda x: (-x['nota'], x['padron']))
print 'Curso ordenado'
imprimir_curso(lista_ordenada)
"""
Explanation: Although these are not functions you use every day, they are often used when a function receives another function as a parameter (functions are a data type, so they can be assigned to variables and can therefore also be parameters).
For example, to sort the students by student id (padrón) we could use:
Python
sorted(curso, key=lambda x: x['padron'])
Now, if I want to sort the previous list by decreasing grade and, in case of a tie, by padrón, we could use:
End of explanation
"""
es_mayor = lambda n1, n2: n1 > n2
es_menor = lambda n1, n2: n1 < n2

def binaria(cmp, lista, clave):
    """Binaria is a function that searches the list for the given key.
    A requirement of binary search is that the list is sorted, but it does not
    matter whether the order is ascending or descending. For that reason it
    also receives a function that tells it which way to go.
    If the list is sorted in ascending order, the function passed in has to be
    true when the first value is greater than the second, and false otherwise.
    If the list is sorted in descending order, the function passed in has to be
    true when the first value is less than the second, and false otherwise.
    """
    min = 0
    max = len(lista) - 1
    centro = (min + max) / 2
    while (lista[centro] != clave) and (min < max):
        if cmp(lista[centro], clave):
            max = centro - 1
        else:
            min = centro + 1
        centro = (min + max) / 2
    if lista[centro] == clave:
        return centro
    else:
        return -1

print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)
print binaria(es_menor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 8)
print binaria(es_mayor, [1, 2, 3, 4, 5, 6, 7, 8, 9], 123)
print binaria(es_menor, [9, 8, 7, 6, 5, 4, 3, 2, 1], 6)
"""
Explanation: Another example could be implementing a binary search that can search both ascending and descending lists:
End of explanation
"""
"""
Explanation: Exceptions
An exception is the way the interpreter indicates to the programmer and/or the user that an error has occurred. If the exception is not handled by the developer, it reaches the user and abruptly terminates the execution of the program.
<br> Por ejemplo: End of explanation """ dividendo = 1 divisor = 0 print 'Intentare hacer la división de %d/%d' % (dividendo, divisor) try: resultado = dividendo / divisor print resultado except ZeroDivisionError: print 'No se puede hacer la división ya que el divisor es 0.' """ Explanation: Pero no hay que tenerle miedo a las excepciones, sólo hay que tenerlas en cuenta y controlarlas en el caso de que ocurran: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) # Si de 28 alumnos, aprobaron 15, el porcentaje de aprobados es de... porcentaje_de_aprobados = regla_de_tres(28, 15, 100) print 'Porcentaje de aprobados: %0.2f %%' % porcentaje_de_aprobados """ Explanation: Pero supongamos que implementamos la regla de tres de la siguiente forma: End of explanation """ resultado = regla_de_tres(0, 13, 100) print 'Porcentaje de aprobados: %0.2f %%' % resultado """ Explanation: En cambio, si le pasamos 0 en el lugar de x: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): resultado = 0 try: resultado = dividir(z*y, x) except ZeroDivisionError: print 'No se puede calcular la regla de tres ' \ 'porque el divisor es 0' return resultado print regla_de_tres(0, 1, 2) """ Explanation: Acá podemos ver todo el traceback o stacktrace, que son el cómo se fueron llamando las distintas funciones entre sí hasta que llegamos al error. <br> Pero no es bueno que este tipo de excepciones las vea directamente el usuario, por lo que podemos controlarlas en distintos momentos. Se pueden controlar inmediatamente donde ocurre el error, como mostramos antes, o en cualquier parte de este stacktrace. 
<br> En el caso de la regla_de_tres no nos conviene poner el try/except encerrando la línea x/y, ya que en ese punto no tenemos toda la información que necesitamos para informarle correctamente al usuario, por lo que podemos ponerla en: End of explanation """ def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) try: print regla_de_tres(0, 1, 2) except ZeroDivisionError: print 'No se puede calcular la regla de tres ' \ 'porque el divisor es 0' """ Explanation: Pero en este caso igual muestra 0, por lo que si queremos, podemos poner los try/except incluso más arriba en el stacktrace: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except ZeroDivisionError: print 'ERROR: Ha ocurrido un error por mezclar tipos de datos' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: Todos los casos son distintos y no hay UN lugar ideal dónde capturar la excepción; es cuestión del desarrollador decidir dónde conviene ponerlo para cada problema. 
Capturar múltiples excepciones Una única línea puede lanzar distintas excepciones, por lo que capturar un tipo de excepción en particular no me asegura que el programa no pueda lanzar un error en esa línea que supuestamente es segura: En algunos casos tenemos en cuenta que el código puede lanzar una excepción como la de ZeroDivisionError, pero eso puede no ser suficiente: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except TypeError: print 'ERROR: Ha ocurrido un error por mezclar tipos de datos' except ZeroDivisionError: print 'ERROR: Ha ocurrido un error de división por cero' except Exception: print 'ERROR: Ha ocurrido un error inesperado' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: En esos casos podemos capturar más de una excepción de la siguiente forma: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except (ZeroDivisionError, TypeError): print 'ERROR: No se puede calcular la división' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) """ Explanation: Incluso, si queremos que los dos errores muestren el mismo mensaje podemos capturar ambas excepciones juntas: End of explanation """ try: print 1/0 except ZeroDivisionError: print 'Ha ocurrido un error de división por cero' """ Explanation: Jerarquía de excepciones Existe una <a href="https://docs.python.org/2/library/exceptions.html">jerarquía de excepciones</a>, de forma que si se sabe que puede venir un tipo de error, pero no se sabe exactamente qué excepción puede ocurrir siempre se puede poner una excepción de mayor jerarquía: <img src="excepciones.png"/> Por lo que el error de división por cero se puede evitar como: End of explanation """ try: print 1/0 except Exception: print 'Ha ocurrido un error inesperado' """ Explanation: Y también como: End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 
'El resultado es {}'.format(resultado) except ZeroDivisionError: print 'Error: División por cero' else: print 'Este mensaje se mostrará sólo si no ocurre ningún error' finally: print 'Este bloque de código se muestra siempre' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) """ Explanation: But then, why not put that code inside the try-except? Because perhaps we do not want the except clauses to catch whatever is executed in that block of code:
End of explanation """ def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es {}'.format(resultado) except ZeroDivisionError: print 'Error: División por cero' else: print 'Ahora hago que ocurra una excepción' print 1/0 finally: print 'Este bloque de código se muestra siempre' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) """ Explanation: Raising exceptions So far we have seen how to catch an error and work with it without the program terminating abruptly, but in some cases we ourselves are the ones who will want to raise an exception. For that, we use the reserved word raise: End of explanation """ def dividir_numeros(x, y): if y == 0: raise Exception('Error de división por cero') resultado = x/y print 'El resultado es {0}'.format(resultado) try: dividir_numeros(1, 0) except ZeroDivisionError as e: print 'ERROR: División por cero' except Exception as e: print 'ERROR: ha ocurrido un error del tipo Exception' print '----------' dividir_numeros(1, 0) """ Explanation: Creating exceptions Just as we can use the standard exceptions, we can also create our own exceptions: ```Python class MiPropiaExcepcion(Exception): def __str__(self): return 'Mensaje del error' ``` For example: End of explanation """ class ExcepcionDeDivisionPor2(Exception): def __str__(self): return 'ERROR: No se puede dividir por dos' def dividir_numeros(x, y): if y == 2: raise ExcepcionDeDivisionPor2() resultado = x/y try: dividir_numeros(1, 2) except ExcepcionDeDivisionPor2: print 'No se puede dividir por 2' dividir_numeros(1, 2) """ Explanation:
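Building on the custom-exception idea, an exception class can also carry data about what failed. The following is a hypothetical Python 3 variant of mine (names and message are my own), storing the offending dividend on the exception instance so the handler can inspect it:

```python
class DivisionByTwoError(Exception):
    """Raised when someone insists on dividing by two."""

    def __init__(self, dividend):
        # Keep the failing value around for the handler.
        self.dividend = dividend
        super().__init__('cannot divide %r by two' % dividend)

def divide(x, y):
    if y == 2:
        raise DivisionByTwoError(x)
    return x / y
```

A handler can then do `except DivisionByTwoError as e:` and read `e.dividend`, which is often more useful than parsing the error message.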
DavidLeoni/relmath
bq-examples/Marks/Image.ipynb
apache-2.0
import ipywidgets as widgets import os image_path = os.path.abspath('../data_files/trees.jpg') with open(image_path, 'rb') as f: raw_image = f.read() ipyimage = widgets.Image(value=raw_image, format='jpg') ipyimage """ Explanation: The Image Mark Image is a Mark object, used to visualize images in standard format (png, jpg etc...), in a bqplot Figure It takes as input an ipywidgets Image widget The ipywidgets Image End of explanation """ from bqplot import * # Create the scales for the image coordinates scales={'x': LinearScale(), 'y': LinearScale()} # Define the bqplot Image mark image = Image(image=ipyimage, scales=scales) # Create the bqplot Figure to display the mark fig = Figure(title='Trees', marks=[image], padding_x=0, padding_y=0) fig """ Explanation: Displaying the image inside a bqplot Figure End of explanation """ scales = {'x': LinearScale(min=-1, max=2), 'y': LinearScale(min=-0.5, max=2)} image = Image(image=ipyimage, scales=scales) lines = Lines(x=[0, 1, 1, 0, 0], y=[0, 0, 1, 1, 0], scales=scales, colors=['red']) fig = Figure(marks=[image, lines], padding_x=0, padding_y=0, animation_duration=1000) fig.axes = [Axis(scale=scales['x']), Axis(scale=scales['y'], orientation='vertical')] fig """ Explanation: Mixing with other marks Image is a mark like any other, so they can be mixed and matched together. End of explanation """ # Full screen image.x = [-1, 2] image.y = [-.5, 2] """ Explanation: Its traits (attributes) will also respond dynamically to a change from the backend End of explanation """ import bqplot.pyplot as bqp bqp.figure() bqp.imshow(image_path, 'filename') bqp.show() """ Explanation: Pyplot It may seem verbose to first open the image file, create an ipywidgets Image, then create the scales and so forth. The pyplot api does all of that for you, via the imshow function. End of explanation """
doingmathwithpython/pycon-us-2016
notebooks/.ipynb_checkpoints/slides-checkpoint.ipynb
mit
As I will attempt to describe in the next slides, Python is an amazing way to make learning and teaching more fun. It can be a basic calculator, a fancy calculator, and a way into Math, Science, Geography and more. Tools that will help us in that quest are: """ Explanation: <center> Doing Math with Python </center> <center> <p> <b>Amit Saha</b> <p>May 29, PyCon US 2016 Education Summit <p>Portland, Oregon </center> ## About me - Software Engineer at [Freelancer.com](https://www.freelancer.com) HQ in Sydney, Australia - Author of "Doing Math with Python" (No Starch Press, 2015) - Writes for Linux Voice, Linux Journal, etc. - [Blog](http://echorand.me), [GitHub](http://github.com/amitsaha) #### Contact - [@echorand](http://twitter.com/echorand) - [Email](mailto:amitsaha.in@gmail.com) ### This talk - a proposal, a hypothesis, a statement *Python can lead to a more enriching learning and teaching experience in the classroom* End of explanation """
# Create graphs from algebraic expressions from sympy import Symbol, plot x = Symbol('x') p = plot(2*x**2 + 2*x + 2) # Solve equations from sympy import solve, Symbol x = Symbol('x') solve(2*x + 1) # Limits from sympy import Symbol, Limit, sin x = Symbol('x') Limit(sin(x)/x, x, 0).doit() # Derivative from sympy import Symbol, Derivative, sin, init_printing x = Symbol('x') init_printing() Derivative(sin(x)**(2*x+1), x).doit() # Indefinite integral from sympy import Symbol, Integral, sqrt, sin, init_printing x = Symbol('x') init_printing() Integral(sqrt(x)).doit() # Definite integral from sympy import Symbol, Integral, sqrt x = Symbol('x') Integral(sqrt(x), (x, 0, 2)).doit() """ Explanation: (Main) Tools <img align="center" src="collage/logo_collage.png"></img> Python - a scientific calculator Python 3 is my favorite calculator (not Python 2 because 1/2 = 0) fabs(), abs(), sin(), cos(), gcd(), log() (See math) Descriptive statistics (See statistics) Python - a scientific calculator Develop your own functions: unit conversion, finding correlation, .., anything really Use PYTHONSTARTUP to extend the battery of readily available mathematical functions $ PYTHONSTARTUP=~/work/dmwp/pycon-us-2016/startup_math.py idle3 -s Unit conversion functions ``` unit_conversion() 1. Kilometers to Miles 2. Miles to Kilometers 3. Kilograms to Pounds 4. Pounds to Kilograms 5. Celsius to Fahrenheit 6. Fahrenheit to Celsius Which conversion would you like to do? 
6 Enter temperature in fahrenheit: 98 Temperature in celsius: 36.66666666666667 ``` Finding linear correlation ``` x = [1, 2, 3, 4] y = [2, 4, 6.1, 7.9] find_corr_x_y(x, y) 0.9995411791453812 ``` Python - a really fancy calculator SymPy - a pure Python symbolic math library from sympy import awesomeness - don't try that :) End of explanation """ ### TODO: digit recognition using Neural networks ### Scikitlearn, pandas, scipy, statsmodel """ Explanation: Python - Making other subjects more lively <img align="center" src="collage/collage1.png"></img> matplotlib basemap Interactive Jupyter Notebooks Bringing Science to life Animation of a Projectile motion Drawing fractals Interactively drawing a Barnsley Fern The world is your graph paper Showing places on a digital map Great base for the future Statistics and Graphing data -> Data Science Differential Calculus -> Machine learning Application of differentiation Use gradient descent to find a function's minimum value Predict the college admission score based on high school math score Use gradient descent as the optimizer for single variable linear regression model End of explanation """
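The find_corr_x_y helper shown in the session transcript earlier is not listed in these slides; the following is a minimal Pearson-correlation sketch of my own (not necessarily the startup file's implementation) that reproduces the transcript's value:

```python
from math import sqrt

def find_corr_x_y(x, y):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)
```

With x = [1, 2, 3, 4] and y = [2, 4, 6.1, 7.9], this returns roughly 0.99954, matching the transcript.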
takashi-suehiro/rtmtools
rtc_handle_example/script/basic.ipynb
mit
#!/usr/bin/env python # -*- Python -*- import sys import time import subprocess """ Explanation: rtc_handle.py (basic) this ipynb shows a basic usage of rtc_handle.py precondition: rtcs (cin and cout) are prelaunched separately you can monitor the behavior of the system with openrtp you can access and control rtcs of OpenRTM-aist (written in any language, i.e., C++, Python, Java) by using rtc_handle.py End of explanation """ # # set up user environment # RtmToolsDir, MyRtcDir, etc. # # from set_env import * : you may provide a setup file like this # RtmToolsDir="../.." MyRtcDir=".." NS0="localhost:9876" """ Explanation: setup user environment path for rtc_handle path for rtcs and user tools nameservers you may provide a file (ex. set_env.py) for this. End of explanation """ # # import user tools # sys.path.append(".") save_path = sys.path[:] sys.path.append(RtmToolsDir+'/rtc_handle') from rtc_handle import * # from rtc_handle_util import * # sys.path.append(RtmToolsDir+'/embryonic_rtc') # from EmbryonicRtc import * sys.path = save_path # # import stub files # #import _GlobalIDL """ Explanation: import user tools path is modified temporarily to import tools stubs for rtc service ports might be imported (this will be explained in another example). End of explanation """ # # user program # # env = RtmEnv(sys.argv,[NS0]) """ Explanation: RtmEnv: rtm environment holder RtmEnv class object contains an orb, name-servers, rtcs, connectors and other rtm environment information. the second arg is a list of cos nameservers. End of explanation """
env.name_space[NS0].list_obj() env.name_space[NS0].obj_list env.name_space[NS0].rtc_handles """ Explanation: NameSpace NameSpace.list_obj() retrieves a list of corba objects in the nameserver and puts them into the NameSpace.obj_list dictionary. if an object is an rtc, its proxy object (RtcHandle) is created and put into the NameSpace.rtc_handles dictionary.
End of explanation """ cin=env.name_space[NS0].rtc_handles['cin0.rtc'] cout=env.name_space[NS0].rtc_handles['cout0.rtc'] """ Explanation: RtcHandle: proxy of rtc RtcHandle is a proxy class of rtc and the central object of this module. an rtc can be accessed by its object reference, and information about the rtc can be gathered through the reference. an RtcHandle object holds that information and proxies the rtc. assign rtc proxies to variables to ease access to rtc proxies, it may be a good idea to assign them to variables. End of explanation """ cout.activate() cout.deactivate() cout.activate() """ Explanation: activate and deactivate rtcs End of explanation """ cin.activate() cin.deactivate() """ Explanation: deactivation of some rtcs may fail. those rtcs may wait for resources (e.g., waiting for user input) in the onExecute loop. for example, End of explanation """ cin.activate() """ Explanation: but it usually recovers after the resource is available. please input something at the cin console. then, End of explanation """ cout.inports cout.inports['str_in'].open() cout.inports['str_in'].write('abc') cout.inports['str_in'].close() """ Explanation: direct access to Inports and Outports if the interface_type of the ports is corba_cdr, you can put data to inports and get data from outports by using rtc_handle.py. put data to inport End of explanation """ cin.outports cin.outports['str_out'].open() cin.outports['str_out'].con.prop_dict cin.outports['str_out'].read() cin.outports['str_out'].close() """ Explanation: get data from outport by connecting to the outport with 'dataport.dataflow_type' : 'pull' , you can get 'dataport.corba_cdr.outport_ref'. you can directly get the last data (if not yet consumed by other rtcs) through the ref. End of explanation """ con = IOConnector([cin.outports['str_out'], cout.inports['str_in']]) """ Explanation: IOConnector : connect and disconnect io-ports IOConnector contains information for connecting io-ports create a connector between cin.outports['str_out'] and cout.inports['str_in'] End of explanation """ con.def_prop """ Explanation: the default properties of the connector are as follows End of explanation """ con.connect() con.profile con.prop_dict """ Explanation: connect ports End of explanation """ con.disconnect() """ Explanation: disconnect ports End of explanation """ con = IOConnector([cin.outports['str_out'], cout.inports['str_in']], prop_dict={'dataport.inport.buffer.length': '8'}) con.prop_dict_req con.connect() con.prop_dict con.disconnect() """ Explanation: change properties you can change properties by giving prop_dict End of explanation """ b = IOConnector([cin.outports['str_out'], cout.inports['str_in']]) b.connect() con.connect() con.disconnect() """ Explanation: conflict of connections only one connection is permitted between the same pair of ports. so, if another connection already exists, you cannot control the connection through your own connector. for example, End of explanation """ con.connect(force=True) """ Explanation: you can handle this situation by forcing the connect/disconnect operation End of explanation """
akloster/table-cleaner
docs/source/Tutorial.ipynb
bsd-2-clause
import numpy as np import pandas as pd from IPython import display import table_cleaner as tc """ Explanation: Tutorial This tutorial will show you how to use the Table-Cleaner validation framework. First, let's import the necessary modules. My personal style is to abbreviate the scientific python libraries with two letters. This avoids namespace cluttering on the one hand, and is still reasonably short. End of explanation """ initial_df = pd.DataFrame(dict(name=["Alice", "Bob", "Wilhelm Alexander", 1, "Mary", "Andy"], email=["alice@example.com", "bob@example.com", "blub", 4, "mary@example.com", "andy k@example .com"], x=[0,3.2,"5","hello", -3,11,], y=[0.2,3.2,1.3,"hello",-3.0,11.0], active=["Y", None, "T", "false", "no", "T"] )) display.display(initial_df) """ Explanation: IPython.display provides us with the means to display Python objects in a "rich" way, especially useful for tables. Introduction Validating tabular data, especially from CSV or Excel files is a very common task in data science and even generic programming. Many times this data isn't "clean" enough for further processing. Writing custom code to transform or clean up this kind of data quickly gets out of hand. Table-Cleaner is a framework to generalize this cleaning process. Basic Example First, let's create a DataFrame with messy data. End of explanation """ initial_df.dtypes """ Explanation: This dataframe contains several columns. Some of the cells don't look much like the other cells in the same column. For Example we have numbers in the email and name columns and strings in the number columns. 
Looking at the dtypes assigned to the dataframe columns reveals a further issue with this mess: End of explanation """ class MyCleaner(tc.Cleaner): name = tc.String(min_length=2, max_length=10) email = tc.Email() x = tc.Int(min_value=0, max_value=10) y = tc.Float64(min_value=0, max_value=10) active = tc.Bool() """ Explanation: All columns are referred to as "object", which means they are saved as individual Python objects, rather than strings, integers or floats. This can make further processing inefficient, but also error prone, because different Python objects may not work with certain dataframe functionality. Let's define a cleaner: End of explanation """ cleaner = MyCleaner(initial_df) """ Explanation: Cleaner classes contain fields with validators. The tc.String validator validates every input to a string. Because most Python objects have some way of being represented as a string, this will more often work than not. Pretty much the only failure reason is if the string is encoded wrongly. Additionally, it can impose restrictions on minimum and maximum string length. The tc.Int instance tries to turn the input into integer objects. This usually only works with numbers, or strings which look like integers. Here, also, minimum and maximum values can be optionally specified. The cleaner object can now validate the input dataframe like this: End of explanation """ cleaner.cleaned cleaner.cleaned.dtypes """ Explanation: Instantiating the cleaner class with an input dataframe creates a cleaner instance with data and verdicts. The validation happens inside the constructor. End of explanation """ cleaner.verdicts """ Explanation: The DataFrame only contains completely valid rows, because the default behavior is to delete any rows containing an error. See below on how to use missing values instead. The datatypes for the "x" column is now int64 instead of object. "y" is now float64. Pandas uses the dtype system specified in numpy, and numpy references strings as "object". 
The main reason for this is that numeric data is usually stored in a contiguous way, meaning every value has the same "width" of bytes in memory. Strings, not so much. Their size varies. So arrays containing strings have to reference a string object with a pointer. Then the array of pointers is contiguous with a fixed number of bytes per pointer. The "active" column is validated as a boolean field. There is a dtype called bool, but it only allows True and False. If there are missing values, the column reverts to "object". To force the bool dtype, read the section about booleans below. So far, we have ensured only valid data is in the output table. But Table Cleaner can do more: The errors themselves can be treated as data: End of explanation """ errors = cleaner.verdicts[~cleaner.verdicts.valid] display.display(errors) """ Explanation: In this case there is only one row per cell, or one per row and column. Except for the last row, where there are two warnings/errors for the Email column. In the current set of built-in validators this arises very rarely. Just keep in mind not to sum the errors up naively and call it the "number of invalid data points". Let's filter the verdicts by validity: End of explanation """ errors.groupby(["column", "reason"])["counter",].count() """ Explanation: As this is an ordinary DataFrame, we can do all the known shenanigans to it, for example: End of explanation """ %%html <style> .tc-cell-invalid { background-color: #ff8080 } .tc-highlight { color: red; font-weight: bold; margin: 3px solid black; background-color: #b0b0b0; } .tc-green { background-color: #80ff80 } .tc-blue { background-color: #8080ff; } </style> """ Explanation: This functionality is the main reason why Table Cleaner was initially written. In reproducible data science, it is important not only to validate input data, but also be aware of, analyze and present the errors present in the data. Markup Frames Let's bring some color into our tables. 
First, define some CSS styles for the notebook, like so: End of explanation """ mdf = tc.MarkupFrame.from_validation(initial_df, cleaner.verdicts) mdf """ Explanation: The MarkupFrame class is subclassed from Pandas' DataFrame class and is used to manipulate and render cell-specific markup. It behaves almost exactly the same as a DataFrame. Caution: This functionality will soon be completely rewritten to have a simpler and cleaner API. It can be created from a validation like this: End of explanation """ mdf.x[1] += "tc-highlight" mdf.y += "tc-green" mdf.ix[0, :] += "tc-blue" mdf """ Explanation: Note that we put in the initial_df table, because the verdicts always relate to the original dataframe, not the output, which has possibly been altered and shortened during the validation process. Now watch this: End of explanation """ np.bool(None) """ Explanation: Booleans The trouble with Booleans Boolean values are either True or False. In Pandas, and data science in general, things are a bit more tricky. There is a third state, which Pandas would refer to as a missing value. Numpy's Bool dtype does not support missing values though. End of explanation """ bools = pd.Series([True, False, None, np.NaN]) bools """ Explanation: What's happening there is that many Python objects have a way of being interpreted as either True or False. An empty list, empty strings, and None, are all considered false, for example. Now, let's try that in Pandas: End of explanation """ bools.astype(bool) """ Explanation: The dtype is not "bool". Instead Pandas refers to the individual Python object, and thus dtype must be "object". We can make it bool, though: End of explanation """ original = pd.Series(range(3)) original[bools] """ Explanation: Notice how np.NaN, which is normally interpreted as a missing value, has been converted to True? 
If you try to index something with this sequence, this is what happens: End of explanation """ bools = [True, False, None, np.NaN] bool_df = pd.DataFrame(dict(a=bools, b=bools, c=bools, d=bools)) bool_df """ Explanation: Let's take a look at how to bring some sanity into this issue with Table Cleaner. First, define a messy DataFrame, with columns that are identical: End of explanation """ class BoolCleaner(tc.Cleaner): a = tc.Bool() b = tc.Bool(true_values=[True], false_values=[False], allow_nan=False) c = tc.Bool(true_values=[True], false_values=[False, None], allow_nan=False) d = tc.Bool(true_values=[True], false_values=[False, np.nan], nan_values=[None], allow_nan=False) bool_cleaner = BoolCleaner(bool_df) tc.MarkupFrame.from_validation(bool_df, bool_cleaner.verdicts) """ Explanation: Now create a cleaner which validates each column differently: End of explanation """ bool_cleaner.verdicts[~bool_cleaner.verdicts.valid] """ Explanation: Note that I used "delete=False" to keep rows with invalid data, while still converting available values. Then this dataframe has the same shape as MarkupFrame.from_validation expects. "allow_nan" defaults to True and controls whether or not missing values are considered an error. End of explanation """ messy_bools_column =["T","t","on","yes", "No", "F"] messy_bools = pd.DataFrame(dict(a=messy_bools_column, b=messy_bools_column)) class BoolCleaner2(tc.Cleaner): a=tc.Bool() b=tc.Bool(true_values=["T"], false_values=["F"], allow_nan=False) bool_cleaner2 = BoolCleaner2(messy_bools) tc.MarkupFrame.from_validation(messy_bools, bool_cleaner2.verdicts) """ Explanation: Tables coming from external sources, especially spreadsheet data is notorious for having all sorts of ways to indicate booleans or missing values. The Bool validator takes three arguments to handle these cases: true_values, false_values and nan_values. 
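The true_values/false_values/nan_values idea can be sketched as a stand-alone helper in plain Python. This is hypothetical code of mine, not Table-Cleaner's actual implementation; the key point it illustrates is that the decision must be made by explicit membership, never by truthiness, because NaN is truthy in plain Python:

```python
def coerce_bool(value, true_values=(True,), false_values=(False,),
                nan_values=(None,), allow_nan=True):
    # Decide by explicit membership, never by bool(value).
    if value in true_values:
        return True, None
    if value in false_values:
        return False, None
    if value in nan_values:
        if allow_nan:
            return None, None
        return None, 'missing value not allowed'
    return None, 'unrecognised boolean: %r' % (value,)

# Truthiness would get missing values wrong: NaN is truthy.
assert bool(float('nan')) is True
```

The second element of the returned tuple plays the role of a verdict: None when the value was understood, an error message otherwise.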
End of explanation """ messy_emails =["alice@example.com", "bob@bob.com", "chris", "delta@localhost", "ernest@hemmingway@ernest.org", "fridolin@dev_server"] email_df = pd.DataFrame(dict(email=messy_emails)) class EmailCleaner(tc.Cleaner): email = tc.Email() email_cleaner = EmailCleaner(email_df) tc.MarkupFrame.from_validation(email_df, email_cleaner.verdicts) """ Explanation: Email validation Email validation is a subject onto its own. Some frameworks offer validation by simple regular expressions, which sometimes isn't enough. Other libraries or programs go so far as to ask the corresponding mail server if it knows a particular address. In almost all generic usecases, you expect email names to adhere to a very specific form, meaning a username "at" a particular globally identifiable domain name. It is assumed that every computer in the world can resolve this domain name to the same physical server. Email standards and most eail servers however, don't require "fully qualified domain names" or even globally resolvable domains. "root@localhost" is a perfectly valid email address, but completely useless in most circumstances where you want to collect or use email addresses. TableCleaner's Email validator class is based on Django's validation method. End of explanation """
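Django's validator is considerably more involved; as an illustration of the fully-qualified-domain idea discussed above, here is a deliberately strict toy regex of my own (not Table-Cleaner's or Django's pattern):

```python
import re

# Requires user@domain.tld with at least one dot in the domain part,
# so bare hosts such as "delta@localhost" are rejected.
EMAIL_RE = re.compile(r'^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$')

def looks_like_email(address):
    return bool(EMAIL_RE.match(address))
```

Against the messy_emails list above, this accepts only the globally resolvable addresses and rejects the bare-host and double-@ entries.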
retnuh/deep-learning
embeddings/Skip-Gram_word2vec.ipynb
mit
import time import numpy as np import tensorflow as tf import utils from collections import Counter import random """ Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. 
Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() """ Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation """ words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) """ Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. 
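If you do want to roll your own, frequency-ordered lookup tables can be sketched like this (my own version; the real utils.create_lookup_tables may differ in details):

```python
from collections import Counter

def create_lookup_tables(words):
    # Most frequent word -> 0, next most frequent -> 1, and so on.
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(sorted_vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab
```

Sorting by descending count is what makes the most frequent token get integer 0, matching the convention described here.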
End of explanation """ vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] """ Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation """ counts = Counter(int_words) freqs = { i: counts[i]/len(words) for i in range(len(vocab_to_int)) } [counts[vocab_to_int['the']], len(words)] words[15] np.random.seed(666) ## Your code here def keep(freq, t=1e-5): x = np.sqrt(t/freq) r = np.random.random() return (x, r, r < x) def foo(pos): return [freqs[int_words[pos]], words[pos], keep(freqs[int_words[pos]])] for i in range(10): print(foo(i)) train_words = [ int_words[i] for i in range(len(int_words)) if keep(freqs[int_words[i]])[2] ] len(train_words) subsampled_count = Counter(train_words) subsampled_count[0] [int_to_vocab[2], subsampled_count[2], counts[2]] [np.max(train_words), len(vocab_to_int)] """ Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. 
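Under the formula above, the probability of keeping a word is sqrt(t / f(w)). A plain-Python sketch of my own (the notebook's keep function uses numpy instead; the injectable rng argument is a device of mine so the behaviour can be checked deterministically):

```python
import random
from math import sqrt

def p_keep(freq, threshold=1e-5):
    # P(discard w) = 1 - sqrt(t / f(w)), so P(keep w) = sqrt(t / f(w)).
    # Clamp at 1.0 for rare words, where the square root exceeds 1.
    return min(1.0, sqrt(threshold / freq))

def subsample(int_words, freqs, threshold=1e-5, rng=random.random):
    return [w for w in int_words if rng() < p_keep(freqs[w], threshold)]
```

Frequent words like "the" get a small keep probability, while rare words are always kept.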
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words. End of explanation """ def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' r = np.random.randint(window_size) + 1 return words[idx-r:idx]+words[idx+1:idx+1+r] """ Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
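A quick deterministic sanity check of the get_target above: with window_size=1 the random radius R is always 1, so exactly one word is taken from each side of the index.

```python
import numpy as np

def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    r = np.random.randint(window_size) + 1
    return words[idx-r:idx] + words[idx+1:idx+1+r]

words = list(range(10))
# window_size=1 forces r == 1, so we get the two immediate neighbors
assert get_target(words, 5, window_size=1) == [4, 6]
```

(Note that for idx < r the left slice words[idx-r:idx] quietly comes back shorter or empty, which is harmless here but worth knowing.)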
End of explanation """ def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y """ Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32.
The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation """ n_vocab = len(int_to_vocab) n_embedding = 200 with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], minval=-1, maxval=1)) embed = tf.nn.embedding_lookup(embedding, inputs) """ Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation """ [inputs.shape, embed.shape, labels.shape] # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], mean=0.0, stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, num_classes=len(vocab_to_int), num_true=1) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) """ Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer.
That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation """ with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints """ Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. 
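The similarity computed in the validation cell is plain cosine similarity against a row-normalized embedding table; here is the same idea in NumPy on a toy 4×2 table (the numbers are made up purely for illustration):

```python
import numpy as np

emb = np.array([[1.0, 0.0],    # word 0
                [0.9, 0.1],    # word 1: nearly parallel to word 0
                [0.0, 1.0],    # word 2: orthogonal
                [-1.0, 0.0]])  # word 3: opposite of word 0
norm = np.linalg.norm(emb, axis=1, keepdims=True)
normalized = emb / norm
sim = normalized @ normalized.T        # cosine similarity matrix
nearest_to_0 = (-sim[0]).argsort()[1]  # index 0 is the word itself, skip it
assert nearest_to_0 == 1
```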
End of explanation """ epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: ## From Thushan Ganegedara's implementation # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) """ Explanation: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. 
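To see the shape of what the training loop consumes, here is a miniature, deterministic run of the batching logic from earlier (window_size=1 so the random radius is always 1; the exact values below follow from that):

```python
import numpy as np

def get_target(words, idx, window_size=1):
    r = np.random.randint(window_size) + 1
    return words[idx-r:idx] + words[idx+1:idx+1+r]

def get_batches(words, batch_size, window_size=1):
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]  # only full batches
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx + batch_size]
        for ii in range(len(batch)):
            targets = get_target(batch, ii, window_size)
            y.extend(targets)
            x.extend([batch[ii]] * len(targets))  # one row per (input, target) pair
        yield x, y

x, y = next(get_batches(list(range(6)), batch_size=3))
assert x == [0, 1, 1, 2] and y == [1, 0, 2, 1]
```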
End of explanation """ with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) """ Explanation: Restore the trained network if you need to: End of explanation """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) """ Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation """
jacquerie/senato.py
cirinna.ipynb
mit
import os import re from itertools import combinations import xml.etree.ElementTree as ET from matplotlib import pyplot as plt from scipy.cluster.hierarchy import dendrogram, linkage %matplotlib inline DATA_FOLDER = 'data/cirinna' NAMESPACE = {'an': 'http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03'} ALPHANUM_REGEX = re.compile('[\W+]', re.UNICODE) """ Explanation: Automated Clustering of Similar Amendments The Italian Senate is clogged by computer-generated amendments. This notebook aims to cluster similar amendments in an automated fashion, so that the appropriate Senate procedures can be used to get rid of them in one sweep. We begin as usual with some imports, some Jupyter magic and some useful constants. End of explanation """ def to_tokens(s): return set(ALPHANUM_REGEX.sub(' ', s).lower().split()) def jaccard_distance(x, y): return 1 - (len(x['tokens'] & y['tokens']) / len(x['tokens'] | y['tokens'])) """ Explanation: The problem we want to solve is an unsupervised clustering in an unknown number of clusters. The usual algorithm used to solve it is some variation of hierarchical clustering combined with some heuristics to "cut" the resulting dendrogram at a certain height to produce the predicted clusters. All variations of hierarchical clustering require us to define some distance metric between elements. In our case, elements are free texts, so we use a distance related to Jaccard Similarity on the tokens of the text, where a token is a contiguous string of alphanumeric characters. 
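A quick check of the Jaccard distance defined above on two toy "amendments" that share three of five tokens:

```python
def jaccard_distance(x, y):
    return 1 - (len(x['tokens'] & y['tokens']) / len(x['tokens'] | y['tokens']))

x = {'tokens': {'sopprimere', 'il', 'comma', '1'}}
y = {'tokens': {'sopprimere', 'il', 'comma', '2'}}
# intersection = 3, union = 5 -> similarity 0.6, distance 0.4
assert abs(jaccard_distance(x, y) - 0.4) < 1e-9
```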
End of explanation """ amendments = [] for filename in sorted(os.listdir(DATA_FOLDER)): if filename.startswith('.'): continue tree = ET.parse(os.path.join(DATA_FOLDER, filename)) _id = tree.find('.//an:FRBRnumber', NAMESPACE).get('value') authors = [el.text for el in tree.findall('.//an:docProponent', NAMESPACE)] raw = ' '.join(tree.find('.//an:amendmentContent', NAMESPACE).itertext()) tokens = to_tokens(raw) amendments.append({'_id': _id, 'authors': authors, 'raw': raw, 'tokens': tokens}) """ Explanation: Using the XML data downloaded by the Scrapy spider, we build an array called amendments. Each element of the array is a dictionary whose structure is exemplified by the following: python { '_id': '1.100', 'authors': ['SACCONI', "D'ASCOLA", 'AIELLO', 'ALBERTINI', ..., 'DI BIAGIO'], 'raw': 'Sopprimere gli articoli da 1 a 10.', 'tokens': set(['1', '10', 'a', 'articoli', 'da', 'gli', 'sopprimere']) } End of explanation """ first_amendments = amendments[:100] first_distances = [jaccard_distance(x, y) for x, y in combinations(first_amendments, 2)] """ Explanation: To check if the algorithm is working correctly, we restrict ourselves to the first hundred amendments. End of explanation """ Z_first = linkage(first_distances, method='complete') plt.figure(figsize=(25, 50)) plt.title('Z_first') dendrogram( Z_first, orientation='right', leaf_font_size=12., ) plt.show() """ Explanation: We now compute an hierarchical clustering on these first hundred elements, and we visualize the results as a dendrogram. End of explanation """ for i in [77, 72, 68, 64, 60, 56, 52, 48, 92, 89, 84, 80, 96]: print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76])) """ Explanation: It appears that the algorithm found several clusters, highlighted by different colors. 
Let's inspect the last one: End of explanation """ for i in [78, 73, 69, 65, 61, 57, 53, 49, 93, 90, 85, 81]: print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76])) """ Explanation: We see that, in fact, all amendments of this cluster are variations of a single one. Let's now try with the second to last cluster: End of explanation """ for i in [6, 97]: print('{i}: {snippet}'.format(i=i, snippet=first_amendments[i]['raw'][:76])) """ Explanation: Again, all amendments in this cluster are variations of a single one. Moreover, they differ from the previous cluster for the addition of the last sentence, which is why the hierarchical clustering algorithm will eventually merge the two clusters. To double check, let's try with amendments 6 and 97, which are not part of the same cluster: End of explanation """ distances = [jaccard_distance(x, y) for x, y in combinations(amendments, 2)] Z_all = linkage(distances, method='complete') plt.figure(figsize=(25, 10)) plt.title('Z_all') dendrogram( Z_all, no_labels=True, ) plt.show() """ Explanation: It appears that, in fact, the text of these two amendments is significantly different. Finally, let's run the algorithm on all amendments at once. End of explanation """
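The notebook stops at the dendrograms; to actually extract flat clusters one would "cut" the linkage at a distance threshold, e.g. with SciPy's fcluster (the threshold 0.5 below is an illustrative choice, not from the original analysis):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# toy data: two near-duplicate points and one outlier
points = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
Z = linkage(pdist(points), method='complete')
labels = fcluster(Z, t=0.5, criterion='distance')  # cut at distance 0.5
assert labels[0] == labels[1] and labels[0] != labels[2]
```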
crystalzhaizhai/cs207_yi_zhai
homeworks/HW6/HW6_finished.ipynb
mit
from enum import Enum class AccountType(Enum): SAVINGS = 1 CHECKING = 2 """ Explanation: Homework 6 Due: Tuesday, October 10 at 11:59 PM Problem 1: Bank Account Revisited We are going to rewrite the bank account closure problem we had a few assignments ago, only this time developing a formal class for a Bank User and Bank Account to use in our closure (recall previously we just had a nonlocal variable amount that we changed). Some Preliminaries: First we are going to define two types of bank accounts. Use the code below to do this: End of explanation """ AccountType.SAVINGS """ Explanation: An Enum stands for an enumeration, it's a convenient way for you to define lists of things. Typing: End of explanation """ AccountType.SAVINGS == AccountType.SAVINGS AccountType.SAVINGS == AccountType.CHECKING """ Explanation: returns a Python representation of an enumeration. You can compare these account types: End of explanation """ AccountType.SAVINGS.name """ Explanation: To get a string representation of an Enum, you can use: End of explanation """ class BankAccount(): def __init__(self,owner,accountType): self.owner=owner self.accountType=accountType self.balance=0 def withdraw(self,amount): if amount<0: raise ValueError("amount<0") if self.balance<amount: raise ValueError("withdraw more than balance") self.balance-=amount def deposit(self,amount): if amount<0: raise ValueError("amount<0") self.balance+=amount def __str__(self): return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name) def __len__(self): return self.balance myaccount=BankAccount("zhaizhai",AccountType.CHECKING) print(myaccount.balance) """ Explanation: Part 1: Create a BankAccount class with the following specification: Constructor is BankAccount(self, owner, accountType) where owner is a string representing the name of the account owner and accountType is one of the AccountType enums Methods withdraw(self, amount) and deposit(self, amount) to modify the account balance of the account 
Override methods __str__ to write an informative string of the account owner and the type of account, and __len__ to return the balance of the account End of explanation """ class BankUser(): def __init__(self,owner): self.owner=owner self.SavingAccount=None self.CheckingAccount=None def addAccount(self,accountType): if accountType==AccountType.SAVINGS: if self.SavingAccount==None: self.SavingAccount=BankAccount(self.owner,accountType) else: print("more than one saving account!") raise AttributeError("more than one saving account!") elif accountType==AccountType.CHECKING: if self.CheckingAccount==None: self.CheckingAccount=BankAccount(self.owner,accountType) else: print("more than one checking account!") raise AttributeError("more than one checking account!") else: print("no such account type!") raise ValueError("no such account type!") def getBalance(self,accountType): if accountType==AccountType.SAVINGS: if self.SavingAccount==None: print("saving account not exist") raise AttributeError("saving account not exist") else: return self.SavingAccount.balance elif accountType==AccountType.CHECKING: if self.CheckingAccount==None: print("checking account not exist") raise AttributeError("checking account not exist") else: return self.CheckingAccount.balance else: print("no such account type!") raise AttributeError("no such account type!") def deposit(self,accountType,amount): if accountType==AccountType.SAVINGS: if self.SavingAccount==None: print("saving account not exist") raise AttributeError("saving account not exist") else: return self.SavingAccount.deposit(amount) elif accountType==AccountType.CHECKING: if self.CheckingAccount==None: print("checking account not exist") raise AttributeError("checking account not exist") else: return self.CheckingAccount.deposit(amount) else: print("no such account type!") raise AttributeError("no such account type!") def withdraw(self,accountType,amount): if accountType==AccountType.SAVINGS: if self.SavingAccount==None: print("saving 
account not exist") raise AttributeError("saving account not exist") else: return self.SavingAccount.withdraw(amount) elif accountType==AccountType.CHECKING: if self.CheckingAccount==None: print("checking account not exist") raise AttributeError("checking account not exist") else: return self.CheckingAccount.withdraw(amount) else: print("no such account type!") raise AttributeError("no such account type!") def __str__(self): s="owner:{!s}".format(self.owner) if self.SavingAccount!=None: s=s+"account type: Saving balance:{:.2f}".format(self.SavingAccount.balance) if self.CheckingAccount!=None: s=s+"account type: Checking balance:{:.2f}".format(self.CheckingAccount.balance) return s newuser=BankUser("zhaizhai") print(newuser) newuser.addAccount(AccountType.SAVINGS) print(newuser) newuser.deposit(AccountType.SAVINGS,2) newuser.withdraw(AccountType.SAVINGS,1) print(newuser) newuser.withdraw(AccountType.CHECKING,1) """ Explanation: Part 2: Write a class BankUser with the following specification: Constructor BankUser(self, owner) where owner is the name of the account. Method addAccount(self, accountType) - to start, a user will have no accounts when the BankUser object is created. addAccount will add a new account to the user of the accountType specified. Only one savings/checking account per user, return appropriate error otherwise Methods getBalance(self, accountType), deposit(self, accountType, amount), and withdraw(self, accountType, amount) for a specific AccountType. Override __str__ to have an informative summary of user's accounts. 
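As a compact, self-contained sketch of the kind of edge-case tests the exercise asks for (MiniAccount is a stand-in I define here, not part of the assignment): overdrafts and negative deposits should raise ValueError.

```python
class MiniAccount:
    # Minimal account with the same validation rules as BankAccount above.
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        if amount < 0:
            raise ValueError("amount<0")
        self.balance += amount
    def withdraw(self, amount):
        if amount < 0:
            raise ValueError("amount<0")
        if amount > self.balance:
            raise ValueError("withdraw more than balance")
        self.balance -= amount

acct = MiniAccount()
acct.deposit(100)
acct.withdraw(40)
assert acct.balance == 60
try:
    acct.withdraw(1000)  # overdraft must fail
    assert False
except ValueError:
    pass
```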
End of explanation """ def ATMSession(bankUser): def Interface(): option1=input("Enter Options:\ 1)Exit\ 2)Creat Account\ 3)Check Balance\ 4)Deposit\ 5)Withdraw") if option1=="1": Interface() return option2=input("Enter Options:\ 1)Checking\ 2)Saving") if option1=="2": if option2=="1": bankUser.addAccount(AccountType.CHECKING) Interface() return elif option2=="2": bankUser.addAccount(AccountType.SAVINGS) Interface() return else: print("no such account type") raise AttributeError("no such account type") if option1=="3": if option2=="1": print(bankUser.getBalance(AccountType.CHECKING)) Interface() return elif option2=="2": print(bankUser.getBalance(AccountType.SAVINGS)) Interface() return else: print("no such account type") raise AttributeError("no such account type") if option1=="4": option3=input("Enter Interger Amount, Cannot be Negative:") if option2=="1": bankUser.deposit(AccountType.CHECKING,int(option3)) Interface() return elif option2=="2": bankUser.deposit(AccountType.SAVINGS,int(option3)) Interface() return else: print("no such account type") raise AttributeError("no such account type") if option1=="5": option3=input("Enter Interger Amount, Cannot be Negative:") if option2=="1": bankUser.withdraw(AccountType.CHECKING,int(option3)) Interface() return elif option2=="2": bankUser.withdraw(AccountType.SAVINGS,int(option3)) Interface() return else: print("no such account type") raise AttributeError("no such account type") print("no such operation") raise AttributeError("no such operation") return Interface myATM=ATMSession(newuser) myATM() print(newuser) """ Explanation: Write some simple tests to make sure this is working. Think of edge scenarios a user might try to do. Part 3: ATM Closure Finally, we are going to rewrite a closure to use our bank account. We will make use of the input function which takes user input to decide what actions to take. Write a closure called ATMSession(bankUser) which takes in a BankUser object. 
Return a method called Interface that when called, would provide the following interface: First screen for user will look like: Enter Option: 1)Exit 2)Create Account 3)Check Balance 4)Deposit 5)Withdraw Pressing 1 will exit, any other option will show the options: Enter Option: 1)Checking 2)Savings If a deposit or withdraw was chosen, then there must be a third screen: Enter Integer Amount, Cannot Be Negative: This is to keep the code relatively simple, if you'd like you can also curate the options depending on the BankUser object (for example, if user has no accounts then only show the Create Account option), but this is up to you. In any case, you must handle any input from the user in a reasonable way that an actual bank would be okay with, and give the user a proper response to the action specified. Upon finishing a transaction or viewing balance, it should go back to the original screen End of explanation """ %%file bank.py from enum import Enum class AccountType(Enum): SAVINGS = 1 CHECKING = 2 class BankAccount(): def __init__(self,owner,accountType): self.owner=owner self.accountType=accountType self.balance=0 def withdraw(self,amount): if type(amount)!=int: raise ValueError("not integer amount") if amount<0: raise ValueError("amount<0") if self.balance<amount: raise ValueError("withdraw more than balance") self.balance-=amount def deposit(self,amount): if type(amount)!=int: raise ValueError("not integer amount") if amount<0: raise ValueError("amount<0") self.balance+=amount def __str__(self): return "owner:{!s} account type:{!s}".format(self.owner,self.accountType.name) def __len__(self): return self.balance def ATMSession(bankUser): def Interface(): option1=input("Enter Options:\ 1)Exit\ 2)Creat Account\ 3)Check Balance\ 4)Deposit\ 5)Withdraw") if option1=="1": return option2=input("Enter Options:\ 1)Checking\ 2)Saving") if option1=="2": if option2=="1": bankUser.addAccount(AccountType.CHECKING) return elif option2=="2": 
bankUser.addAccount(AccountType.SAVINGS) return else: print("no such account type") raise AttributeError("no such account type") if option1=="3": if option2=="1": print(bankUser.getBalance(AccountType.CHECKING)) return elif option2=="2": print(bankUser.getBalance(AccountType.SAVINGS)) return else: print("no such account type") raise AttributeError("no such account type") if option1=="4": option3=input("Enter Interger Amount, Cannot be Negative:") if option2=="1": bankUser.deposit(AccountType.CHECKING,int(option3)) return elif option2=="2": bankUser.deposit(AccountType.SAVINGS,int(option3)) return else: print("no such account type") raise AttributeError("no such account type") if option1=="5": option3=input("Enter Interger Amount, Cannot be Negative:") if option2=="1": bankUser.withdraw(AccountType.CHECKING,int(option3)) return elif option2=="2": bankUser.withdraw(AccountType.SAVINGS,int(option3)) return else: print("no such account type") raise AttributeError("no such account type") print("no such operation") raise AttributeError("no such operation") return Interface """ Explanation: Part 4: Put everything in a module Bank.py We will be grading this problem with a test suite. Put the enum, classes, and closure in a single file named Bank.py. It is very important that the class and method specifications we provided are used (with the same capitalization), otherwise you will receive no credit. 
End of explanation """ class Regression(): def __init__(self,X,y): self.X=X self.y=y self.alpha=0.1 def fit(self,X,y): return def get_params(self): return self.beta def predict(self,X): import numpy as np return np.dot(X,self.beta) def score(self,X,y): return 1-np.sum((y-self.predict(X))**2)/np.sum((y-np.mean(y))**2) def set_params(self,alpha): self.alpha=alpha """ Explanation: Problem 2: Linear Regression Class Let's say you want to create Python classes for three related types of linear regression: Ordinary Least Squares Linear Regression, Ridge Regression, and Lasso Regression. Consider the multivariate linear model: $$y = X\beta + \epsilon$$ where $y$ is a length $n$ vector, $X$ is an $n \times p$ matrix, and $\beta$ is a $p$ length vector of coefficients. Ordinary Least Squares Linear Regression OLS Regression seeks to minimize the following cost function: $$\|y - X\beta\|^{2}$$ The best fit coefficients can be obtained by: $$\hat{\beta} = (X^T X)^{-1}X^Ty$$ where $X^T$ is the transpose of the matrix $X$ and $(X^T X)^{-1}$ is the inverse of the matrix $X^T X$. Ridge Regression Ridge Regression introduces an L2 regularization term to the cost function: $$\|y - X\beta\|^{2}+\|\Gamma \beta \|^{2}$$ Where $\Gamma = \alpha I$ for some constant $\alpha$ and the identity matrix $I$. The best fit coefficients can be obtained by: $$\hat{\beta} = (X^T X+\Gamma^T\Gamma)^{-1}X^Ty$$ Lasso Regression Lasso Regression introduces an L1 regularization term and restricts the total number of predictor variables in the model. The following cost function: $$\min_{\beta_0, \beta} \left\{ \frac{1}{n} \left\| y - \beta_0 - X\beta \right\|_2^2 \right\} \text{ subject to } \|\beta\|_1 \leq \alpha.$$ does not have a nice closed form solution. For the sake of this exercise, you may use the sklearn.linear_model.Lasso class, which uses a coordinate descent algorithm to find the best fit.
You should only use the class in the fit() method of this exercise (i.e. do not re-use sklearn for other methods in your class). $R^2$ score The $R^2$ score is defined as: $${R^{2} = {1-{SS_E \over SS_T}}}$$ Where: $$SS_T=\sum_i (y_i-\bar{y})^2, SS_R=\sum_i (\hat{y_i}-\bar{y})^2, SS_E=\sum_i (y_i - \hat{y_i})^2$$ where ${y_i}$ are the original data values, $\hat{y_i}$ are the predicted values, and $\bar{y}$ is the mean of the original data values. Part 1: Base Class Write a class called Regression with the following methods: $fit(X, y)$: Fits linear model to $X$ and $y$. $get_params()$: Returns $\hat{\beta}$ for the fitted model. The parameters should be stored in a dictionary. $predict(X)$: Predict new values with the fitted model given $X$. $score(X, y)$: Returns $R^2$ value of the fitted model. $set_params()$: Manually set the parameters of the linear model. This parent class should throw a NotImplementedError for methods that are intended to be implemented by subclasses. End of explanation """ class OLSRegression(Regression): def fit(self): import numpy as np X=self.X y=self.y self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y) ols1=OLSRegression([[2],[3]],[[1],[2]]) ols1.fit() ols1.predict([[2],[3]]) X=[[2],[3]] y=[[1],[2]] beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)),np.transpose(X)),y) """ Explanation: Part 2: OLS Linear Regression Write a class called OLSRegression that implements the OLS Regression model described above and inherits the Regression class.
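A quick numerical check of the normal-equation formula $\hat{\beta} = (X^T X)^{-1}X^Ty$: on noiseless synthetic data it should recover the true coefficients exactly.

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
true_beta = np.array([2.0, -3.0])
y = X @ true_beta                              # noiseless targets
beta_hat = np.linalg.pinv(X.T @ X) @ X.T @ y   # normal equations
assert np.allclose(beta_hat, true_beta)
```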
End of explanation """ class RidgeRegression(Regression): def fit(self): import numpy as np X=self.X y=self.y self.beta=np.dot(np.dot(np.linalg.pinv(np.dot(np.transpose(X),X)+self.alpha**2),np.transpose(X)),y) return ridge1=RidgeRegression([[2],[3]],[[1],[2]]) ridge1.fit() ridge1.predict([[2],[3]]) ridge1.score([[2],[3]],[[1],[2]]) """ Explanation: Part 3: Ridge Regression Write a class called RidgeRegression that implements Ridge Regression and inherits the OLSRegression class. End of explanation """ class LassoRegression(Regression): def fit(self): from sklearn.linear_model import Lasso myLs=Lasso(self.alpha) myLs.fit(self.X,self.y) self.beta=myLs.coef_.reshape((-1,1)) self.beta0=myLs.intercept_ return def predict(self,X): import numpy as np return np.dot(X,self.beta)+self.beta0 lasso1=LassoRegression([[2],[3]],[[1],[2]]) lasso1.fit() lasso1.predict([[2],[3]]) lasso1.score([[2],[3]],[[1],[2]]) from sklearn.linear_model import Lasso myLs=Lasso(alpha=0.1) myLs.fit([[2],[3]],[[1],[1]]) beta=np.array(myLs.coef_) print(beta.reshape((-1,1))) beta0=myLs.intercept_ print(beta0) """ Explanation: Part 3: Lasso Regression Write a class called LassoRegression that implements Lasso Regression and inherits the OLSRegression class. You should only use Lasso(), Lasso.fit(), Lasso.coef_, and Lasso._intercept from the sklearn.linear_model.Lasso class. 
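Stepping back to the ridge formula $(X^T X+\Gamma^T\Gamma)^{-1}X^Ty$ with $\Gamma = \alpha I$: a one-feature sketch showing that $\alpha = 0$ reduces to OLS and that a larger $\alpha$ shrinks the coefficient (alpha is squared here to mirror the RidgeRegression class above, which uses self.alpha**2):

```python
import numpy as np

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])  # exact linear relation, beta = 2

def ridge_beta(alpha):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + (alpha ** 2) * np.eye(p), X.T @ y)

assert np.allclose(ridge_beta(0.0), [2.0])                 # alpha=0 is OLS
assert abs(ridge_beta(10.0)[0]) < abs(ridge_beta(0.0)[0])  # shrinkage
```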
End of explanation """ from sklearn.datasets import load_boston from sklearn.model_selection import KFold from sklearn.metrics import r2_score import statsmodels.api as sm import numpy as np boston=load_boston() boston_x=boston.data boston_y=boston.target kf=KFold(n_splits=2) kf.get_n_splits(boston) ols1_m=0 ridge1_m=0 lasso1_m=0 for train_index, test_index in kf.split(boston_x): X_train, X_test = boston_x[train_index], boston_x[test_index] y_train, y_test = boston_y[train_index], boston_y[test_index] y_train=y_train.reshape(-1,1) y_test=y_test.reshape(-1,1) ols1=OLSRegression(sm.add_constant(X_train),y_train) ols1.fit() ols1_m+=ols1.score(sm.add_constant(X_test),y_test) print("OLS score:",ols1.score(sm.add_constant(X_test),y_test)) ridge1=RidgeRegression(sm.add_constant(X_train),y_train) ridge1.fit() ridge1_m+=ridge1.score(sm.add_constant(X_test),y_test) print("ridge score:",ridge1.score(sm.add_constant(X_test),y_test)) lasso1=LassoRegression(X_train,y_train) lasso1.fit() lasso1_m+=lasso1.score(X_test,y_test) print("lasso score:",lasso1.score(X_test,y_test)) break print(ols1_m,ridge1_m,lasso1_m) ols1.get_params() """ Explanation: Part 4: Model Scoring You will use the Boston dataset for this part. Instantiate each of the three models above. Using a for loop, fit (on the training data) and score (on the testing data) each model on the Boston dataset. Print out the $R^2$ value for each model and the parameters for the best model using the get_params() method. Use an $\alpha$ value of 0.1. Hint: You can consider using the sklearn.model_selection.train_test_split method to create the training and test datasets. 
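The $R^2$ used for scoring, written out directly from its definition as a sanity check:

```python
import numpy as np

def r2_score_manual(y, y_hat):
    ss_e = np.sum((y - y_hat) ** 2)       # residual sum of squares
    ss_t = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    return 1 - ss_e / ss_t

y = np.array([1.0, 2.0, 3.0])
assert r2_score_manual(y, y) == 1.0                     # perfect fit
assert r2_score_manual(y, np.full(3, y.mean())) == 0.0  # mean-only model
```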
End of explanation """ ols_r=[] ridge_r=[] lasso_r=[] alpha_l=[] for alpha_100 in range(5,100,5): alpha=alpha_100/100 alpha_l.append(alpha) for train_index, test_index in kf.split(boston_x): X_train, X_test = boston_x[train_index], boston_x[test_index] y_train, y_test = boston_y[train_index], boston_y[test_index] y_train=y_train.reshape(-1,1) y_test=y_test.reshape(-1,1) ols1=OLSRegression(sm.add_constant(X_train),y_train) ols1.set_params(alpha) ols1.fit() ols_r.append(ols1.score(sm.add_constant(X_test),y_test)) ridge1=RidgeRegression(sm.add_constant(X_train),y_train) ridge1.set_params(alpha) ridge1.fit() ridge_r.append(ridge1.score(sm.add_constant(X_test),y_test)) lasso1=LassoRegression(X_train,y_train) lasso1.set_params(alpha) lasso1.fit() lasso_r.append(lasso1.score(X_test,y_test)) break import matplotlib.pyplot as plt plt.plot(alpha_l,ols_r,label="linear regression") plt.plot(alpha_l,ridge_r,label="ridge") plt.plot(alpha_l,lasso_r,label="lasso") plt.xlabel("alpha") plt.ylabel("$R^{2}$") plt.title("the relation of R squared with alpha") plt.legend() plt.show() """ Explanation: Part 5: Visualize Model Performance We can evaluate how the models perform for various values of $\alpha$. Calculate the $R^2$ scores for each model for $\alpha \in [0.05, 1]$ and plot the three lines on the same graph. To change the parameters, use the set_params() method. Be sure to label each line and add axis labels. End of explanation """
google/applied-machine-learning-intensive
content/04_classification/06_images_and_video/02-video_processing.ipynb
apache-2.0
# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/04_classification/06_images_and_video/02-video_processing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> Copyright 2020 Google LLC. End of explanation """ import cv2 as cv cars_video = cv.VideoCapture('cars.mp4') """ Explanation: Video Processing In this lesson we will process video data using the OpenCV Python library. Obtain a Video Let's start by uploading the smallest version of this video to the Colab. Rename the video to cars.mp4 or change the name of the video in the code below. Reading the Video OpenCV is an open source library for performing computer vision tasks. One of these tasks is reading and writing video frames. To read the cars.mp4 video file, we use the VideoCapture class. End of explanation """ height = int(cars_video.get(cv.CAP_PROP_FRAME_HEIGHT)) width = int(cars_video.get(cv.CAP_PROP_FRAME_WIDTH)) fps = cars_video.get(cv.CAP_PROP_FPS) total_frames = int(cars_video.get(cv.CAP_PROP_FRAME_COUNT)) print(f'height: {height}') print(f'width: {width}') print(f'frames per second: {fps}') print(f'total frames: {total_frames}') print(f'video length (seconds): {total_frames / fps}') """ Explanation: Once you have created a VideoCapture object, you can obtain information about the video that you are processing. 
End of explanation """ cars_video.release() """ Explanation: When you are done processing a video file, it is a good idea to release the VideoCapture to free up memory in your program. End of explanation """ cars_video = cv.VideoCapture('cars.mp4') total_frames = int(cars_video.get(cv.CAP_PROP_FRAME_COUNT)) frames_read = 0 for current_frame in range(0, total_frames): cars_video.set(cv.CAP_PROP_POS_FRAMES, current_frame) ret, _ = cars_video.read() if not ret: raise Exception(f'Problem reading frame {current_frame} from video') if (current_frame+1) % 50 == 0: print(f'Read {current_frame+1} frames so far') cars_video.release() print(f'Read {total_frames} frames') """ Explanation: We can now loop through the video frame by frame. To do this we need to know the total number of frames in the video. For each frame we set the current frame position and then read that frame. This causes the frame to be loaded from disk into memory. This is done because videos can be enormous in size, so we don't necessarily want the entire thing in memory. You might also notice that we read the frame from the car's video, and then we check the return value to make sure that the read was successful. This is because the underlying video processing library is written in the C++ programming language, and a common practice in that language is to return a status code indicating if a function succeeds or not. This isn't very idiomatic in Python; it is just the underlying library's style leaking through into the Python wrapper. End of explanation """ import matplotlib.pyplot as plt cars_video = cv.VideoCapture('cars.mp4') cars_video.set(cv.CAP_PROP_POS_FRAMES, 123) ret, frame = cars_video.read() if not ret: raise Exception(f'Problem reading frame {current_frame} from video') cars_video.release() plt.imshow(frame) """ Explanation: That code took a while to execute. The video is just over a minute long, and it takes a while to iterate over every frame. 
Consider the amount of time it would take to perform object recognition on each frame. In practice you will be doing this kind of processing on a much bigger machine, or machines, than Colab provides for free. You can also process many frames in parallel. For our purposes, let's just make the video shorter. We'll load the video one more time, and then we'll read out a single frame to illustrate that the frame is just an image. End of explanation """ input_video = cv.VideoCapture('cars.mp4') height = int(input_video.get(cv.CAP_PROP_FRAME_HEIGHT)) width = int(input_video.get(cv.CAP_PROP_FRAME_WIDTH)) fps = input_video.get(cv.CAP_PROP_FPS) """ Explanation: Writing a Video OpenCV also supports writing video data. Let's loop through the long video that we have and save only one second of it into a new file. First we need to open our input video and get information about the frame rate, height, and width. End of explanation """ fourcc = cv.VideoWriter_fourcc(*'mp4v') output_video = cv.VideoWriter('cars-short.mp4', fourcc, fps, (width, height)) """ Explanation: Using that information we can create a VideoWriter that we'll use to write the shorter video. Video can be encoded using many different formats. In order to tell OpenCV which format to use, we choose a "four character code" from fourcc. In this case we use "mp4v" to keep our input and output files consistent. End of explanation """ for i in range(0, int(fps)): input_video.set(cv.CAP_PROP_POS_FRAMES, i) ret, frame = input_video.read() if not ret: raise Exception("Problem reading frame", i, " from video") output_video.write(frame) """ Explanation: Now we can loop through one second of video frames and write each frame to our output video. End of explanation """ input_video.release() output_video.release() """ Explanation: Once processing is complete, be sure to release the video objects from memory. 
End of explanation """ import os os.listdir('./') """ Explanation: And now we can list the directory to see if our new file was created. End of explanation """ # Your code goes here """ Explanation: You should now see a cars-short.mp4 file in your file browser in Colab. Download and view the video to make sure that it only lasts for a second. Notice we have only concerned ourselves with the visual portion of the video. Videos contain both visual and auditory elements. OpenCV is only concerned with computer vision, so it doesn't handle audio processing. Exercises Exercise 1 Above we shortened our video to 1 second by simply grabbing the first second of frames from the video file. Since not much typically changes from frame to frame within a second of video, a better video processing technique is to sample frames throughout the entire video and skip some frames. For example, it might be more beneficial to process every 10th frame or only process 1 of the frames in every second of video. In this exercise, take the original cars video used in this Colab and reduce it to a short 25-fps (frames per second) video by grabbing the first frame of every second of video. Save the video as cars-sampled.mp4. Student Solution End of explanation """
darshanbagul/ComputerVision
FourierTransforms/FourierTransforms.ipynb
gpl-3.0
%matplotlib inline import cv2 import numpy as np import matplotlib.pyplot as plt import cmath """ Explanation: Fourier Transform Problem 1. In this problem, given an image we perform the following tasks: 1. Compute its Fourier Transform 2. Try to reconstruct the image by applying Inverse Fourier Transform to the Fourier Image. 3. Inspect the changes in image due to above operations. Let us begin by first importing the necessary libraries and defining a few helper functions, before we approach the solutions. End of explanation """ # Function to perform 2-D Discrete Fourier transform on a given image(matrix) of size M x N. def two_dim_fourier_transform(img): M, N = img.shape[:2] dft_rep = [[0.0 for k in range(M)] for l in range(N)] for k in range(M): for l in range(N): temp_sum = 0.0 for m in range(M): for n in range(N): e = cmath.exp(- 1j * 2 * cmath.pi * (float(k * m) / M + float(l * n) / N)) temp_sum += img[m][n] * e dft_rep[l][k] = temp_sum return dft_rep # Function to perform 2-D Inverse Fourier Transform on a given Fourier image matrix. def two_dim_inv_fourier_transform(fourier): M = len(fourier) N = len(fourier[0]) idft_rep = [[0.0 for k in range(M)] for l in range(N)] for k in range(M): for l in range(N): temp_sum = 0.0 for m in range(M): for n in range(N): e = cmath.exp(1j * 2 * cmath.pi * (float(k * m) / M + float(l * n) / N)) temp_sum += fourier[m][n] * e idft_rep[l][k] = temp_sum/(M*N) return idft_rep # Function to centre the obtained Fourier transform image. 
Since Fourier transform is periodic, we can represent # by relocating the centre of the image for better representation.(Analogous to fftshift) def img_preprocess_centred_fourier(img): M, N = img.shape[:2] processed_img = np.zeros((M,N)) for x in range(M/2): for y in range(N/2): processed_img[x][y] = img[M/2 + x][N/2 + y] for x in range(M/2+1, M): for y in range(N/2+1, N): processed_img[x][y] = img[x - M/2 ][y - N/2] for x in range(M/2+1, M): for y in range(N/2): processed_img[x][y] = img[x - M/2][y + N/2] for x in range(M/2): for y in range(N/2+1, N/2): processed_img[x][y] = img[x + M/2][y - N/2] return processed_img # If we have an odd sized image, we convert it into an even sized image. def preprocess_odd_images(img): M, N = img.shape[:2] if M % 2 == 1 and N % 2 == 1: return img[1:][1:] elif M%2 == 1: return img[1:][:] elif N%2 == 1: return img[:][1:] else: return img # Function to plot a labeled image def plot_input(img, title): plt.imshow(img, cmap = 'gray') plt.title(title), plt.xticks([]), plt.yticks([]) plt.show() # Function to compute the Mean Squared Error between two input matrices. def min_sqr_err(mat1, mat2): min_sq = 0 M, N = mat1.shape[:2] for i in range(M): for j in range(N): min_sq += np.square(mat1[i][j] - mat2[i][j]) return min_sq """ Explanation: The above libraries are necessary for reading image, performing array and matrix operations, visualising the output plots and for handling complex numbers generated by the fourier transform. Next, we shall define all the necessary functions for solving the given problem. End of explanation """ img = cv2.imread('./fb_test.jpg', 0) img = preprocess_odd_images(img) plot_input(img,'Input image') # Details of the input image M, N = img.shape[:2] print M,N """ Explanation: Let us begin the solution by loading an input image. Since the computation of Fourier transform is extremely memory intensive, we choose a very small image (40 x 40 pixels) as input testing image. 
End of explanation """ processed_img = img_preprocess_centred_fourier(img) dft_rep = two_dim_fourier_transform(img) centred_dft_rep = two_dim_fourier_transform(processed_img) # Visualizing the Fourier Transform in Logarithmic Scale plot_input(10*np.log(1+np.abs(centred_dft_rep)), 'Fourier Transform') """ Explanation: Part A. Computing Fourier Transform End of explanation """ idft_rep = two_dim_inv_fourier_transform(dft_rep) plot_input(np.abs(idft_rep), 'Inverse Fourier Image') """ Explanation: As we see, above is the fourier transform of the input image. Now let us try and see if we can reconstruct the original image, by performing Inverse fourier transform on the above obtained image in the Fourier domain. Part B. Inverse Fourier Transform End of explanation """ min_sq_error = min_sqr_err(img, np.abs(idft_rep)) print "Mean squared Error between the reconstructed and original image: ", min_sq_error """ Explanation: As we see, the image obtained by performing the inverse fourier transform of the obtained fourier image, is very similar to the input image. But let us compute the Mean Squared Error(MSE) between the original image and the reconstructed image, to verify that the images are identical or not. Part C. Computing the Mean Squared Error(MSE) End of explanation """
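The hand-rolled transforms above run in O(M^2 N^2) time, which is why even a 40 x 40 image is slow. A quick way to gain confidence in such an implementation is to compare a direct DFT against numpy's FFT on a small array. The sketch below is self-contained and writes the result at the standard [k][l] position so it can be compared element-for-element with np.fft.fft2 (note that two_dim_fourier_transform above stores its result transposed, at [l][k]).

```python
import numpy as np

def dft2_naive(img):
    # Direct O(M^2 N^2) 2-D DFT, using the same exponential as the notebook's loops
    M, N = img.shape
    out = np.zeros((M, N), dtype=complex)
    for k in range(M):
        for l in range(N):
            for m in range(M):
                for n in range(N):
                    out[k, l] += img[m, n] * np.exp(-2j * np.pi * (k * m / M + l * n / N))
    return out

rng = np.random.RandomState(0)
small = rng.rand(6, 6)
assert np.allclose(dft2_naive(small), np.fft.fft2(small))
print('naive DFT matches np.fft.fft2')
```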
zzsza/Datascience_School
11. 기초 확률론4 - 상관 관계/02. 확률 밀도 함수의 독립.ipynb
mit
np.set_printoptions(precision=4) pmf1 = np.array([[0, 1, 2, 3, 2, 1], [0, 2, 4, 6, 4, 2], [0, 4, 8,12, 8, 4], [0, 2, 4, 6, 4, 2], [0, 1, 2, 3, 2, 1]]) pmf1 = pmf1/pmf1.sum() pmf1 sns.heatmap(pmf1) plt.xlabel("x") plt.ylabel("y") plt.title("Joint Probability (Independent)") plt.show() """ Explanation: Independence of probability density functions If the joint probability density function (joint pdf) of two random variables $X$ and $Y$ can be written as the product of their marginal probability density functions (marginal pdfs), the two random variables are said to be independent. $$ f_{XY}(x, y) = f_X(x)f_Y(y) $$ As this expression shows, in the independent case the joint pdf is determined by the individual marginal pdfs alone. That is, whatever value $Y$ takes, the marginal pdf of $X$ does not change. Expectation of independent random variables The expectation of independent random variables satisfies the following property. $$ \text{E}[XY] = \text{E}[X]\text{E}[Y] $$ (Proof) $$ \begin{eqnarray} \text{E}[XY] &=& \int xy \;f_{XY}(x, y) \; dx dy \ &=& \int xy \;f_{X}(x)f_{Y}(y) \; dx dy \ &=& \int x \;f_{X}(x) \; dx \int y \;f_{Y}(y) \; dy \ &=& \text{E}[X] \text{E}[Y] \ \end{eqnarray} $$ Variance of independent random variables Likewise, the variance of independent random variables satisfies the following property. $$ \text{Var}[X+Y] = \text{Var}[X] + \text{Var}[Y] $$ (Proof) $$ \begin{eqnarray} \text{Var}[X + Y] &=& \text{E}[((X + Y) - (\mu_X + \mu_Y))^2] \ &=& \text{E}[(X+Y)^2 - 2(X+Y)(\mu_X + \mu_Y) + (\mu_X + \mu_Y)^2] \ &=& \text{E}[X^2+2XY+Y^2] - 2(\mu_X + \mu_Y)\text{E}[X+Y] + (\mu_X + \mu_Y)^2 \ &=& \text{E}[X^2+2XY+Y^2] - 2(\mu_X + \mu_Y)^2 + (\mu_X + \mu_Y)^2 \ &=& \text{E}[X^2]+2\text{E}[XY]+\text{E}[Y^2] - (\mu_X + \mu_Y)^2 \ &=& \text{E}[X^2]+2\text{E}[X]\text{E}[Y]+\text{E}[Y^2] - (\mu_X^2 + 2\mu_X\mu_Y + \mu_Y^2) \ &=& \text{E}[X^2]-\mu_X^2+\text{E}[Y^2]-\mu_Y^2+2\text{E}[X]\text{E}[Y] - 2\mu_X\mu_Y \ &=& \text{Var}[X]+\text{Var}[Y] \ \end{eqnarray} $$ Conditional probability distributions In addition, the following relations hold for conditional probability density functions. $$ f_{X \mid Y} (x | y_0) = \dfrac{f_{XY}(x, y=y_0)}{f_{Y}(y_0)} = \dfrac{f_{X}(x) f_{Y}(y_0)}{f_{Y}(y_0)} = f_{X}(x) $$ $$ f_{Y \mid X} (y | x_0) = \dfrac{f_{XY}(x=x_0, y)}{f_{X}(x_0)} = \dfrac{f_{X}(x_0) f_{Y}(y)}{f_{X}(x_0)} = f_{Y}(y) $$ If a random variable $X$ is independent of another random variable $Y$, the conditional probability distribution is not affected by the value of the conditioning variable.
That is, the conditional distributions $f(x \mid y_1)$ for $Y=y_1$ and $f(x \mid y_2)$ for $Y=y_2$ are identical. For example, consider the following joint probability distribution of two discrete random variables. End of explanation """ pmf1_marginal_x = pmf1.sum(axis=0) pmf1_marginal_y = pmf1.sum(axis=1) pmf = pmf1_marginal_x * pmf1_marginal_y[:, np.newaxis] pmf/pmf.sum() """ Explanation: As the expression below shows, this probability distribution can be written as the product of its marginal distributions. End of explanation """ cond_x_y0 = pmf1[0, :]/pmf1_marginal_y[0] cond_x_y0 cond_x_y1 = pmf1[1, :]/pmf1_marginal_y[1] cond_x_y1 cond_x_y2 = pmf1[2, :]/pmf1_marginal_y[2] cond_x_y2 """ Explanation: We can confirm that the conditional distribution does not change as the value of Y varies. End of explanation """ pmf2 = np.array([[0, 0, 0, 0, 1, 1], [0, 0, 1, 2, 1, 0], [0, 1, 3, 3, 1, 0], [0, 1, 2, 1, 0, 0], [1, 1, 0, 0, 0, 0]]) pmf2 = pmf2/pmf2.sum() pmf2 sns.heatmap(pmf2) plt.xlabel("x") plt.ylabel("y") plt.title("Joint Probability (Dependent)") plt.show() """ Explanation: Now consider the following joint probability distribution. In this case the independence condition does not hold. End of explanation """ pmf2_marginal_x = pmf2.sum(axis=0) pmf2_marginal_y = pmf2.sum(axis=1) cond_x_y0 = pmf2[0, :]/pmf2_marginal_y[0] cond_x_y0 cond_x_y1 = pmf2[1, :]/pmf2_marginal_y[1] cond_x_y1 cond_x_y2 = pmf2[2, :]/pmf2_marginal_y[2] cond_x_y2 """ Explanation: In this case we can see that the conditional distribution of x changes with the value of y. End of explanation """
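The identity $\text{E}[XY] = \text{E}[X]\text{E}[Y]$ proved above can also be checked numerically on these two discrete distributions. The sketch below treats the column index as the value of $X$ and the row index as the value of $Y$; this labelling is an assumption made purely for illustration.

```python
import numpy as np

pmf1 = np.array([[0, 1, 2, 3, 2, 1],
                 [0, 2, 4, 6, 4, 2],
                 [0, 4, 8, 12, 8, 4],
                 [0, 2, 4, 6, 4, 2],
                 [0, 1, 2, 3, 2, 1]], dtype=float)
pmf1 /= pmf1.sum()
pmf2 = np.array([[0, 0, 0, 0, 1, 1],
                 [0, 0, 1, 2, 1, 0],
                 [0, 1, 3, 3, 1, 0],
                 [0, 1, 2, 1, 0, 0],
                 [1, 1, 0, 0, 0, 0]], dtype=float)
pmf2 /= pmf2.sum()

def moments(pmf):
    # column index = value of X, row index = value of Y
    y, x = np.indices(pmf.shape)
    E_X = (x * pmf).sum()
    E_Y = (y * pmf).sum()
    E_XY = (x * y * pmf).sum()
    return E_X, E_Y, E_XY

for name, pmf in [('independent', pmf1), ('dependent', pmf2)]:
    E_X, E_Y, E_XY = moments(pmf)
    print(name, 'E[XY] =', E_XY, ' E[X]E[Y] =', E_X * E_Y)
```

For the independent distribution the two quantities agree to machine precision; for the dependent one they differ, as expected.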
alfkjartan/control-computarizado
polynomial-design/notebooks/Polynomial design exercise.ipynb
mit
import numpy as np import matplotlib.pyplot as plt import control import sympy as sy """ Explanation: Effect of cancelling a process zero The following exercise is taken from Åström & Wittenmark (problem 5.3) Consider the system with pulse-transfer function $$ H(z) = \frac{z+0.7}{z^2 - 1.8z + 0.81}.$$ Use polynomial design to determine a controller such that the closed-loop system has the characteristic polynomial $$ A_c = z^2 -1.5z + 0.7. $$ Let the observer polynomial have as low order as possible, and place all observer poles in the origin (dead-beat observer). Consider the following two cases (a) The process zero is cancelled (b) The process zero is not cancelled. Simulate the two cases and discuss the differences between the two controllers. Which one should be preferred? (c) Design an incremental controller for the system End of explanation """ H = control.tf([1, 0.7], [1, -1.8, 0.81], 1) control.pzmap(H) z = sy.symbols("z", real=False) Ac = sy.Poly(z**2 - 1.5*z + 0.7,z) sy.roots(Ac) """ Explanation: Checking the poles Before solving the problem, let's look at the location of the poles of the plant and the desired closed-loop system. End of explanation """ s0,s1 = sy.symbols("s0, s1") A = sy.Poly(z**2 -1.8*z + 0.81, z) B = sy.Poly(z + 0.7, z) S = sy.Poly(s0*z + s1, z) Ac = sy.Poly(z**2 - 1.5*z + 0.7, z) Ao = sy.Poly(1, z) # Diophantine equation Dioph = A + S - Ac*Ao # Extract the coefficients Dioph_coeffs = Dioph.all_coeffs() # Solve for s0 and s1, sol = sy.solve(Dioph_coeffs, (s0,s1)) print('s_0 = %f' % sol[s0]) print('s_1 = %f' % sol[s1]) """ Explanation: So, the plant has a double pole in $z=0.9$, and the desired closed-loop system has complex-conjugated poles in $z=0.75 \pm i0.37$. (a) The feedback controller $F_b(z)$ The plant has numerator polynomial $B(z) = z+0.7$ and denominator polynomial $A(z) = z^2 - 1.8z + 0.81$. 
With the feedback controller $$F_b(z) = \frac{S(z)}{R(z)}$$ and feedforward $$F_f(z) = \frac{T(z)}{R(z)}$$ the closed-loop pulse-transfer function from the command signal to the output becomes $$ H_{c}(z) = \frac{\frac{T(z)}{R(z)} \frac{B(z)}{A(z)}}{1 + \frac{B(z)}{A(z)}\frac{S(z)}{R(z)}} = \frac{T(z)(z+0.7)}{A(z)R(z) + S(z)(z+0.7)}.$$ To cancel the process zero, $z+0.7$ should be a factor of $R(z)$. Write $R(z)= \bar{R}(z)(z+0.7)$ to obtain the Diophantine equation $$ A(z)\bar{R}(z) + S(z) = A_c(z)A_o(z).$$ Let's try to find a minimum-order controller that solves the Diophantine equation. The degree of the left hand side (and hence also of the right-hand side) is $$ \deg (A\bar{R} + S) = \deg A + \deg \bar{R} = 2 + \deg\bar{R}.$$ The number of equations obtained when setting the coefficients of the left- and right-hand side equal is the same as the degree of the polynomials on each side (taking into account that the leading coefficient is 1, by convention). The feedback controller can be written $$ F_b(z) = \frac{S(z)}{R(z)} = \frac{s_0z^n + s_1z^{n-1} + \cdots + s_n}{(z+0.7)(z^{n-1} + r_1z^{n-2} + \cdots + r_{n-1}}, $$ which has $(n-1) + (n+1) = 2n$ unknown parameters, where $n = \deg\bar{R} + 1$. So to obtain a Diophantine equation which gives exactly as many equations in the coefficients as unknowns, we must have $$ 2 + \deg\bar{R} = 2\deg\bar{R} + 2 \quad \Rightarrow \quad \deg\bar{R} = 0.$$ Thus, the controller becomes $$ F_b(z) = \frac{s_0z + s_1}{z+0.7}, $$ and the Diophantine equation $$ z^2 - 1.8z + 0.81 + (s_0z + s_1) = z^2 - 1.5z + 0.7$$ $$ z^2 - (1.8-s_0)z + (0.81 + s_1) = z^2 - 1.5z + 0.7, $$ with solution $$ s_0 = 1.8 - 1.5 = 0.3, \qquad s_1 = 0.7-0.81 = -0.11. $$ The right hand side of the Diophantine equation consists only of the desired characteristic polynomial $A_c(z)$, and the observer polynomial is $A_o(z) = 1$, in order for the degrees of the left- and right hand side to be the same. Let's verify by calculation using SymPy. 
End of explanation """ t0 = float(Ac.eval(1)) Scoeffs = [float(sol[s0]), float(sol[s1])] Rcoeffs = [1, 0.7] Fb = control.tf(Scoeffs, Rcoeffs, 1) Ff = control.tf([t0], Rcoeffs, 1) Hc = Ff * control.feedback(H, Fb) # From command-signal to output Hcu = Ff * control.feedback(1, Fb*H) tvec = np.arange(40) (t1, y1) = control.step_response(Hc,tvec) plt.figure(figsize=(14,4)) plt.step(t1, y1[0]) plt.xlabel('k') plt.ylabel('y') plt.title('Output') (t1, y1) = control.step_response(Hcu,tvec) plt.figure(figsize=(14,4)) plt.step(t1, y1[0]) plt.xlabel('k') plt.ylabel('y') plt.title('Control signal') """ Explanation: The feedforward controller $F_f(z)$ Part of the methodology of the polynomial design, is that the forward controller $F_f(z) = \frac{T(z)}{R(z)}$ should cancel the observer poles, so we set $T(z) = t_0A_o(z)$. In case (a) the observer poynomial is simply $A_o(z)=1$. However, since $R(z)=z+0.7$, we can choose $T(z) = t_0z$ and still have a causal controller $F_f(z)$. The scalar factor $t_0$ is chosen to obtain unit DC-gain of $H_c(z)$, hence $$ H_c(1) = \frac{t_0}{A_c(1)} = 1 \quad \Rightarrow \quad t_0 = A_c(1) = 1-1.5+0.7 = 0.2$$ Simulate Let's simulate a step-responses from the command signal, and plot both the output and the control signal. End of explanation """
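Since the Diophantine equation here reduces to $A(z) + S(z) = A_c(z)$, the solution $s_0 = 0.3$, $s_1 = -0.11$ can be verified with plain coefficient arithmetic, independently of SymPy. A minimal sketch:

```python
import numpy as np

A = np.array([1.0, -1.8, 0.81])    # A(z)   = z^2 - 1.8z + 0.81
S = np.array([0.0, 0.3, -0.11])    # S(z)   = 0.3z - 0.11, zero-padded to degree 2
Ac = np.array([1.0, -1.5, 0.7])    # A_c(z) = z^2 - 1.5z + 0.7 (desired)

assert np.allclose(A + S, Ac)
print('A(z) + S(z) coefficients:', A + S)  # matches A_c(z)

# The unit DC-gain condition gives t0 = A_c(1), i.e. the sum of the coefficients
t0 = Ac.sum()
assert abs(t0 - 0.2) < 1e-12
```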
steven-murray/halomod
devel/using_angular_corr.ipynb
mit
%matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.integrate import simps from halomod.integrate_corr import AngularCF, angular_corr_gal, flat_z_dist, dxdz from hmf.cosmo import Cosmology from mpmath import gamma as Gamma """ Explanation: Using the Angular Correlation Function End of explanation """ # Define the correlation function, with r0=5.0 and gamma=1.8 def xi_power_law(r): return (r/5.0)**(-1.8) # Define n(z) from the builtin default function: nz = flat_z_dist(0.2,0.4) # Just check that the redshift dist looks ok z = np.linspace(0.2,0.4,100) plt.plot(z,nz(z)) # And that it integrates to 1 print simps(nz(z),z) """ Explanation: First we can try using the lower-level function angular_corr_gal. This is the function that the high-level AngularCF calls. We can use the low-level function with arbitrary formulae for the correlation function, so we can test it out. We'll just use a pure power-law $xi(r)$. Furthermore, we use the default $n(z)$ in halomod, which is a flat $z$-distribution between $z=(0.2,0.4)$. 
End of explanation """ # Make an array of angles (in radians) theta = np.logspace(-3,0,30)*np.pi/180 wtheta = angular_corr_gal(theta,xi_power_law,nz,zmin=0.2,zmax=0.4,logu_min=-3,logu_max=3,p_of_z=True) """ Explanation: Once we have all these components, we can put them into the lower-level function to calculate $w(\theta)$: End of explanation """ def limber_analytic_pl(theta,gamma,r0,p1,zmin,zmax,znum=100,p2=None,cosmo=Cosmology().cosmo): z = np.linspace(zmin,zmax,znum) x = (cosmo.comoving_distance(z)*cosmo.h).value if p2 is None: p2 = p1 pl = 1-gamma integ = p1(z)*p2(z)*x**pl /dxdz(z,cosmo) Aw = np.sqrt(np.pi)*r0**gamma * Gamma(gamma/2-0.5)/Gamma(gamma/2) * simps(integ,z) return Aw*theta**pl anl_sol = limber_analytic_pl(theta,1.8,5.0,nz,0.2,0.4) # Plot the analytic solution against the numerical one: plt.plot(theta,wtheta) plt.plot(theta,anl_sol) plt.xscale('log') plt.yscale('log') plt.xlabel("Angle, radians") plt.ylabel("w(theta)") """ Explanation: We can test the result, because for power-law $\xi$, there is an analytic solution to the limber equaiton: $$ w(\theta) = A_\omega \left(\frac{\theta}{1 \rm{RAD}}\right)^{1-\gamma}$$ where $$ A_\omega = \sqrt{\pi} r_o^\gamma \frac{\Gamma(\gamma/2 -1/2)}{\Gamma(\gamma/2)} \int_0^\infty d\bar{r}\ \ p_1^2(\bar{r}) r^{1-\gamma} $$ End of explanation """ wtheta = angular_corr_gal(theta,xi_power_law,nz,zmin=0.2,zmax=0.4,logu_min=-4,logu_max=4,p_of_z=True) plt.plot(theta,wtheta/anl_sol) plt.xscale('log') plt.xlabel("Angle, radians") plt.ylabel("w(theta)/analytic") """ Explanation: You can see that the numerical solutions starts to diverge from the analytic one (which is a pure power law) at btoh low and high angular separation. That's probably because our integration range in $u$ wasn't large enough. 
Let's try it again: End of explanation """ # To choose theta, set it in degrees then convert theta_min = 0.001 * np.pi/180.0 theta_max = 1 * np.pi/180.0 acf = AngularCF(p1=nz, zmin=0.2,zmax=0.4, logu_min=-5,logu_max=2.5, rnum = 200, theta_min=theta_min,theta_max=theta_max,p_of_z=True) plt.plot(acf.theta * 180.0/np.pi,acf.angular_corr_gal) plt.xscale('log') plt.yscale('log') plt.xlabel("Angle, degrees") plt.ylabel("w(theta)") """ Explanation: Clearly, there's still a divergence, but only of the order 0.3% at very high/low angular separation, in which case the Limber approximation breaks down anyway. There is also an offset in the normalisation by about 1%, and I'm not sure where that's coming in -- possibly in the normalisation of $n(z)$. In any case, it does basically the right thing. Now, let's try using the high-level function, which automatically generates $\xi(r)$ based on the halo model. We'll use the same $n(z)$: End of explanation """ acf = AngularCF(hmf_model="Tinker08",bias_model="Tinker10", p1=nz, zmin=0.2,zmax=0.4, logu_min=-5,logu_max=2.5, rnum = 200, theta_min=theta_min,theta_max=theta_max,p_of_z=True) print acf.angular_corr_gal """ Explanation: There's nothing really to test against here, but it looks okay (and we know the algorithm works basically from the power-law test). 
Just to point out a few more options you can use: You can pass any arguments that would go to the HaloModel class: End of explanation """ def radial_dist(r): return (r/300.0)**2 * np.exp(-r/300.0) acf = AngularCF(hmf_model="Tinker08",bias_model="Tinker10", p1=radial_dist, #Use our new window function zmin=0.2,zmax=0.4, #NOTE: limits are still given in terms of z logu_min=-5,logu_max=2.5, rnum = 200, theta_min=theta_min,theta_max=theta_max, p_of_z=False) #NOTE: this has to be False when the window function is n(r) print acf.angular_corr_gal """ Explanation: Furthermore, the "p1" function, which specifies the redshift distribution, can alternatively be set as a function of radial distance. It doesn't need to be normalised, since the routine will do that for you (but you might want to anyway so that multiple calls don't have to keep normalising): End of explanation """ acf = AngularCF(p1=radial_dist, z = 0.3, # Set redshift to middle of slice zmin=0.2,zmax=0.4, logu_min=-5,logu_max=2.5, rnum = 200, theta_min=theta_min,theta_max=theta_max, p_of_z=False) #NOTE: this has to be False when the window function is n(r) print acf.angular_corr_gal """ Explanation: Note also that the integral over the radial component assumes that the correlation function is the same at each redshift slice. This is of course not true over large slices. I will later implement the more general case. However, be careful to set the redshift correctly: End of explanation """
CRPropa/CRPropa3
doc/pages/example_notebooks/Diffusion/DiffusionValidationI.v4.ipynb
gpl-3.0
%matplotlib inline import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import numpy as np from scipy.stats import chisquare from scipy.integrate import quad from crpropa import * #figure settings A4heigth = 29.7/2.54 A4width = 21./2.54 """ Explanation: Diffusion Validation I This notebook simulates a diffusion process in a homogeneous background magnetic field. The diffusion tensor is anisotropic, meaning the parallel component is larger than the perpendicular component ($\kappa_\parallel = 10\cdot\kappa_\perp$). Load modules and use jupyter inline magic to use interactive plots. End of explanation """ def pdf(R, t, epsilon): """Probability distribution function of a diffusion process. The diffusion coefficient is D=1e24m^2/s R - distance from injection t - time elapsed since injection epsilon - scaling for the perpendicular component """ D = 1e24*epsilon pdf = 2 * pow(4 * np.pi * D * t, -0.5) * np.exp(- R**2. / (4 * D * t)) return pdf def dataCheck(df): """Check if all candidates are recorded 50 times.""" cnt = df.SN.value_counts()!=50 err = cnt[cnt==True].index.to_numpy() if len(err) != 0: print("Something went wrong!") print("The following serial numbers ({}) have an incomplete set of observations.".format(err)) print("Try to rerun the simulation cell or run that part of the program outside of jupyter.") print("File an issue on github if the problem persists; https://github.com/CRPropa/CRPropa3/issues") """ Explanation: Definition of the probability distribution function of the particle density in one dimension: <br> $\psi(R, t) = \frac{2}{\sqrt{4 \pi D t}} \cdot \exp{-\frac{R^2}{4 D t}}$ <br> Here, $R=||\vec{R}||$ is the norm of the position. End of explanation """ N = 10000 # Number of Snapshots # used in ObserverTimeEvolution # candidates are recorded every deltaT=2kpc/c n = 50. 
step = 100*kpc / n # magnetic field ConstMagVec = Vector3d(0*nG,0*nG,1*nG) BField = UniformMagneticField(ConstMagVec) # parameters used for field line tracking precision = 1e-4 minStep = 0.1*pc maxStep = 1*kpc #ratio between parallel and perpendicular diffusion coefficient epsilon = .1 # source settings # A point source at the origin is isotropically injecting 10TeV protons. source = Source() source.add(SourcePosition(Vector3d(0.))) source.add(SourceParticleType(nucleusId(1, 1))) source.add(SourceEnergy(10*TeV)) source.add(SourceIsotropicEmission()) # Output settings # Only serial number, trajectory length and current position are stored # The unit of length is set to kpc Out = TextOutput('./Test.txt') Out.disableAll() Out.enable(Output.TrajectoryLengthColumn) Out.enable(Output.CurrentPositionColumn) Out.enable(Output.SerialNumberColumn) Out.setLengthScale(kpc) # Observer settings Obs = Observer() Obs.add(ObserverTimeEvolution(step, step, n)) Obs.setDeactivateOnDetection(False) # important line, as particles would be deactivated after first detection otherwise Obs.onDetection(Out) # Difffusion Module # D_xx=D_yy= 1e23 m^2 / s, D_zz=10*D_xx # The normalization is adjusted and the energy dependence is deactivated (setting power law index alpha=0) Dif = DiffusionSDE(BField, precision, minStep, maxStep, epsilon) Dif.setScale(1./6.1) Dif.setAlpha(0.) # Boundary # Simulation ends after t=100kpc/c maxTra = MaximumTrajectoryLength(100.0*kpc) # module list # Add modules to the list and run the simulation sim = ModuleList() sim.add(Dif) sim.add(Obs) sim.add(maxTra) sim.run(source, N, True) # Close the Output modules to flush last chunk of data to file. Out.close() print("Simulation finished") """ Explanation: Simulation set-up <br> Using 10000 pseudo particles to trace the phase space. 
End of explanation """ df = pd.read_csv('Test.txt', delimiter='\t', names=["D", "SN", "X", "Y", "Z", "SN0", "SN1"], comment='#') df['t'] = df.D * kpc / c_light #time in seconds dataCheck(df) """ Explanation: Load the simulation data and add a time column End of explanation """ fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(1.*A4width,A4heigth/4.)) coord = ['X', 'Y', 'Z'] for i, x_i in enumerate(coord): L = [10, 50, 100] for j, l in enumerate(L): t = l*kpc/c_light s = '%.2e' % t df[df.D==l][x_i].hist(bins=np.linspace(-0.5, 0.5, 20), ax=axes[i], label='t = '+s+' s', zorder=3-j) axes[i].set_title(x_i) axes[i].set_xlim(-0.5, 0.5) axes[i].legend(loc='best') axes[i].set_xlabel('Distance [kpc]') if i>0: axes[i].set_yticklabels([]) else: axes[i].set_ylabel("number density") plt.tight_layout() """ Explanation: Distribution in x, y and z Plot the density distribution in all three coordinates. End of explanation """ fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(1.*A4width,A4heigth/4.)) coord = ['X', 'Y', 'Z'] ChiDict = {} for i, x_i in enumerate(coord): ChiDict[str(x_i)] = {} L = [10, 50, 100] colors = sns.color_palette() for j, l in enumerate(L): ChiDict[str(x_i)][str(l)] = {} if i <2: abs(df[df.D==l][x_i]).hist(bins=np.linspace(0, 0.1, 11), ax=axes[i], alpha=1., zorder=3-j, color=colors[j]) hist = np.histogram(abs(df[df.D==l][x_i]), bins=np.linspace(0,0.1,11)) PDF = np.zeros(len(hist[0])) xP = np.zeros(len(hist[0])) for k in range(len(PDF)): a, b = hist[1][k]*kpc, hist[1][k+1]*kpc xP[k] = (a+b)/2./kpc PDF[k] = quad(pdf, a,b, args=(l*kpc/c_light, 0.1))[0]*N t = l*kpc/c_light s = '%.2e' % t axes[i].plot(xP, PDF, color=colors[j], label='t = '+s+' s', zorder=10) #axes[i].legend(loc='best') axes[i].set_xlim(0., 0.1) else: abs(df[df.D==l][x_i]).hist(bins=np.linspace(0, 0.5, 11), ax=axes[i], alpha=1., zorder=3-j, color=colors[j]) hist = np.histogram(abs(df[df.D==l][x_i]), bins=np.linspace(0,0.5,11)) PDF = np.zeros(len(hist[0])) xP = np.zeros(len(hist[0])) for k in 
range(len(PDF)): a, b = hist[1][k]*kpc, hist[1][k+1]*kpc xP[k] = (a+b)/2./kpc PDF[k] = quad(pdf, a,b, args=(l*kpc/c_light, 1.))[0]*N axes[i].plot(xP, PDF, color=colors[j], label='t = '+s+' s', zorder=10) axes[i].set_xlim(0., 0.5) ChiDict[str(x_i)][str(l)]['Obs'] = hist[0] ChiDict[str(x_i)][str(l)]['Exp'] = PDF axes[i].set_xlabel('Distance [kpc]') axes[i].legend(loc='best') axes[i].set_title(x_i) if i>0: axes[i].set_yticklabels([]) else: axes[i].set_ylabel("number density") plt.legend() plt.tight_layout() plt.show() """ Explanation: ${\mathrm{\bf Fig 1:}}$ Distribution of particles in three directions. Z-axis is parallel to the mean magnetic field. $\kappa_{\parallel}=100*\kappa_{\perp}$ Use the absolute distance from origin $|x|, |y|, |z|$ and compare to analytical expectations. End of explanation """ C, Time, pValue = [], [], [] for c in coord: for l in L: C.append(c) t = l*kpc/c_light s = '%.2e' % t Time.append(s) pValue.append(chisquare(ChiDict[c][str(l)]['Obs'], ChiDict[c][str(l)]['Exp'])[1]) Chi = pd.DataFrame(pValue, index = [C, Time]) Chi.rename(columns = {0:"p-Value"}, inplace=True) Chi.index.names = ["Coordinate", "Time [s]"] Chi """ Explanation: ${\mathrm{\bf Fig 2:}}$ The distance from the source position follows nicely the expected pdf. Calculate the pValue of the $\chi^2$-test to prove the visual statement from above. End of explanation """
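The 1-D pdf used for the expected histograms should integrate to one over $R \in [0, \infty)$ for any diffusion coefficient. The sketch below checks this with scipy for both the perpendicular ($\epsilon = 0.1$) and parallel ($\epsilon = 1$) components; it uses assumed SI values for kpc and the speed of light rather than importing the crpropa constants.

```python
import numpy as np
from scipy.integrate import quad

def pdf(R, t, epsilon):
    # same form as the notebook's pdf, with D = 1e24 * epsilon m^2/s
    D = 1e24 * epsilon
    return 2 * (4 * np.pi * D * t) ** -0.5 * np.exp(-R**2 / (4 * D * t))

kpc = 3.0856775814913673e19   # metres (assumed value)
c_light = 299792458.0         # m/s (assumed value)
t = 100 * kpc / c_light       # time corresponding to a 100 kpc trajectory

for eps in (0.1, 1.0):
    sigma = np.sqrt(2 * 1e24 * eps * t)   # Gaussian width of the pdf
    norm, _ = quad(pdf, 0.0, 10 * sigma, args=(t, eps))
    print('epsilon =', eps, 'integral =', norm)
    assert abs(norm - 1.0) < 1e-6
```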
7deeptide/Design-Optimization
Homeworks/ME596_Homework_4.ipynb
gpl-3.0
import numpy as np import matplotlib import matplotlib.pyplot as plt from __future__ import division %config InlineBackend.figure_formats=['svg'] %matplotlib inline plt.rc('pdf',fonttype=3) # for proper subsetting of fonts plt.rc('axes',linewidth=0.5) # thin axes; the default for lines is 1pt al = np.linspace( 0.05, 0.15, 500) plt.plot(al, (1.11 + 1.11*al**(-0.18))/(1 - al), 'k') plt.axis([0.05, 0.15, 3.09,3.17]) plt.title("Objective Function") plt.ylabel("f") plt.xlabel("x/D") plt.show() """ Explanation: Problem Statement Maximum in-plane stress of a plate with a through-hole is given by $$ \sigma = \frac{Kp}{(D-d)t},$$ where $t$ is the thickness of the plate, $p$ is the pressure applied, and $K$ is the stress concentration factor. $K$ is given by $$K = 1.11 +1.11(\frac{d}{D})^{-0.18}.$$ We must find the hole size that minimizes $\sigma$. We shall make the notational simplification $\frac{d}{D} = x.$ We can note from the problem formulation that we cannot write $\sigma$ strictly in terms of $x$; we must assume a value for $D$. We shall thence assume $D$, as well as $p$, and $t$ are all equal to 1. Given our assumptions we can write the stress equation as $$\sigma (1 - x) = K.$$ This can be further simplified in terms of an explicit objective function as $$\sigma = \frac{1.11 + 1.11 x^{-0.18}}{1 - x}.$$ The objective function is plotted below. End of explanation """ #Equal Interval Search #Erin Schmidt #Adapted, with significant modification, from Arora et al.'s APOLLO #implementation found in "Introduction to Optimum Design" 1st Ed. (1989).
import numpy as np def func(al, count): #the objective function count = count + 1 f = (1.11 + 1.11*al**(-0.18))/(1 - al) return f, count def mini(au, al, count): #evaluates f at the minimum (or optimum) stationary point alpha = (au + al)*0.5 (f, count) = func(alpha, count) return f, alpha, count def equal(delta, epsilon, count, al): (f, count) = func(al, count) fl = f #function value at lower bound #delta = 0.01 #step-size #au = 0.15 #alpha upper bound while True: aa = delta (f, count) = func(aa, count) fa = f if fa > fl: delta = delta * 0.1 else: break while True: au = aa + delta (f, count) = func(au, count) fu = f if fa > fu: al = aa aa = au fl = fa fa = fu else: break while True: if (au - al) > epsilon: #compares interval size to convergence criteria delta = delta * 0.1 aa = al #intermediate alpha fa = fl #intermediate alpha function value while True: au = aa + delta (f, count) = func(au, count) fu = f if fa > fu: al = aa aa = au fl = fa fa = fu continue else: break continue else: (f, alpha, count) = mini(au, al, count) return f, alpha, count #run the program delta = 0.01 epsilon = 1E-3 count = 0 al = 0.01 # alpha lower bound (f, alpha, count) = equal(delta, epsilon, count, al) print('The minimum is at {:.4f}'.format(alpha)) print('The function value at the minimum = {:.4f}'.format(f)) print('Total number of function calls = {}'.format(count)) """ Explanation: We can see (at least qualitatively), from the plot of the objective function that on the interval $0.05 < \alpha < 0.15$ the optimum value lies somewhere between 0.09 and 0.10, and the function evaluated in that range has an average value of about 3.10. We shall proceed to find the minimum of the objective function by using both an equal interval search algorithm and a polynomial approximation.
Equal Interval Search End of explanation """ # Polynomial approximation (4-point cubic) # -Erin Schmidt import numpy as np from math import sqrt # make an array with random values between 0.05 and 0.15 with 4 entries x = (0.05 + np.random.sample(4)*0.15) # make an array of function values at the 4 points of x def f(x): # the objective function return (1.11 + 1.11*x**(-0.18))/(1 - x) f_array = [] i = 0 while i <= len(x) - 1: f_array.append(f(x[i])) i += 1 # use the equations from Vanderplaats 1984 to solve coefficients q1 = x[2]**3 * (x[1] - x[0]) - x[1]**3 * (x[2] - x[0]) + x[0]**3 * (x[2] - x[1]) q2 = x[3]**3 * (x[1] - x[0]) - x[1]**3 * (x[3] - x[0]) + x[0]**3 * (x[3] - x[1]) q3 = (x[2] - x[1]) * (x[1] - x[0]) * (x[2] - x[0]) q4 = (x[3] - x[1]) * (x[1] - x[0]) * (x[3] - x[0]) q5 = f_array[2] * (x[1] - x[0]) - f_array[1] * (x[2] - x[0]) + f_array[0] * (x[2] - x[1]) q6 = f_array[3] * (x[1] - x[0]) - f_array[1] * (x[3] - x[0]) + f_array[0] * (x[3] - x[1]) a3 = (q3*q6 - q4*q5)/(q2*q3 - q1*q4) a2 = (q5 - a3*q1)/q3 a1 = (f_array[1] - f_array[0])/(x[1] - x[0]) - \ a3*(x[1]**3 - x[0]**3)/(x[1] - x[0]) - a2*(x[0] + x[1]) a0 = f_array[0] - a1*x[0] - a2*x[0]**2 - a3*x[0]**3 a = [a1, 2*a2, 3*a3] #coefficients of f' # find the zeros of the f' polynomial (using the quadratic formula) b = a2**2 - 3*a1*a3 X1 = (-a2 + sqrt(b))/(3*a3) X2 = (-a2 - sqrt(b))/(3*a3) print('roots = ', X1, X2) # plot the results plt.rc('pdf',fonttype=3) # for proper subsetting of fonts plt.rc('axes',linewidth=0.5) # thin axes; the default for lines is 1pt x = np.linspace( 0.05, 0.15, 500) plt.plot(x, a0 +a1*x + a2*x**2 + a3*x**3, 'k--', label='Poly. 
approx.') plt.plot(x, (1.11 + 1.11*x**(-0.18))/(1 - x), 'k', label='Objective func.') plt.axis([0.05, 0.15, 3.09,3.17]) legend = plt.legend(loc='upper center', shadow=False, fontsize='large') plt.ylabel("f") plt.xlabel("x/D") plt.show() poly_root = [0.097620387704, 0.0985634827486, 0.0969340736066, \ 0.098775097463, 0.102426814371, 0.101638472077, \ 0.0991941169039, 0.095873175811] print('polynomial root std. deviation = ', np.std(poly_root)) """ Explanation: Polynomial Approximation End of explanation """
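The hand-derived coefficient formulas above can be cross-checked with NumPy's polynomial helpers; a sketch, with arbitrarily chosen (non-random) sample points inside the bracket:

```python
import numpy as np

def f(x):  # the same objective function as above
    return (1.11 + 1.11 * x ** (-0.18)) / (1 - x)

# Fit an exact cubic through 4 distinct points, then find where f' = 0.
xs = np.array([0.06, 0.09, 0.12, 0.14])
coeffs = np.polyfit(xs, f(xs), deg=3)   # highest power first
crit = [r.real for r in np.roots(np.polyder(coeffs))
        if abs(r.imag) < 1e-9 and 0.05 < r.real < 0.15]
print(crit)  # critical point(s) of the cubic inside the bracket
```

With four interpolation points the cubic is exact at the nodes, so the surviving derivative root lands near the true minimizer found above.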
espressomd/espresso
doc/tutorials/lattice_boltzmann/lattice_boltzmann_poiseuille_flow.ipynb
gpl-3.0
import logging import sys %matplotlib inline import matplotlib.pyplot as plt plt.rcParams.update({'font.size': 18}) import numpy as np import tqdm import espressomd import espressomd.lb import espressomd.lbboundaries import espressomd.shapes logging.basicConfig(level=logging.INFO, stream=sys.stdout) espressomd.assert_features(['LB_BOUNDARIES_GPU']) # System constants BOX_L = 16.0 TIME_STEP = 0.01 system = espressomd.System(box_l=[BOX_L] * 3) system.time_step = TIME_STEP system.cell_system.skin = 0.4 """ Explanation: Poiseuille flow in ESPResSo Poiseuille flow is the flow through a pipe or (in our case) a slit under a homogeneous force density, e.g. gravity. In the limit of small Reynolds numbers, the flow can be described with the Stokes equation. We assume the slit being infinitely extended in $y$ and $z$ direction and a force density $f_y$ on the fluid in $y$ direction. No slip-boundary conditions (i.e. $\vec{u}=0$) are located at $x = \pm h/2$. Assuming invariance in $y$ and $z$ direction and a steady state, the Stokes equation is simplified to: \begin{equation} \mu \partial_x^2 u_y = f_y \end{equation} where $f_y$ denotes the force density and $\mu$ the dynamic viscosity. This can be integrated twice and the integration constants are chosen so that $u_y=0$ at $x = \pm h/2$ to obtain the solution to the planar Poiseuille flow [8]: \begin{equation} u_y(x) = \frac{f_y}{2\mu} \left(h^2/4-x^2\right) \end{equation} We will simulate a planar Poiseuille flow using a square box, two walls with normal vectors $\left(\pm 1, 0, 0 \right)$, and an external force density applied to every node. 1. Setting up the system End of explanation """ # LB parameters AGRID = 0.5 VISCOSITY = 2.0 FORCE_DENSITY = [0.0, 0.001, 0.0] DENSITY = 1.5 # LB boundary parameters WALL_OFFSET = AGRID """ Explanation: 1.1 Setting up the lattice-Boltzmann fluid We will now create a lattice-Boltzmann fluid confined between two walls. 
End of explanation """ logging.info("Iterate until the flow profile converges (5000 LB updates).") for _ in tqdm.trange(20): system.integrator.run(5000 // 20) """ Explanation: Create a lattice-Boltzmann actor and append it to the list of system actors. Use the GPU implementation of LB. You can refer to section setting up a LB fluid in the user guide. python logging.info("Setup LB fluid.") lbf = espressomd.lb.LBFluidGPU(agrid=AGRID, dens=DENSITY, visc=VISCOSITY, tau=TIME_STEP, ext_force_density=FORCE_DENSITY) system.actors.add(lbf) Create a LB boundary and append it to the list of system LB boundaries. You can refer to section using shapes as lattice-Boltzmann boundary in the user guide. ```python logging.info("Setup LB boundaries.") top_wall = espressomd.shapes.Wall(normal=[1, 0, 0], dist=WALL_OFFSET) bottom_wall = espressomd.shapes.Wall(normal=[-1, 0, 0], dist=-(BOX_L - WALL_OFFSET)) top_boundary = espressomd.lbboundaries.LBBoundary(shape=top_wall) bottom_boundary = espressomd.lbboundaries.LBBoundary(shape=bottom_wall) system.lbboundaries.add(top_boundary) system.lbboundaries.add(bottom_boundary) ``` 2. Simulation We will now simulate the fluid flow until we reach the steady state. 
End of explanation """ logging.info("Extract fluid velocities along the x-axis") fluid_positions = (np.arange(lbf.shape[0]) + 0.5) * AGRID # get all velocities as Numpy array and extract y components only fluid_velocities = (lbf[:,:,:].velocity)[:,:,:,1] # average velocities in y and z directions (perpendicular to the walls) fluid_velocities = np.average(fluid_velocities, axis=(1,2)) def poiseuille_flow(x, force_density, dynamic_viscosity, height): return force_density / (2 * dynamic_viscosity) * (height**2 / 4 - x**2) # Note that the LB viscosity is not the dynamic viscosity but the # kinematic viscosity (mu=LB_viscosity * density) x_values = np.linspace(0.0, BOX_L, lbf.shape[0]) HEIGHT = BOX_L - 2.0 * AGRID # analytical curve y_values = poiseuille_flow(x_values - (HEIGHT / 2 + AGRID), FORCE_DENSITY[1], VISCOSITY * DENSITY, HEIGHT) # velocity is zero inside the walls y_values[np.nonzero(x_values < WALL_OFFSET)] = 0.0 y_values[np.nonzero(x_values > BOX_L - WALL_OFFSET)] = 0.0 fig1 = plt.figure(figsize=(10, 6)) plt.plot(x_values, y_values, '-', linewidth=2, label='analytical') plt.plot(fluid_positions, fluid_velocities, 'o', label='simulation') plt.xlabel('Position on the $x$-axis', fontsize=16) plt.ylabel('Fluid velocity in $y$-direction', fontsize=16) plt.legend() plt.show() """ Explanation: 3. Data analysis We can now extract the flow profile and compare it to the analytical solution for the planar Poiseuille flow. End of explanation """
privong/pythonclub
sessions/07-pandas/01 - Pandas tutorial.ipynb
gpl-3.0
import numpy as np from __future__ import print_function import pandas as pd pd.__version__ """ Explanation: Reading and manipulating datasets with Pandas This notebook shows how to create Series and DataFrames with Pandas. Also, how to read CSV files and create pivot tables. The first part is based on chapter 3 of the <a href=" http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb">Python Data Science Handbook</a>. Author: Roberto Muñoz <br /> Email: rmunoz@uc.cl End of explanation """ data = pd.Series([0.25, 0.5, 0.75, 1.0]) data """ Explanation: 1. The Pandas Series Object A Pandas Series is a one-dimensional array of indexed data. It can be created from a list or array as follows: End of explanation """ data.values """ Explanation: As we see in the output, the Series wraps both a sequence of values and a sequence of indices, which we can access with the values and index attributes. The values are simply a familiar NumPy array: End of explanation """ data.index """ Explanation: The index is an array-like object of type pd.Index, which we'll discuss in more detail momentarily. End of explanation """ data[1] """ Explanation: Like with a NumPy array, data can be accessed by the associated index via the familiar Python square-bracket notation: End of explanation """ data = pd.Series([0.25, 0.5, 0.75, 1.0], index=['a', 'b', 'c', 'd']) data """ Explanation: Series as generalized NumPy array From what we've seen so far, it may look like the Series object is basically interchangeable with a one-dimensional NumPy array. The essential difference is the presence of the index: while the NumPy array has an implicitly defined integer index used to access the values, the Pandas Series has an explicitly defined index associated with the values.
End of explanation """ data['b'] """ Explanation: And the item access works as expected: End of explanation """ population_dict = {'Arica y Parinacota': 243149, 'Antofagasta': 631875, 'Metropolitana de Santiago': 7399042, 'Valparaiso': 1842880, 'Bíobío': 2127902, 'Magallanes y Antártica Chilena': 165547} population = pd.Series(population_dict) population """ Explanation: Series as specialized dictionary In this way, you can think of a Pandas Series a bit like a specialization of a Python dictionary. A dictionary is a structure that maps arbitrary keys to a set of arbitrary values, and a Series is a structure which maps typed keys to a set of typed values. This typing is important: just as the type-specific compiled code behind a NumPy array makes it more efficient than a Python list for certain operations, the type information of a Pandas Series makes it much more efficient than Python dictionaries for certain operations. End of explanation """ population['Arica y Parinacota'] """ Explanation: You can notice the indexes were sorted lexicographically. That's the default behaviour in Pandas End of explanation """ population['Metropolitana':'Valparaíso'] """ Explanation: Unlike a dictionary, though, the Series also supports array-style operations such as slicing: End of explanation """ # Area in km^2 area_dict = {'Arica y Parinacota': 16873.3, 'Antofagasta': 126049.1, 'Metropolitana de Santiago': 15403.2, 'Valparaiso': 16396.1, 'Bíobío': 37068.7, 'Magallanes y Antártica Chilena': 1382291.1} area = pd.Series(area_dict) area """ Explanation: 2. The Pandas DataFrame Object The next fundamental structure in Pandas is the DataFrame. Like the Series object discussed in the previous section, the DataFrame can be thought of either as a generalization of a NumPy array, or as a specialization of a Python dictionary. We'll now take a look at each of these perspectives. 
DataFrame as a generalized NumPy array If a Series is an analog of a one-dimensional array with flexible indices, a DataFrame is an analog of a two-dimensional array with both flexible row indices and flexible column names. End of explanation """ regions = pd.DataFrame({'population': population, 'area': area}) regions regions.index regions.columns """ Explanation: Now that we have this along with the population Series from before, we can use a dictionary to construct a single two-dimensional object containing this information: End of explanation """ regions['area'] """ Explanation: DataFrame as specialized dictionary Similarly, we can also think of a DataFrame as a specialization of a dictionary. Where a dictionary maps a key to a value, a DataFrame maps a column name to a Series of column data. For example, asking for the 'area' attribute returns the Series object containing the areas we saw earlier: End of explanation """ pd.DataFrame(population, columns=['population']) """ Explanation: Constructing DataFrame objects A Pandas DataFrame can be constructed in a variety of ways. Here we'll give several examples. 
From a single Series object¶ A DataFrame is a collection of Series objects, and a single-column DataFrame can be constructed from a single Series: End of explanation """ pd.DataFrame({'population': population, 'area': area}, columns=['population', 'area']) """ Explanation: From a dictionary of Series objects As we saw before, a DataFrame can be constructed from a dictionary of Series objects as well: End of explanation """ regiones_file='data/chile_regiones.csv' provincias_file='data/chile_provincias.csv' comunas_file='data/chile_comunas.csv' regiones=pd.read_csv(regiones_file, header=0, sep=',') provincias=pd.read_csv(provincias_file, header=0, sep=',') comunas=pd.read_csv(comunas_file, header=0, sep=',') print('regiones table: ', regiones.columns.values.tolist()) print('provincias table: ', provincias.columns.values.tolist()) print('comunas table: ', comunas.columns.values.tolist()) regiones.head() provincias.head() comunas.head() regiones_provincias=pd.merge(regiones, provincias, how='outer') regiones_provincias.head() provincias_comunas=pd.merge(provincias, comunas, how='outer') provincias_comunas.head() regiones_provincias_comunas=pd.merge(regiones_provincias, comunas, how='outer') regiones_provincias_comunas.index.name='ID' regiones_provincias_comunas.head() regiones_provincias_comunas.to_csv('chile_demographic_data.csv', index=False) """ Explanation: 3. Reading a CSV file and doing common Pandas operations End of explanation """ data_file='data/chile_demographic.csv' data=pd.read_csv(data_file, header=0, sep=',') data data.sort_values('Poblacion') data.sort_values('Poblacion', ascending=False) (data.groupby(data['Region'])['Poblacion','Superficie'].sum()) (data.groupby(data['Region'])['Poblacion','Superficie'].sum()).sort_values(['Poblacion']) """ Explanation: 4. 
Loading full dataset End of explanation """ surveygizmo=regiones_provincias_comunas[['RegionNombre','ProvinciaNombre','ComunaNombre']] surveygizmo.loc[:,'RegionNombre']=surveygizmo.apply(lambda x: x['RegionNombre'].replace("'",""), axis=1) surveygizmo.loc[:,'ProvinciaNombre']=surveygizmo.apply(lambda x: x['ProvinciaNombre'].replace("'",""), axis=1) surveygizmo.loc[:,'ComunaNombre']=surveygizmo.apply(lambda x: x['ComunaNombre'].replace("'",""), axis=1) surveygizmo.rename(columns={'RegionNombre': 'Region:', 'ProvinciaNombre': 'Provincia:', 'ComunaNombre': 'Comuna:'}, inplace=True) surveygizmo.to_csv('chile_demographic_surveygizmo.csv', index=False) surveygizmo.head() """ Explanation: OLD End of explanation """
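The sort_values/groupby pattern from section 4 works the same on any small frame; a self-contained sketch with made-up numbers (a list is used for the multi-column selection, which newer pandas versions prefer):

```python
import pandas as pd

df = pd.DataFrame({
    'Region': ['A', 'A', 'B', 'B'],
    'Poblacion': [100, 200, 50, 25],
    'Superficie': [10.0, 20.0, 5.0, 2.5],
})

# Aggregate per region, then order by total population.
totals = df.groupby('Region')[['Poblacion', 'Superficie']].sum()
totals = totals.sort_values('Poblacion', ascending=False)
print(totals)
```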
pbutenee/ml-tutorial
release/1/anomaly_detection.ipynb
mit
import pickle with open('data/past_data.pickle', 'rb') as file: past = pickle.load(file, encoding='latin1') with open('data/all_data.pickle', 'rb') as file: all_data = pickle.load(file, encoding='latin1') print(f'Past data shape = {past.shape}') print(f'Full data shape = {all_data.shape}') """ Explanation: Anomaly Detection In this part you will learn how to build an anomaly detection model yourself. 1. Load Data First we will load the data using a pickle format. The data we use contains the page views of one of our own websites and for convenience there is only 1 data point per hour. End of explanation """
Testing the model on unseen data Now plot all the data instead of just the past data End of explanation """ ##### Implement this part of the code ##### raise NotImplementedError("Code not implemented, follow the instructions.") # reshaped_past = ? assert len(reshaped_past.shape) == 2 assert reshaped_past.shape[1] == 24 """ Explanation: You can clearly see now that this model does not detect any anomalies. However, the last day of data clearly looks different compared to the other days. In what follows we will build a better model for anomaly detection that is able to detect these 'shape shifts' as well. 5. Building a model with seasonality To do this we are going to take a step by step approach. Maybe it won't be clear at first why every step is necessary, but that will become clear throughout the process. First we are going to reshape the past data to a 2 dimensional array with 24 columns. This will give us 1 row for each day and 1 column for each hour. For this we are going to use the np.reshape() function. The newshape parameter is a tuple which in this case should be (-1, 24). If you use a -1 the reshape function will automatically compute that dimension. Pay attention to the order in which the numbers are repositioned (the default ordering should work fine here). End of explanation """ ##### Implement this part of the code ##### raise NotImplementedError("Code not implemented, follow the instructions.") # average_past = ? assert average_past.shape == (24,) plt.plot(average_past) plt.show() """ Explanation: Now we are going to compute the average over all days. For this we are going to use the np.mean() with the axis variable set to the first dimension (axis=0). Next we are going to plot this. End of explanation """ model = [] for i in range(6): ##### Implement this part of the code ##### raise NotImplementedError("Code not implemented, follow the instructions.") # model = np.concatenate( ? 
) plt.figure(figsize=(20,4)) plt.plot(model, color='k') plt.plot(past, color='b') plt.show() """ Explanation: What you can see in the plot above is the average number of page views for each hour of the day. Now let's plot this together with the past data on 1 plot. Use a for loop and the np.concatenate() function to concatenate this average 6 times into the variable model. End of explanation """ ##### Implement this part of the code ##### raise NotImplementedError("Code not implemented, follow the instructions.") # delta_max = ? # delta_min = ? print(delta_min, delta_max) """ Explanation: In the next step we are going to compute the maximum (= positive) and minimum (= negative) deviations from the average to determine what kind of deviations are normal. (Just subtract the average/model from the past and take the min and the max of that) End of explanation """ plt.figure(figsize=(20,4)) plt.plot(model, color='k') plt.plot(past, color='b') plt.plot(model + delta_max, color='r') plt.plot(model + delta_min, color='r') plt.show() """ Explanation: Now let's plot this. 
End of explanation """ model_all = np.concatenate((model, average_past)) plt.figure(figsize=(20,4)) plt.plot(all_data, color='g') plt.plot(model_all, color='k') plt.plot(past, color='b') plt.plot(model_all + delta_max, color='r') plt.plot(model_all + delta_min, color='r') plt.show() """ Explanation: Now let's test this on all data End of explanation """ anomaly_timepoints = np.argwhere(np.logical_or(all_data < model_all + delta_min, all_data > model_all + delta_max)) plt.figure(figsize=(20,4)) plt.scatter(anomaly_timepoints, all_data[anomaly_timepoints], color='r', linewidth=8) plt.plot(all_data, color='g') plt.plot(model_all, color='k') plt.plot(past, color='b') plt.plot(model_all + delta_max, color='r') plt.plot(model_all + delta_min, color='r') plt.xlim(0, len(all_data)) plt.show() print(f'The anomaly occurs at the following timestamps: {anomaly_timepoints}') """ Explanation: Now you can clearly see where the anomaly is detected by this more advanced model. The code below can gives you the exact indices where an anomaly is detected. The functions uses are the following np.argwhere() and np.logical_or(). End of explanation """
sertansenturk/tomato
demos/joint_analysis_demo.ipynb
agpl-3.0
data_folder = os.path.join('..', 'sample-data') # score inputs symbtr_name = 'ussak--sazsemaisi--aksaksemai----neyzen_aziz_dede' txt_score_filename = os.path.join(data_folder, symbtr_name, symbtr_name + '.txt') mu2_score_filename = os.path.join(data_folder, symbtr_name, symbtr_name + '.mu2') # audio input audio_mbid = 'f970f1e0-0be9-4914-8302-709a0eac088e' audio_filename = os.path.join(data_folder, symbtr_name, audio_mbid, audio_mbid + '.mp3') # instantiate analyzer objects scoreAnalyzer = SymbTrAnalyzer(verbose=True) audioAnalyzer = AudioAnalyzer(verbose=True) jointAnalyzer = JointAnalyzer(verbose=True) """ Explanation: JointAnalyzer assumes the individual audio analysis and score analysis are applied earlier. End of explanation """
You can then update the audio analysis using the joint analysis results. End of explanation """ # score (meta)data analysis score_features = scoreAnalyzer.analyze( txt_score_filename, mu2_score_filename, symbtr_name=symbtr_name) # predominant melody extraction audio_pitch = audioAnalyzer.extract_pitch(audio_filename) # joint analysis # score-informed tonic and tempo estimation tonic, tempo = jointAnalyzer.extract_tonic_tempo( txt_score_filename, score_features, audio_filename, audio_pitch) # section linking and note-level alignment aligned_sections, notes, section_links, section_candidates = jointAnalyzer.align_audio_score( txt_score_filename, score_features, audio_filename, audio_pitch, tonic, tempo) # aligned pitch filter pitch_filtered, notes_filtered = jointAnalyzer.filter_pitch(audio_pitch, notes) # aligned note model note_models, pitch_distribution, aligned_tonic = jointAnalyzer.compute_note_models( pitch_filtered, notes_filtered, tonic['symbol']) # recompute the audio features using the filtered pitch and tonic # pitch histograms pitch_class_distribution = copy.deepcopy(pitch_distribution) pitch_class_distribution.to_pcd() # get the melodic progression model melodic_progression = audioAnalyzer.compute_melodic_progression(pitch_filtered) # transposition (ahenk) identification transposition = audioAnalyzer.identify_transposition(aligned_tonic, aligned_tonic['symbol']) """ Explanation: ... or the individual calls are given below. End of explanation """
jotterbach/SimpleNeuralNets
examples/Causal_vs_Noncausal_Learning.ipynb
apache-2.0
%load_ext autoreload import numpy as np from numpy.polynomial.polynomial import polyval import numpy.random as rd import matplotlib.pyplot as plt import seaborn as sns import sys %matplotlib inline sys.path.append('./NeuralNetworks') # Configuration for plots fig_size = (12,8) font_size = 14 """ Explanation: Causal vs. Non-causal Learning using Neural Networks End of explanation """ def drawFromCubic(x_min, x_max, lst_of_coefficients, n_samples, noise_x, noise_y): x = np.linspace(x_min, x_max, 10000) sample_x = rd.choice(x, size=n_samples) sample_y = polyval(sample_x, lst_of_coefficients) return sample_x + rd.normal(scale=noise_x, size=n_samples), sample_y + rd.normal(scale=noise_y, size=n_samples) n_samples = 4000 y_train, x_train = drawFromCubic(-1.15, 1.15, [0, -.65, 0, 1], n_samples, 0.01, 0.05) n_test_samples = 500 y_test, x_test = drawFromCubic(-1.15, 1.15, [0, -.65, 0, 1], n_test_samples, 0.01, 0.05) plt.figure(figsize=fig_size); plt.scatter(y_train, x_train); plt.xlabel('input', fontsize=font_size); plt.ylabel('target', fontsize=font_size); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); """ Explanation: Learning causal relationship - The Feed-Forward Network architecture We want to learn a non-linear simple input-output relationship using a neural network. The data is generated from a cubic polynomial with some noise added. 
End of explanation """ import SimpleNeuralNets.FeedForwardNN.FFNN as ffn import theano.tensor as tensor import theano X = tensor.dmatrix("X") y = tensor.dvector("y") network = ffn.FFNN(X, y, 1, [25], 1, [tensor.tanh], out_activation = None, **{'normalized_gradient' : True}) kwargs = { 'print_loss': False, 'n_iterations': 10000, 'learning_rate': 0.001 } ffn_losses, ffn_test_losses = network.train(y_train.reshape([n_samples, 1]), x_train, test_input=y_test.reshape([n_test_samples, 1]), test_target=x_test, **kwargs) plt.figure(figsize=fig_size); plt.loglog(ffn_losses, label = 'train'); plt.loglog(ffn_test_losses, label = 'test'); plt.xlabel('input', fontsize=font_size); plt.ylabel('loss', fontsize=font_size); plt.legend(fontsize=18); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); x_test_plot = np.linspace(-1.25,1.25,2000).reshape([2000,1]) plt.figure(figsize=fig_size) plt.plot(y_train, x_train, 'ro', alpha=0.05); plt.plot(x_test_plot, network.predict(x_test_plot), linewidth=3, color='k'); plt.xlabel('input', fontsize=font_size); plt.ylabel('target', fontsize=font_size); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); """ Explanation: Causal relation In this section we learn the above relationship using a simple Feed-Forward Network architecture. 
End of explanation """ plt.figure(figsize=fig_size); plt.scatter(x_train, y_train); plt.xlabel('input', fontsize=font_size); plt.ylabel('target', fontsize=font_size); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); import SimpleNeuralNets.FeedForwardNN.FFNN as ffn import theano.tensor as tensor import theano X = tensor.dmatrix("X") y = tensor.dvector("y") network = ffn.FFNN(X, y, 1, [25], 1, [tensor.tanh], out_activation = None, **{'normalized_gradient' : True}) kwargs = { 'print_loss': False, 'n_iterations': 10000, 'learning_rate': 0.001 } losses, test_losses = network.train(x_train.reshape([n_samples, 1]), y_train, test_input=x_test.reshape([n_test_samples, 1]), test_target=y_test, **kwargs) x_test_plot = np.linspace(-1.25,1.25,2000).reshape([2000,1]) plt.figure(figsize=fig_size) plt.plot(x_train, y_train, 'ro', alpha=0.05); plt.plot(x_test_plot, network.predict(x_test_plot), linewidth=3, color='k'); plt.xlabel('input', fontsize=font_size); plt.ylabel('target', fontsize=font_size); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); """ Explanation: Noncausal relation with a Feed-Forward Network Let's switch the input with the target variable. The relation is now a multi-valued function and hence cannot be represented by a straight-forward input-output relation. We see that the FFN is not very apt at learning the target. This is because the FFN has causality fundamentally built into its architecture. 
End of explanation """ def gaussian(x, mu, sigma): return np.exp(-np.power(x - mu,2)/(2 * np.power(sigma, 2))) / np.sqrt(2 * np.pi * np.power(sigma, 2)) def gaussian_array(x, y, mu, sigma, mix): n_dim = mu.shape[0] lst = [] for idx in range(len(x)): val = 0 for dim in range(n_dim): val += mix[dim, idx] * gaussian(y, mu[dim, idx], sigma[dim, idx]) lst.append(val) return np.array(lst).T import SimpleNeuralNets.MixtureDensityModel.MDN as mdn import theano.tensor as tensor import theano X = tensor.dmatrix("X") y = tensor.dvector("y") network = mdn.MDN(X, y, 1, [10, 10], 1, 3, [tensor.tanh, tensor.tanh], **{'normalized_gradient': True}) kwargs = { 'l1_strength': 5, 'learning_rate': 0.01, 'n_iterations': 50000, 'print_loss': False, 'sigma_weight_init' : 0.1 } losses, test_losses = network.train(x_train.reshape([n_samples, 1]), y_train, test_input=x_test.reshape([n_test_samples, 1]), test_target=y_test, **kwargs) plt.figure(figsize=fig_size); plt.semilogx(losses, label = 'train'); plt.semilogx(test_losses, label = 'test'); plt.xlabel('input', fontsize=font_size); plt.ylabel('loss', fontsize=font_size); plt.legend(fontsize=18); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); from matplotlib import cm n_points = 1000 x_test_plot = np.sort(rd.uniform(low=-1.25, high=1.25, size=n_points)).reshape([n_points,1]) y_test_plot = np.linspace(-1.25,1.25, 500) mu, sigma, mix = network.predict_params(x_test_plot) arr = gaussian_array(x_test_plot, y_test_plot, mu, sigma, mix) X, Y = np.meshgrid(x_test_plot, y_test_plot) plt.figure(figsize=fig_size) plt.plot(x_train, y_train, 'bo', alpha = 0.05); plt.contour(X, Y, arr, linewidths=1.5, levels=np.arange(0, 7.5, .5), cmap=cm.hot); fig = plt.gcf(); fig.set_figwidth(12); fig.set_figheight(8); plt.xticks(size = 18); plt.yticks(size = 18); """ Explanation: Learning a Non-Causal Relationship -- the Mixture Density Model To enable a Neural Network to learn multiple outputs we have to 
allow for functions that mimic several values at the same time. A way to do this is to give a probability estimate of how likely it is to pick a given value of the multi-valued relation. To achieve this, the Mixture Density Model (MDN) learns the parameters of a sum of probability distributions. Let's get started.
End of explanation
"""
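As a compact illustration of that idea (a standalone sketch, independent of the MDN class used above; the names mu, sigma and mix only mirror the notebook's variables), here is how a sum of Gaussians weighted by mixing coefficients yields a valid density over the target:

```python
import numpy as np

def mixture_density(y, mu, sigma, mix):
    """p(y) = sum_k mix[k] * Normal(y; mu[k], sigma[k]) for one input x."""
    norm = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return float(np.sum(mix * norm))

# Example parameters such as an MDN might emit for a single input;
# the mixing weights must sum to one (enforced with a softmax in practice).
mu = np.array([-0.5, 0.0, 0.5])
sigma = np.array([0.1, 0.2, 0.1])
mix = np.array([0.3, 0.4, 0.3])

density_at_zero = mixture_density(0.0, mu, sigma, mix)

# Riemann-sum check that the mixture integrates to ~1 over the target range.
ys = np.linspace(-3.0, 3.0, 6001)
total = sum(mixture_density(y, mu, sigma, mix) for y in ys) * (ys[1] - ys[0])
```

Because the network outputs a full density per input rather than a single value, several branches of a multi-valued relation can each receive their own mixture component.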
biof-309-python/BIOF309-2016-Fall
Week_08/Week 08 - 02 - Dictionaries.ipynb
mit
dna = "ATCGATCGATCGTACGCTGA" a_count = dna.count("A") """ Explanation: Dictionaries, (Sets, Tuples) Source: This materials is adapted from Python for Biologists and Learn Python 3 in Y Minutes. You can read more about dictionaries and tuples in the Python for Everyone book. Storing paired data Suppose we want to count the number of As in a DNA sequence. Carrying out the calculation is quite straightforward – in fact it’s one of the first things we did in talking about strings. End of explanation """ dna = "ATCGATCGATCGTACGCTGA" a_count = dna.count("A") t_count = dna.count("T") g_count = dna.count("G") c_count = dna.count("C") """ Explanation: How will our code change if we want to generate a complete list of base counts for the sequence? We’ll add a new variable for each base: End of explanation """ dna = "ATCGATCGATCGTACGCTGA" aa_count = dna.count("AA") at_count = dna.count("AT") ag_count = dna.count("AG") # ...etc... """ Explanation: and now our code is starting to look rather repetitive. It’s not too bad for the four individual bases, but what if we want to generate counts for the 16 dinucleotides: End of explanation """ dna = "ATCGATCGATCGTACGCTGA" aaa_count = dna.count("AAA") aat_count = dna.count("AAT") aag_count = dna.count("AAG") # ...etc... """ Explanation: or the 64 trinucleotides: End of explanation """ dna = "AATGATCGATCGTACGCTGA" all_counts = [] for base1 in ['A', 'T', 'G', 'C']: for base2 in ['A', 'T', 'G', 'C']: for base3 in ['A', 'T', 'G', 'C']: trinucleotide = base1 + base2 + base3 count = dna.count(trinucleotide) print("count is " + str(count) + " for " + trinucleotide) all_counts.append(count) print(all_counts) """ Explanation: For trinucleotides and longer, the situation is particularly bad. The DNA sequence is 20 bases long, so it only contains 18 overlapping trinucleotides in total. This means that we’ll end up with 64 different variables, at least 46 of which will hold the value zero. 
One possible way round this is to store the values in a list. If we use three nested loops, we can generate all possible trinucleotides, calculate the count for each one, and store all the counts in a list:
End of explanation
"""
print("count for TGA is " + str(all_counts[24]))
"""
Explanation: Although the code above is quite compact, and doesn’t require huge numbers of variables, the output shows two problems with this approach: Firstly, the data are still incredibly sparse – the vast majority of the counts are zero. Secondly, the counts themselves are now disconnected from the trinucleotides. If we want to look up the count for a single trinucleotide – for example, TGA – we first have to figure out that TGA was the 25th trinucleotide generated by our loops. Only then can we get the element at the correct index:
End of explanation
"""
dna = "AATGATCGATCGTACGCTGA"
all_trinucleotides = []
all_counts = []
for base1 in ['A', 'T', 'G', 'C']:
    for base2 in ['A', 'T', 'G', 'C']:
        for base3 in ['A', 'T', 'G', 'C']:
            trinucleotide = base1 + base2 + base3
            count = dna.count(trinucleotide)
            all_trinucleotides.append(trinucleotide)
            all_counts.append(count)

print(all_counts)
print(all_trinucleotides)
"""
Explanation: We can try various tricks to get round this problem. What if we generated two lists – one of counts, and one of the trinucleotides themselves?
End of explanation
"""
i = all_trinucleotides.index('TGA')
c = all_counts[i]
print('count for TGA is ' + str(c))
"""
Explanation: Now we have two lists of the same length, with a one-to-one correspondence between the elements: This allows us to look up the count for a given trinucleotide in a slightly more appealing way – we can look up the index of the trinucleotide in the all_trinucleotides list, then get the count at the same index in the all_counts list:
End of explanation
"""
enzymes = { 'EcoRI':r'GAATTC', 'AvaII':r'GG(A|T)CC', 'BisI':r'GC[ATGC]GC' }
"""
Explanation: This is a little bit nicer, but still has major drawbacks. We’re still storing all those zeros, and now we have two lists to keep track of. We need to be incredibly careful when manipulating either of the two lists to make sure that they stay perfectly synchronized – if we make any change to one list but not the other, then there will no longer be a one-to-one correspondence between elements and we’ll get the wrong answer when we try to look up a count. This approach is also slow. To find the index of a given trinucleotide in the all_trinucleotides list, Python has to look at each element one at a time until it finds the one we’re looking for. This means that as the size of the list grows, the time taken to look up the count for a given element will grow alongside it. If we take a step back and think about the problem in more general terms, what we need is a way of storing pairs of data (in this case, trinucleotides and their counts) in a way that allows us to efficiently look up the count for any given trinucleotide. This problem of storing paired data is incredibly common in programming.
We might want to store:
protein sequence names and their sequences
DNA restriction enzyme names and their motifs
codons and their associated amino acid residues
colleagues’ names and their email addresses
sample names and their co-ordinates
words and their definitions
All these are examples of what we call key-value pairs. In each case we have pairs of keys and values:
Key | Value
trinucleotide | count
name | protein sequence
name | restriction enzyme motif
codon | amino acid residue
name | email address
sample | coordinates
word | definition
The last example in this table – words and their definitions – is an interesting one because we have a tool in the physical world for storing this type of data – a dictionary. Python’s tool for solving this type of problem is also called a dictionary (usually abbreviated to dict) and in this section we’ll see how to create and use them.
Creating a dictionary
The syntax for creating a dictionary is similar to that for creating a list, but we use curly brackets rather than square ones. Each pair of data, consisting of a key and a value, is called an item. When storing items in a dictionary, we separate them with commas. Within an individual item, we separate the key and the value with a colon. Here’s a bit of code that creates a dictionary of restriction enzymes (using data from the previous section) with three items:
End of explanation
"""
enzymes = {
 'EcoRI' : r'GAATTC',
 'AvaII' : r'GG(A|T)CC',
 'BisI' : r'GC[ATGC]GC'
}
"""
Explanation: In this case, the keys and values are both strings. Splitting the dictionary definition over several lines makes it easier to read:
End of explanation
"""
print(enzymes['BisI'])
"""
Explanation: but doesn’t affect the code at all. To retrieve a bit of data from the dictionary – i.e.
to look up the motif for a particular enzyme – we write the name of the dictionary, followed by the key in square brackets:
End of explanation
"""
enzymes = {}
enzymes['EcoRI'] = r'GAATTC'
enzymes['AvaII'] = r'GG(A|T)CC'
enzymes['BisI'] = r'GC[ATGC]GC'
"""
Explanation: The code looks very similar to using a list, but instead of giving the index of the element we want, we’re giving the key for the value that we want to retrieve. Dictionaries are a very useful way to store data, but they come with some restrictions. The only types of data we are allowed to use as keys are strings and numbers, so we can’t, for example, create a dictionary where the keys are file objects. Values can be whatever type of data we like. Also, keys must be unique – we can’t store multiple values for the same key. You might think that this makes dicts less useful, but there are ways round the problem of storing multiple values – we won’t need them for the examples in this chapter, but the chapter on complex data structures in Advanced Python for Biologists gives details. In real-life programs, it’s relatively rare that we’ll want to create a dictionary all in one go like in the example above. More often, we’ll want to create an empty dictionary, then add key/value pairs to it (just as we often create an empty list and then add elements to it). To create an empty dictionary we simply write a pair of curly brackets on their own, and to add elements, we use the square-brackets notation on the left-hand side of an assignment. Here’s a bit of code that stores the restriction enzyme data one item at a time:
End of explanation
"""
enzymes = {
 'EcoRI' : r'GAATTC',
 'AvaII' : r'GG(A|T)CC',
 'BisI' : r'GC[ATGC]GC'
}

# remove the EcoRI enzyme from the dict
enzymes.pop('EcoRI')
"""
Explanation: We can delete a key from a dictionary using the pop method.
pop actually returns the value and deletes the key at the same time: End of explanation """ dna = "AATGATCGATCGTACGCTGA" counts = {} for base1 in ['A', 'T', 'G', 'C']: for base2 in ['A', 'T', 'G', 'C']: for base3 in ['A', 'T', 'G', 'C']: trinucleotide = base1 + base2 + base3 count = dna.count(trinucleotide) counts[trinucleotide] = count print(counts) """ Explanation: Let’s take another look at the trinucleotide count example from the start of the section. Here’s how we store the trinucleotides and their counts in a dictionary: End of explanation """ print(counts['TGA']) """ Explanation: We can see from the output that the trinucleotides and their counts are stored together in one variable: {'ACC': 0, 'ATG': 1, 'AAG': 0, 'AAA': 0, 'ATC': 2, 'AAC': 0, 'ATA': 0, 'AGG': 0, 'CCT': 0, 'CTC': 0, 'AGC': 0, 'ACA': 0, 'AGA': 0, 'CAT': 0, 'AAT': 1, 'ATT': 0, 'CTG': 1, 'CTA': 0, 'ACT': 0, 'CAC': 0, 'ACG': 1, 'CAA': 0, 'AGT': 0, 'CAG': 0, 'CCG': 0, 'CCC': 0, 'CTT': 0, 'TAT': 0, 'GGT': 0, 'TGT': 0, 'CGA': 1, 'CCA': 0, 'TCT': 0, 'GAT': 2, 'CGG': 0, 'TTT': 0, 'TGC': 0, 'GGG': 0, 'TAG': 0, 'GGA': 0, 'TAA': 0, 'GGC': 0, 'TAC': 1, 'TTC': 0, 'TCG': 2, 'TTA': 0, 'TTG': 0, 'TCC': 0, 'GAA': 0, 'TGG': 0, 'GCA': 0, 'GTA': 1, 'GCC': 0, 'GTC': 0, 'GCG': 0, 'GTG': 0, 'GAG': 0, 'GTT': 0, 'GCT': 1, 'TGA': 2, 'GAC': 0, 'CGT': 1, 'TCA': 0, 'CGC': 1} We still have a lot of repetitive counts of zero, but looking up the count for a particular trinucleotide is now very straightforward: End of explanation """ dna = "AATGATCGATCGTACGCTGA" counts = {} for base1 in ['A', 'T', 'G', 'C']: for base2 in ['A', 'T', 'G', 'C']: for base3 in ['A', 'T', 'G', 'C']: trinucleotide = base1 + base2 + base3 count = dna.count(trinucleotide) if count > 0: counts[trinucleotide] = count print(counts) """ Explanation: We no longer have to worry about either “memorizing” the order of the counts or maintaining two separate lists. Let’s now see if we can find a way of avoiding storing all those zero counts. 
We can add an if statement that ensures that we only store a count if it’s greater than zero:
End of explanation
"""
print(counts['TGA'])
"""
Explanation: When we look at the output from the above code, we can see that the amount of data we’re storing is much smaller – just the counts for the trinucleotides that are greater than zero: {'ATG': 1, 'ACG': 1, 'ATC': 2, 'GTA': 1, 'CTG': 1, 'CGC': 1, 'GAT': 2, 'CGA': 1, 'AAT': 1, 'TGA': 2, 'GCT': 1, 'TAC': 1, 'TCG': 2, 'CGT': 1} Now we have a new problem to deal with. Looking up the count for a given trinucleotide works fine when the count is positive:
End of explanation
"""
print(counts['AAA'])
"""
Explanation: But when the count is zero, the trinucleotide doesn’t appear as a key in the dictionary:
End of explanation
"""
if 'AAA' in counts:
    print(counts['AAA'])
"""
Explanation: so we will get a KeyError when we try to look it up: There are two possible ways to fix this. We can check for the existence of a key in a dictionary (just like we can check for the existence of an element in a list), and only try to retrieve it once we know it exists:
End of explanation
"""
print(counts['TGA'])
print(counts.get('TGA'))
"""
Explanation: Alternatively, we can use the dictionary’s get method. get usually works just like using square brackets: the following two lines do exactly the same thing:
End of explanation
"""
print("count for TGA is " + str(counts.get('TGA', 0)))
print("count for AAA is " + str(counts.get('AAA', 0)))
print("count for GTA is " + str(counts.get('GTA', 0)))
print("count for TTT is " + str(counts.get('TTT', 0)))
"""
Explanation: The thing that makes get really useful, however, is that it can take an optional second argument, which is the default value to be returned if the key isn’t present in the dictionary.
In this case, we know that if a given trinucleotide doesn’t appear in the dictionary then its count is zero, so we can give zero as the default value and use get to print out the count for any trinucleotide: End of explanation """ for base1 in ['A', 'T', 'G', 'C']: for base2 in ['A', 'T', 'G', 'C']: for base3 in ['A', 'T', 'G', 'C']: trinucleotide = base1 + base2 + base3 if counts.get(trinucleotide, 0) == 2: print(trinucleotide) """ Explanation: As we can see from the output, we now don’t have to worry about whether or not each trinucleotide appears in the dictionary – get takes care of everything and returns zero when appropriate: Iterating over a dictionary What if, instead of looking up a single item from a dictionary, we want to do something for all items? For example, imagine that we wanted to take our counts dictionary variable from the code above and print out all trinucleotides where the count was 2. One way to do it would be to use our three nested loops again to generate all possible trinucleotides, then look up the count for each one and decide whether or not to print it: End of explanation """ print(counts.keys()) """ Explanation: As we can see from the output, this works perfectly well: But it seems inefficient to go through the whole process of generating all possible trinucleotides again, when the information we want – the list of trinucleotides – is already in the dictionary. A better approach would be to read the list of keys directly from the dictionary, which is what the keys method does. 
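As an aside not found in the original text, the get method's default argument also gives a neat single-pass alternative to the three nested loops: slide a window along the sequence and bump each trinucleotide's count, defaulting to zero the first time it is seen.

```python
dna = "AATGATCGATCGTACGCTGA"

counts = {}
for start in range(len(dna) - 2):          # one window per overlapping trinucleotide
    trinucleotide = dna[start:start + 3]
    counts[trinucleotide] = counts.get(trinucleotide, 0) + 1

print(counts.get('TGA', 0))   # 2, as before
print(counts.get('AAA', 0))   # 0 - never stored, so the default kicks in
```

This visits the 18 windows the sequence actually contains instead of testing all 64 possibilities, and it never stores a zero count.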
Iterating over keys
When used on a dictionary, the keys method returns a list of all the keys in the dictionary:
End of explanation
"""
for trinucleotide in counts.keys():
    if counts.get(trinucleotide) == 2:
        print(trinucleotide)
"""
Explanation: Looking at the output confirms that this is the list of trinucleotides we want to consider (remember that we’re looking for trinucleotides with a count of two, so we don’t need to consider ones that aren’t in the dictionary as we already know that they have a count of zero): ['ATG', 'ACG', 'ATC', 'GTA', 'CTG', 'CGC', 'GAT', 'CGA', 'AAT', 'TGA', 'GCT', 'TAC', 'TCG', 'CGT'] Using keys, our code for printing out all the trinucleotides that appear twice in the DNA sequence becomes a lot more concise:
End of explanation
"""
for key in my_dict.keys():
    value = my_dict.get(key)
    # do something with key and value
"""
Explanation: This version prints exactly the same set of trinucleotides as the more verbose method: Before we move on, take a moment to compare the output immediately above this paragraph with the output from the three-loop version from earlier in this section. You’ll notice that while the set of trinucleotides is the same, the order in which they appear is different. This illustrates an important point about dictionaries – they are inherently unordered. That means that when we use the keys method to iterate over a dictionary, we can’t rely on processing the items in the same order that we added them. This is in contrast to lists, which always maintain the same order when looping. If we want to control the order in which keys are printed we can use the sorted method to sort the list before processing it:
for trinucleotide in sorted(counts.keys()):
    if counts.get(trinucleotide) == 2:
        print(trinucleotide)
Iterating over items
In the example code above, the first thing we need to do inside the loop is to look up the value for the current key.
This is a very common pattern when iterating over dictionaries – so common, in fact, that Python has a special shorthand for it. Instead of doing this: End of explanation """ for key, value in my_dict.items(): # do something with key and value """ Explanation: We can use the items method to iterate over pairs of data, rather than just keys: End of explanation """ for trinucleotide, count in counts.items(): if count == 2: print(trinucleotide) """ Explanation: The items method does something slightly different from all the other methods we’ve seen so far in this book; rather than returning a single value, or a list of values, it returns a list of pairs of values. That’s why we have to give two variable names at the start of the loop. Here’s how we can use the items method to process our dictionary of trinucleotide counts just like before: End of explanation """ # Tuples are like lists but are immutable. tup = (1, 2, 3) tup[0] # => 1 tup[0] = 3 # Raises a TypeError # Note that a tuple of length one has to have a comma after the last element but # tuples of other lengths, even zero, do not. type((1)) # => <class 'int'> type((1,)) # => <class 'tuple'> type(()) # => <class 'tuple'> # You can do most of the list operations on tuples too len(tup) # => 3 tup + (4, 5, 6) # => (1, 2, 3, 4, 5, 6) tup[:2] # => (1, 2) 2 in tup # => True # You can unpack tuples (or lists) into variables a, b, c = (1, 2, 3) # a is now 1, b is now 2 and c is now 3 # You can also do extended unpacking a, *b, c = (1, 2, 3, 4) # a is now 1, b is now [2, 3] and c is now 4 # Tuples are created by default if you leave out the parentheses d, e, f = 4, 5, 6 # Now look how easy it is to swap two values e, d = d, e # d is now 5 and e is now 4 # Dictionaries store mappings empty_dict = {} # Here is a prefilled dictionary filled_dict = {"one": 1, "two": 2, "three": 3} # Note keys for dictionaries have to be immutable types. 
This is to ensure that # the key can be converted to a constant hash value for quick look-ups. # Immutable types include ints, floats, strings, tuples. invalid_dict = {[1,2,3]: "123"} # => Raises a TypeError: unhashable type: 'list' valid_dict = {(1,2,3):[1,2,3]} # Values can be of any type, however. # Look up values with [] filled_dict["one"] # => 1 # Get all keys as an iterable with "keys()". We need to wrap the call in list() # to turn it into a list. We'll talk about those later. Note - Dictionary key # ordering is not guaranteed. Your results might not match this exactly. list(filled_dict.keys()) # => ["three", "two", "one"] # Get all values as an iterable with "values()". Once again we need to wrap it # in list() to get it out of the iterable. Note - Same as above regarding key # ordering. list(filled_dict.values()) # => [3, 2, 1] # Check for existence of keys in a dictionary with "in" "one" in filled_dict # => True 1 in filled_dict # => False # Looking up a non-existing key is a KeyError filled_dict["four"] # KeyError # Use "get()" method to avoid the KeyError filled_dict.get("one") # => 1 filled_dict.get("four") # => None # The get method supports a default argument when the value is missing filled_dict.get("one", 4) # => 1 filled_dict.get("four", 4) # => 4 # "setdefault()" inserts into a dictionary only if the given key isn't present filled_dict.setdefault("five", 5) # filled_dict["five"] is set to 5 filled_dict.setdefault("five", 6) # filled_dict["five"] is still 5 # Adding to a dictionary filled_dict.update({"four":4}) # => {"one": 1, "two": 2, "three": 3, "four": 4} #filled_dict["four"] = 4 #another way to add to dict # Remove keys from a dictionary with del del filled_dict["one"] # Removes the key "one" from filled dict # Sets store ... well sets empty_set = set() # Initialize a set with a bunch of values. Yeah, it looks a bit like a dict. Sorry. 
some_set = {1, 1, 2, 2, 3, 4} # some_set is now {1, 2, 3, 4} # Similar to keys of a dictionary, elements of a set have to be immutable. invalid_set = {[1], 1} # => Raises a TypeError: unhashable type: 'list' valid_set = {(1,), 1} # Can set new variables to a set filled_set = some_set # Add one more item to the set filled_set.add(5) # filled_set is now {1, 2, 3, 4, 5} # Do set intersection with & other_set = {3, 4, 5, 6} filled_set & other_set # => {3, 4, 5} # Do set union with | filled_set | other_set # => {1, 2, 3, 4, 5, 6} # Do set difference with - {1, 2, 3, 4} - {2, 3, 5} # => {1, 4} # Do set symmetric difference with ^ {1, 2, 3, 4} ^ {2, 3, 5} # => {1, 4, 5} # Check if set on the left is a superset of set on the right {1, 2} >= {1, 2, 3} # => False # Check if set on the left is a subset of set on the right {1, 2} <= {1, 2, 3} # => True # Check for existence in a set with in 2 in filled_set # => True 10 in filled_set # => False """ Explanation: This method is generally preferred for iterating over items in a dictionary, as it makes the intention of the code very clear. Many of the problems that we solve by iterating over dicts can also be solved using comprehensions – there’s a whole chapter devoted to comprehensions in Advanced Python for Biologists, so take a look. End of explanation """
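One closing aside that goes beyond the text above: the standard library's collections.Counter packages the whole counting pattern from this chapter. It behaves like a dict whose missing keys simply count as zero, so neither get() with a default nor a KeyError check is needed.

```python
from collections import Counter

dna = "AATGATCGATCGTACGCTGA"

# Count every overlapping trinucleotide in a single pass.
counts = Counter(dna[i:i + 3] for i in range(len(dna) - 2))

print(counts['TGA'])   # 2
print(counts['AAA'])   # 0 - no KeyError, and nothing extra is stored
```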
johnnyliu27/openmc
examples/jupyter/mg-mode-part-iii.ipynb
mit
import os

import matplotlib.pyplot as plt
import numpy as np

import openmc

%matplotlib inline
"""
Explanation: This Notebook illustrates the use of the more advanced features of OpenMC's multi-group mode and the openmc.mgxs.Library class. During this process, this notebook will illustrate the following features: Calculation of multi-group cross sections for a simplified BWR 8x8 assembly with isotropic and angle-dependent MGXS. Automated creation and storage of MGXS with openmc.mgxs.Library Fission rate comparison between continuous-energy and the two multi-group OpenMC cases. To avoid focusing on unimportant details, the BWR assembly in this notebook is greatly simplified. The descriptions which follow will point out some areas of simplification.
Generate Input Files
End of explanation
"""
# Instantiate some elements
elements = {}
for elem in ['H', 'O', 'U', 'Zr', 'Gd', 'B', 'C', 'Fe']:
    elements[elem] = openmc.Element(elem)
"""
Explanation: We will be running a rodded 8x8 assembly with Gadolinia fuel pins. Let's create all the elemental data we would need for this case.
End of explanation
"""
materials = {}

# Fuel
materials['Fuel'] = openmc.Material(name='Fuel')
materials['Fuel'].set_density('g/cm3', 10.32)
materials['Fuel'].add_element(elements['O'], 2)
materials['Fuel'].add_element(elements['U'], 1, enrichment=3.)

# Gadolinia bearing fuel
materials['Gad'] = openmc.Material(name='Gad')
materials['Gad'].set_density('g/cm3', 10.23)
materials['Gad'].add_element(elements['O'], 2)
materials['Gad'].add_element(elements['U'], 1, enrichment=3.)
materials['Gad'].add_element(elements['Gd'], .02) # Zircaloy materials['Zirc2'] = openmc.Material(name='Zirc2') materials['Zirc2'].set_density('g/cm3', 6.55) materials['Zirc2'].add_element(elements['Zr'], 1) # Boiling Water materials['Water'] = openmc.Material(name='Water') materials['Water'].set_density('g/cm3', 0.6) materials['Water'].add_element(elements['H'], 2) materials['Water'].add_element(elements['O'], 1) # Boron Carbide for the Control Rods materials['B4C'] = openmc.Material(name='B4C') materials['B4C'].set_density('g/cm3', 0.7 * 2.52) materials['B4C'].add_element(elements['B'], 4) materials['B4C'].add_element(elements['C'], 1) # Steel materials['Steel'] = openmc.Material(name='Steel') materials['Steel'].set_density('g/cm3', 7.75) materials['Steel'].add_element(elements['Fe'], 1) """ Explanation: With the elements we defined, we will now create the materials we will use later. Material Definition Simplifications: This model will be run at room temperature so the NNDC ENDF-B/VII.1 data set can be used but the water density will be representative of a module with around 20% voiding. This water density will be non-physically used in all regions of the problem. Steel is composed of more than just iron, but we will only treat it as such here. End of explanation """ # Instantiate a Materials object materials_file = openmc.Materials(materials.values()) # Export to "materials.xml" materials_file.export_to_xml() """ Explanation: We can now create a Materials object that can be exported to an actual XML file. End of explanation """ # Set constants for the problem and assembly dimensions fuel_rad = 0.53213 clad_rad = 0.61341 Np = 8 pin_pitch = 1.6256 length = float(Np + 2) * pin_pitch assembly_width = length - 2. * pin_pitch rod_thick = 0.47752 / 2. + 0.14224 rod_span = 7. 
* pin_pitch surfaces = {} # Create boundary planes to surround the geometry surfaces['Global x-'] = openmc.XPlane(x0=0., boundary_type='reflective') surfaces['Global x+'] = openmc.XPlane(x0=length, boundary_type='reflective') surfaces['Global y-'] = openmc.YPlane(y0=0., boundary_type='reflective') surfaces['Global y+'] = openmc.YPlane(y0=length, boundary_type='reflective') # Create cylinders for the fuel and clad surfaces['Fuel Radius'] = openmc.ZCylinder(R=fuel_rad) surfaces['Clad Radius'] = openmc.ZCylinder(R=clad_rad) surfaces['Assembly x-'] = openmc.XPlane(x0=pin_pitch) surfaces['Assembly x+'] = openmc.XPlane(x0=length - pin_pitch) surfaces['Assembly y-'] = openmc.YPlane(y0=pin_pitch) surfaces['Assembly y+'] = openmc.YPlane(y0=length - pin_pitch) # Set surfaces for the control blades surfaces['Top Blade y-'] = openmc.YPlane(y0=length - rod_thick) surfaces['Top Blade x-'] = openmc.XPlane(x0=pin_pitch) surfaces['Top Blade x+'] = openmc.XPlane(x0=rod_span) surfaces['Left Blade x+'] = openmc.XPlane(x0=rod_thick) surfaces['Left Blade y-'] = openmc.YPlane(y0=length - rod_span) surfaces['Left Blade y+'] = openmc.YPlane(y0=9. * pin_pitch) """ Explanation: Now let's move on to the geometry. The first step is to define some constants which will be used to set our dimensions and then we can start creating the surfaces and regions for the problem, the 8x8 lattice, the rods and the control blade. 
Before proceeding let's discuss some simplifications made to the problem geometry: - To enable the use of an equal-width mesh for running the multi-group calculations, the intra-assembly gap was increased to the same size as the pitch of the 8x8 fuel lattice - The can is neglected - The pin-in-water geometry for the control blade is ignored and instead the blade is a solid block of B4C - Rounded corners are ignored - There is no cladding for the water rod End of explanation """ # Set regions for geometry building regions = {} regions['Global'] = \ (+surfaces['Global x-'] & -surfaces['Global x+'] & +surfaces['Global y-'] & -surfaces['Global y+']) regions['Assembly'] = \ (+surfaces['Assembly x-'] & -surfaces['Assembly x+'] & +surfaces['Assembly y-'] & -surfaces['Assembly y+']) regions['Fuel'] = -surfaces['Fuel Radius'] regions['Clad'] = +surfaces['Fuel Radius'] & -surfaces['Clad Radius'] regions['Water'] = +surfaces['Clad Radius'] regions['Top Blade'] = \ (+surfaces['Top Blade y-'] & -surfaces['Global y+']) & \ (+surfaces['Top Blade x-'] & -surfaces['Top Blade x+']) regions['Top Steel'] = \ (+surfaces['Global x-'] & -surfaces['Top Blade x-']) & \ (+surfaces['Top Blade y-'] & -surfaces['Global y+']) regions['Left Blade'] = \ (+surfaces['Left Blade y-'] & -surfaces['Left Blade y+']) & \ (+surfaces['Global x-'] & -surfaces['Left Blade x+']) regions['Left Steel'] = \ (+surfaces['Left Blade y+'] & -surfaces['Top Blade y-']) & \ (+surfaces['Global x-'] & -surfaces['Left Blade x+']) regions['Corner Blade'] = \ regions['Left Steel'] | regions['Top Steel'] regions['Water Fill'] = \ regions['Global'] & ~regions['Assembly'] & \ ~regions['Top Blade'] & ~regions['Left Blade'] &\ ~regions['Corner Blade'] """ Explanation: With the surfaces defined, we can now construct regions with these surfaces before we use those to create cells End of explanation """ universes = {} cells = {} for name, mat, in zip(['Fuel Pin', 'Gd Pin'], [materials['Fuel'], materials['Gad']]): universes[name] = 
openmc.Universe(name=name) cells[name] = openmc.Cell(name=name) cells[name].fill = mat cells[name].region = regions['Fuel'] universes[name].add_cell(cells[name]) cells[name + ' Clad'] = openmc.Cell(name=name + ' Clad') cells[name + ' Clad'].fill = materials['Zirc2'] cells[name + ' Clad'].region = regions['Clad'] universes[name].add_cell(cells[name + ' Clad']) cells[name + ' Water'] = openmc.Cell(name=name + ' Water') cells[name + ' Water'].fill = materials['Water'] cells[name + ' Water'].region = regions['Water'] universes[name].add_cell(cells[name + ' Water']) universes['Hole'] = openmc.Universe(name='Hole') cells['Hole'] = openmc.Cell(name='Hole') cells['Hole'].fill = materials['Water'] universes['Hole'].add_cell(cells['Hole']) """ Explanation: We will begin building the 8x8 assembly. To do that we will have to build the cells and universe for each pin type (fuel, gadolinia-fuel, and water). End of explanation """ # Create fuel assembly Lattice universes['Assembly'] = openmc.RectLattice(name='Assembly') universes['Assembly'].pitch = (pin_pitch, pin_pitch) universes['Assembly'].lower_left = [pin_pitch, pin_pitch] f = universes['Fuel Pin'] g = universes['Gd Pin'] h = universes['Hole'] lattices = [[f, f, f, f, f, f, f, f], [f, f, f, f, f, f, f, f], [f, f, f, g, f, g, f, f], [f, f, g, h, h, f, g, f], [f, f, f, h, h, f, f, f], [f, f, g, f, f, f, g, f], [f, f, f, g, f, g, f, f], [f, f, f, f, f, f, f, f]] # Store the array of lattice universes universes['Assembly'].universes = lattices cells['Assembly'] = openmc.Cell(name='Assembly') cells['Assembly'].fill = universes['Assembly'] cells['Assembly'].region = regions['Assembly'] """ Explanation: Let's use this pin information to create our 8x8 assembly. 
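As a quick sanity check on the lattice map (a standalone sketch using letters in place of the universe objects above; not part of the original notebook), we can count how many pins of each type the 8x8 layout contains:

```python
# Letter version of the lattice above: f = fuel, g = gadolinia fuel, h = water hole
layout = [
    "ffffffff",
    "ffffffff",
    "fffgfgff",
    "ffghhfgf",
    "fffhhfff",
    "ffgfffgf",
    "fffgfgff",
    "ffffffff",
]

flat = "".join(layout)
pin_counts = {pin: flat.count(pin) for pin in "fgh"}
print(pin_counts)   # {'f': 52, 'g': 8, 'h': 4} - 64 positions in total
```

Eight gadolinia pins and a 2x2 water hole is what the map is meant to encode, so a mismatch here would flag a transcription error in the nested lists.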
End of explanation
"""
# The top portion of the blade, poisoned with B4C
cells['Top Blade'] = openmc.Cell(name='Top Blade')
cells['Top Blade'].fill = materials['B4C']
cells['Top Blade'].region = regions['Top Blade']

# The left portion of the blade, poisoned with B4C
cells['Left Blade'] = openmc.Cell(name='Left Blade')
cells['Left Blade'].fill = materials['B4C']
cells['Left Blade'].region = regions['Left Blade']

# The top-left corner portion of the blade, with no poison
cells['Corner Blade'] = openmc.Cell(name='Corner Blade')
cells['Corner Blade'].fill = materials['Steel']
cells['Corner Blade'].region = regions['Corner Blade']

# Water surrounding all other cells and our assembly
cells['Water Fill'] = openmc.Cell(name='Water Fill')
cells['Water Fill'].fill = materials['Water']
cells['Water Fill'].region = regions['Water Fill']
"""
Explanation: So far we have the rods and water within the assembly, but we still need the control blade and the water which fills the rest of the space. We will create those cells now.
End of explanation
"""
# Create root Universe
universes['Root'] = openmc.Universe(name='root universe', universe_id=0)
universes['Root'].add_cells([cells['Assembly'], cells['Top Blade'],
                             cells['Corner Blade'], cells['Left Blade'],
                             cells['Water Fill']])
"""
Explanation: OpenMC requires that there is a "root" universe. Let us create our root universe and fill it with the cells just defined.
End of explanation
"""
universes['Root'].plot(origin=(length / 2., length / 2., 0.),
                       pixels=(500, 500), width=(length, length),
                       color_by='material',
                       colors={materials['Fuel']: (1., 0., 0.),
                               materials['Gad']: (1., 1., 0.),
                               materials['Zirc2']: (0.5, 0.5, 0.5),
                               materials['Water']: (0.0, 0.0, 1.0),
                               materials['B4C']: (0.0, 0.0, 0.0),
                               materials['Steel']: (0.4, 0.4, 0.4)})
"""
Explanation: What do you do after you create your model? Check it! We will use the plotting capabilities of the Python API to do this for us.
When doing so, we will color by material with fuel being red, gadolinia-fuel as yellow, zirc cladding as a light grey, water as blue, B4C as black and steel as a darker gray.
End of explanation
"""
# Create Geometry and set root universe
geometry = openmc.Geometry(universes['Root'])

# Export to "geometry.xml"
geometry.export_to_xml()
"""
Explanation: Looks pretty good to us! We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
"""
# OpenMC simulation parameters
batches = 1000
inactive = 20
particles = 1000

# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.verbosity = 4

# Create an initial uniform spatial source distribution over fissionable zones
bounds = [pin_pitch, pin_pitch, 10, length - pin_pitch, length - pin_pitch, 10]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)

# Export to "settings.xml"
settings_file.export_to_xml()
"""
Explanation: With the geometry and materials finished, we now just need to define simulation parameters, including how to run the model and what we want to learn from the model (i.e., define the tallies). We will start with our simulation parameters in the next block. This will include setting the run strategy, telling OpenMC not to bother creating a tallies.out file, and limiting the verbosity of our output to just the header and results to not clog up our notebook with results from each batch.
End of explanation
"""
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups()
groups.group_edges = np.array([0., 0.625, 20.0e6])
"""
Explanation: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
"""
# Initialize a 2-group Isotropic MGXS Library for OpenMC
iso_mgxs_lib = openmc.mgxs.Library(geometry)
iso_mgxs_lib.energy_groups = groups
"""
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with the problem geometry. This library will use the default setting of isotropically weighting the multi-group cross sections.
End of explanation
"""
# Specify multi-group cross section types to compute
iso_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
                           'nu-scatter matrix', 'multiplicity matrix', 'chi']
"""
Explanation: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. Just like before, we will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", "fission", "nu-scatter matrix", "multiplicity matrix", and "chi". "multiplicity matrix" is needed to provide OpenMC's multi-group mode with the additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions) explicitly.
End of explanation
"""
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [10, 10]
mesh.lower_left = [0., 0.]
mesh.upper_right = [length, length]

# Specify a "mesh" domain type for the cross section tally filters
iso_mgxs_lib.domain_type = "mesh"

# Specify the mesh over which to compute multi-group cross sections
iso_mgxs_lib.domains = [mesh]
"""
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections.
The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. For the sake of example, we will use a mesh to gather our cross sections. This mesh will be set up so there is one mesh bin for every pin cell.
End of explanation
"""
# Set the scattering format to histogram and then define the number of bins

# Avoid a warning that corrections don't make sense with histogram data
iso_mgxs_lib.correction = None

# Set the histogram data
iso_mgxs_lib.scatter_format = 'histogram'
iso_mgxs_lib.histogram_bins = 11
"""
Explanation: Now we will set the scattering treatment that we wish to use. In the mg-mode-part-ii notebook, the cross sections were generated with a typical P3 scattering expansion in mind. Now, however, we will use a more advanced technique: OpenMC will directly provide us a histogram of the change-in-angle (i.e., $\mu$) distribution. Whereas in the mg-mode-part-ii notebook all that was required was to set the legendre_order attribute of mgxs_lib, here we have slightly more work: we have to tell the Library that we want to use a histogram distribution (as it is not the default), and then tell it the number of bins. For this problem we will use 11 bins.
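To make concrete what those 11 bins represent — equal-width intervals of the scattering cosine $\mu$ over $[-1, 1]$ — here is a small standalone sketch. This is plain Python, not an OpenMC API call, and the helper name is ours:

```python
def histogram_mu_edges(n_bins):
    """Bin edges for an equal-width histogram of the scattering cosine mu in [-1, 1]."""
    width = 2.0 / n_bins
    return [-1.0 + i * width for i in range(n_bins + 1)]

edges = histogram_mu_edges(11)
print(len(edges))  # prints 12 -- one more edge than bins
print(edges[0], edges[-1])
```

Each of the 11 bins thus spans a width of 2/11 in $\mu$.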
End of explanation """ # Let's repeat all of the above for an angular MGXS library so we can gather # that in the same continuous-energy calculation angle_mgxs_lib = openmc.mgxs.Library(geometry) angle_mgxs_lib.energy_groups = groups angle_mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission', 'nu-scatter matrix', 'multiplicity matrix', 'chi'] angle_mgxs_lib.domain_type = "mesh" angle_mgxs_lib.domains = [mesh] angle_mgxs_lib.correction = None angle_mgxs_lib.scatter_format = 'histogram' angle_mgxs_lib.histogram_bins = 11 # Set the angular bins to 8 angle_mgxs_lib.num_azimuthal = 8 """ Explanation: Ok, we made our isotropic library with histogram-scattering! Now why don't we go ahead and create a library to do the same, but with angle-dependent MGXS. That is, we will avoid making the isotropic flux weighting approximation and instead just store a cross section for every polar and azimuthal angle pair. To do this with the Python API and OpenMC, all we have to do is set the number of polar and azimuthal bins. Here we only need to set the number of bins, the API will convert all of angular space into equal-width bins for us. Since this problem is symmetric in the z-direction, we only need to concern ourselves with the azimuthal variation here. We will use eight angles. Ok, we will repeat all the above steps for a new library object, but will also set the number of azimuthal bins at the end. End of explanation """ # Check the libraries - if no errors are raised, then the library is satisfactory. iso_mgxs_lib.check_library_for_openmc_mgxs() angle_mgxs_lib.check_library_for_openmc_mgxs() """ Explanation: Now that our libraries have been setup, let's make sure they contain the types of cross sections which meet the needs of OpenMC's multi-group solver. 
Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of the mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections. End of explanation """ # Construct all tallies needed for the multi-group cross section library iso_mgxs_lib.build_library() angle_mgxs_lib.build_library() """ Explanation: Lastly, we use our two Library objects to construct the tallies needed to compute all of the requested multi-group cross sections in each domain. We expect a warning here telling us that the default Legendre order is not meaningful since we are using histogram scattering. End of explanation """ # Create a "tallies.xml" file for the MGXS Library tallies_file = openmc.Tallies() iso_mgxs_lib.add_to_tallies_file(tallies_file, merge=True) angle_mgxs_lib.add_to_tallies_file(tallies_file, merge=True) """ Explanation: The tallies within the libraries can now be exported to a "tallies.xml" input file for OpenMC. End of explanation """ # Instantiate tally Filter mesh_filter = openmc.MeshFilter(mesh) # Instantiate the Tally tally = openmc.Tally(name='mesh tally') tally.filters = [mesh_filter] tally.scores = ['fission'] # Add tally to collection tallies_file.append(tally, merge=True) # Export all tallies to a "tallies.xml" file tallies_file.export_to_xml() """ Explanation: In addition, we instantiate a fission rate mesh tally for eventual comparison of results. End of explanation """ # Run OpenMC openmc.run() """ Explanation: Time to run the calculation and get our results! End of explanation """ # Move the StatePoint File ce_spfile = './statepoint_ce.h5' os.rename('statepoint.' 
+ str(batches) + '.h5', ce_spfile) # Move the Summary file ce_sumfile = './summary_ce.h5' os.rename('summary.h5', ce_sumfile) """ Explanation: To make the files available and not be over-written when running the multi-group calculation, we will now rename the statepoint and summary files. End of explanation """ # Load the statepoint file, but not the summary file, as it is a different filename than expected. sp = openmc.StatePoint(ce_spfile, autolink=False) """ Explanation: Tally Data Processing Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file, but not automatically linking the summary file. End of explanation """ su = openmc.Summary(ce_sumfile) sp.link_with_summary(su) """ Explanation: In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. This is necessary for the openmc.Library to properly process the tally data. We first create a Summary object and link it with the statepoint. Normally this would not need to be performed, but since we have renamed our summary file to avoid conflicts with the Multi-Group calculation's summary file, we will load this in explicitly. End of explanation """ # Initialize MGXS Library with OpenMC statepoint data iso_mgxs_lib.load_from_statepoint(sp) angle_mgxs_lib.load_from_statepoint(sp) """ Explanation: The statepoint is now ready to be analyzed. To create our libraries we simply have to load the tallies from the statepoint into each Library and our MGXS objects will compute the cross sections for us under-the-hood. 
End of explanation """ # Allow the API to create our Library, materials, and geometry file iso_mgxs_file, materials_file, geometry_file = iso_mgxs_lib.create_mg_mode() # Tell the materials file what we want to call the multi-group library materials_file.cross_sections = 'mgxs.h5' # Write our newly-created files to disk iso_mgxs_file.export_to_hdf5('mgxs.h5') materials_file.export_to_xml() geometry_file.export_to_xml() """ Explanation: The next step will be to prepare the input for OpenMC to use our newly created multi-group data. Isotropic Multi-Group OpenMC Calculation We will now use the Library to produce the isotropic multi-group cross section data set for use by the OpenMC multi-group solver. If the model to be run in multi-group mode is the same as the continuous-energy mode, the openmc.mgxs.Library class has the ability to directly create the multi-group geometry, materials, and multi-group library for us. Note that this feature is only useful if the MG model is intended to replicate the CE geometry - it is not useful if the CE library is not the same geometry (like it would be for generating MGXS from a generic spectral region). This method creates and assigns the materials automatically, including creating a geometry which is equivalent to our mesh cells for which the cross sections were derived. End of explanation """ # Set the energy mode settings_file.energy_mode = 'multi-group' # Export to "settings.xml" settings_file.export_to_xml() """ Explanation: Next, we can make the changes we need to the settings file. These changes are limited to telling OpenMC to run a multi-group calculation and provide the location of our multi-group cross section file. 
End of explanation
"""
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()

# Add our fission rate mesh tally
tallies_file.append(tally)

# Export to "tallies.xml"
tallies_file.export_to_xml()
"""
Explanation: Let's clear up the tallies file so it doesn't include all the extra tallies for re-generating a multi-group library.
End of explanation
"""
geometry_file.root_universe.plot(origin=(length / 2., length / 2., 0.),
                                 pixels=(300, 300), width=(length, length),
                                 color_by='material')
"""
Explanation: Before running the calculation let's look at our meshed model. It might not be interesting, but let's take a look anyways.
End of explanation
"""
# Execute the Isotropic MG OpenMC Run
openmc.run()
"""
Explanation: So, we see a 10x10 grid with a different color for every material, sounds good! At this point, the problem is set up and we can run the multi-group calculation.
End of explanation
"""
# Move the StatePoint File
iso_mg_spfile = './statepoint_mg_iso.h5'
os.rename('statepoint.' + str(batches) + '.h5', iso_mg_spfile)

# Move the Summary file
iso_mg_sumfile = './summary_mg_iso.h5'
os.rename('summary.h5', iso_mg_sumfile)
"""
Explanation: Before we go to the angle-dependent case, let's save the StatePoint and Summary files so they don't get overwritten.
End of explanation
"""
# Let's repeat for the angle-dependent case
angle_mgxs_lib.load_from_statepoint(sp)
angle_mgxs_file, materials_file, geometry_file = angle_mgxs_lib.create_mg_mode()
angle_mgxs_file.export_to_hdf5()
"""
Explanation: Angle-Dependent Multi-Group OpenMC Calculation
Let's now run the calculation with the angle-dependent multi-group cross sections. This process will be exactly the same as above, except this time we will use the angle-dependent Library as our starting point. We do not need to re-write the materials, geometry, or tallies file to disk since they are the same as for the isotropic case.
End of explanation """ # Execute the angle-dependent OpenMC Run openmc.run() """ Explanation: At this point, the problem is set up and we can run the multi-group calculation. End of explanation """ # Load the isotropic statepoint file iso_mgsp = openmc.StatePoint(iso_mg_spfile, autolink=False) iso_mgsum = openmc.Summary(iso_mg_sumfile) iso_mgsp.link_with_summary(iso_mgsum) # Load the angle-dependent statepoint file angle_mgsp = openmc.StatePoint('statepoint.' + str(batches) + '.h5') """ Explanation: Results Comparison In this section we will compare the eigenvalues and fission rate distributions of the continuous-energy, isotropic multi-group and angle-dependent multi-group cases. We will begin by loading the multi-group statepoint files, first the isotropic, then angle-dependent. The angle-dependent was not renamed, so we can autolink its summary. End of explanation """ ce_keff = sp.k_combined iso_mg_keff = iso_mgsp.k_combined angle_mg_keff = angle_mgsp.k_combined # Find eigenvalue bias iso_bias = 1.0E5 * (ce_keff - iso_mg_keff) angle_bias = 1.0E5 * (ce_keff - angle_mg_keff) """ Explanation: Eigenvalue Comparison Next, we can load the eigenvalues for comparison and do that comparison End of explanation """ print('Isotropic to CE Bias [pcm]: {0:1.1f}'.format(iso_bias.nominal_value)) print('Angle to CE Bias [pcm]: {0:1.1f}'.format(angle_bias.nominal_value)) """ Explanation: Let's compare the eigenvalues in units of pcm End of explanation """ sp_files = [sp, iso_mgsp, angle_mgsp] titles = ['Continuous-Energy', 'Isotropic Multi-Group', 'Angle-Dependent Multi-Group'] fiss_rates = [] fig = plt.figure(figsize=(12, 6)) for i, (case, title) in enumerate(zip(sp_files, titles)): # Get our mesh tally information mesh_tally = case.get_tally(name='mesh tally') fiss_rates.append(mesh_tally.get_values(scores=['fission'])) # Reshape the array fiss_rates[-1].shape = mesh.dimension # Normalize the fission rates fiss_rates[-1] /= np.mean(fiss_rates[-1][fiss_rates[-1] > 0.]) # Set 0s 
to NaNs so they show as white fiss_rates[-1][fiss_rates[-1] == 0.] = np.nan fig = plt.subplot(1, len(titles), i + 1) # Plot only the fueled regions plt.imshow(fiss_rates[-1][1:-1, 1:-1], cmap='jet', origin='lower', vmin=0.4, vmax=4.) plt.title(title + '\nFission Rates') """ Explanation: We see a large reduction in error by switching to the usage of angle-dependent multi-group cross sections! Of course, this rodded and partially voided BWR problem was chosen specifically to exacerbate the angular variation of the reaction rates (and thus cross sections). Such improvements should not be expected in every case, especially if localized absorbers are not present. It is important to note that both eigenvalues can be improved by the application of finer geometric or energetic discretizations, but this shows that the angle discretization may be a factor for consideration. Fission Rate Distribution Comparison Next we will visualize the mesh tally results obtained from our three cases. This will be performed by first obtaining the one-group fission rate tally information from our state point files. After we have this information we will re-shape the data to match the original mesh laydown. We will then normalize, and finally create side-by-side plots of all. 
End of explanation
"""
# Calculate and plot the ratios of MG to CE for each of the 2 MG cases
ratios = []
fig, axes = plt.subplots(figsize=(12, 6), nrows=1, ncols=2)
for i, (case, title, axis) in enumerate(zip(sp_files[1:], titles[1:], axes.flat)):
    # Get our ratio relative to the CE results (in fiss_rates[0])
    ratios.append(np.divide(fiss_rates[i + 1], fiss_rates[0]))

    # Plot only the fueled regions
    im = axis.imshow(ratios[-1][1:-1, 1:-1], cmap='bwr', origin='lower',
                     vmin=0.9, vmax=1.1)
    axis.set_title(title + '\nFission Rates Relative\nto Continuous-Energy')

# Add a color bar
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
fig.colorbar(im, cax=cbar_ax)
"""
Explanation: With this colormap, dark blue is the lowest power and dark red is the highest power. We see general agreement between the fission rate distributions, but it looks like there may be less of a gradient near the rods in the continuous-energy and angle-dependent MGXS cases than in the isotropic MGXS case. To better see the differences, let's plot the ratios of the fission powers for our two multi-group cases compared to the continuous-energy case.
End of explanation
"""
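As a closing note, the pcm conversion used in the eigenvalue comparison above is just a rescaling of the k-effective difference. A minimal sketch with plain floats (the notebook's sp.k_combined values also carry uncertainties, which this ignores):

```python
def bias_pcm(k_reference, k_test):
    """Eigenvalue bias in pcm (per cent mille); 1 pcm = 1e-5 in k-effective."""
    return 1.0e5 * (k_reference - k_test)

# a multi-group keff of 0.99800 against a continuous-energy keff of 1.00000
print(bias_pcm(1.00000, 0.99800))  # ~200 pcm
```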
vipmunot/Data-Science-Course
Data Visualization/Lab 3/lab03_Munot_Vipul.ipynb
mit
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
"""
Explanation: W3 Lab Assignment
Submit the .ipynb file to Canvas with file name w03_lab_lastname_firstname.ipynb.
In this lab, we will introduce pandas, matplotlib, and seaborn and continue to use the imdb.csv file from the last lab. There will be some exercises, and as usual, write your code in the empty cells to answer them.
Importing libraries
I think some of you have already used pandas. Pandas is a library for high-performance data analysis, and makes tedious jobs of reading, manipulating, and analyzing data super easy and nice. You can even plot directly using pandas. If you used R before, you'll see a lot of similarity between R's dataframe and pandas's DataFrame.
End of explanation
"""
%matplotlib inline
"""
Explanation: Matplotlib magic
Jupyter notebook provides several magic commands. These are commands that you can use only in the notebook (not in IDLE, for instance). One of the greatest magic commands is matplotlib inline, which displays plots within the notebook instead of creating figure files.
End of explanation
"""
import matplotlib.pyplot as plt
"""
Explanation: There are many ways to import matplotlib, but the most common way is:
End of explanation
"""
df = pd.read_csv('imdb.csv', delimiter='\t')
"""
Explanation: Q1: Revisiting W2 lab
Let's revisit last week's exercise with pandas. It's very easy to read CSV files with pandas, using the pandas.read_csv() function. This function has many, many options and it may be worthwhile to take a look at the available options. Things that you need to be careful about are:
delimiter or sep: the data file may use ',', tab, or any weird character to separate fields. You can't read data properly if this option is incorrect.
header: some data files have a "header" row that contains the names of the columns. If you read it as data or use the first row as the header, you'll have problems.
na_values or na_filter: often the dataset is incomplete and contains missing data (NA, NaN (not a number), etc.). It's very important to take care of them properly.
You don't need to create dictionaries and other data structures. Pandas just imports the whole table into a data structure called DataFrame. You can do all kinds of interesting manipulation with the DataFrame.
End of explanation
"""
df.head()
"""
Explanation: Let's look at the first few rows to get some sense of the data.
End of explanation
"""
df.head(2)
"""
Explanation: You can see more or fewer lines, of course.
End of explanation
"""
df['Year'].head(3)
"""
Explanation: You can extract one column by using a dictionary-like expression
End of explanation
"""
df[['Year','Rating']].head(3)
"""
Explanation: or select multiple columns
End of explanation
"""
df[:10]
"""
Explanation: To get the first 10 rows
End of explanation
"""
df[['Year','Rating']][:10]
"""
Explanation: We can also select both rows and columns. For example, to select the first 10 rows of the 'Year' and 'Rating' columns:
End of explanation
"""
df[:10][['Year','Rating']]
"""
Explanation: You can swap the order of rows and columns. But when you deal with large datasets, you may want to stick to this principle: try to reduce the size of the dataset you are handling as soon as possible, and as much as possible. For instance, if you have a billion rows with three columns, getting the small row slice (df[:10]) and working with this small slice can be much better than getting the column slice (df['Year']) and working with that slice (which still contains a billion items).
End of explanation
"""
print( min(df['Year']), df['Year'].min(), max(df['Year']), df['Year'].max() )
year_nummovies = df["Year"].value_counts()
year_nummovies.head()
"""
Explanation: It is very easy to answer the question of the number of movies per year. The value_counts() function counts how many times each data value (year) appears.
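Under the hood, value_counts() is doing the same job as a plain occurrence count. A stdlib-only illustration with toy data (not the imdb file):

```python
from collections import Counter

years = [1994, 1994, 1999, 1994, 1999, 2001]
counts = Counter(years)
print(counts.most_common())  # [(1994, 3), (1999, 2), (2001, 1)]
```

value_counts() returns the same tallies as a pandas Series, sorted by count in descending order.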
End of explanation """ print( np.mean(df['Rating']), np.mean(df['Votes']) ) """ Explanation: To calculate average ratings and votes End of explanation """ print( df['Rating'].mean() ) """ Explanation: or you can even do End of explanation """ geq = df['Year'] >= 1990 leq = df['Year'] <= 1999 movie_nineties = df[geq & leq] movie_nineties.head() """ Explanation: To get the median ratings of movies in 1990s, we first select only movies in that decade End of explanation """ print( movie_nineties['Rating'].median(), movie_nineties['Votes'].median() ) """ Explanation: Then, we can do the calculation End of explanation """ sorted_by_rating = movie_nineties.sort('Rating', ascending=False) sorted_by_rating[:10] """ Explanation: Finally, if we want to know the top 10 movies in 1990s, we can use the sort() function: End of explanation """ # implement here df[(df['Year']==1994)]['Rating'].describe([.1,0.9]) df['Rating'].median() """ Explanation: Exercise Calculate the following basic characteristics of ratings of movies only in 1994: 10th percentile, median, mean, 90th percentile. http://docs.scipy.org/doc/numpy/reference/generated/numpy.percentile.html http://pandas.pydata.org/pandas-docs/stable/text.html Write your code in the cell below End of explanation """ df['Year'].hist() """ Explanation: Q2: Basic plotting with pandas Pandas provides some easy ways to draw plots by using matplotlib. Dataframe object has several plotting functions. For instance, End of explanation """ # implement here plt.hist(df[(df['Year']>2000) & (df['Year']<2014)]['Rating'],bins = 10) """ Explanation: Exercise Can you plot the histogram of ratings of the movies between 2000 and 2014? End of explanation """ plt.hist(df['Rating'], bins=10) """ Explanation: Q3: Basic plotting with matplotlib Let's plot the histogram of ratings using the pyplot.hist() function. 
End of explanation
"""
# implement here
plt.hist(df[(df['Year']>2000) & (df['Year']<2014)]['Rating'], bins=20, facecolor='g')
plt.xlabel('bins')
plt.ylabel('# Ratings')
plt.title('Histogram of Rating Distribution for years 2000-2014')
plt.grid(True)
"""
Explanation: Exercise
Let's try to make some style changes to the plot:
change the color from blue to whatever you want
http://matplotlib.org/users/pyplot_tutorial.html#working-with-text
http://matplotlib.org/api/colors_api.html
add labels to the x and y axes
change the number of bins to 20
End of explanation
"""
import seaborn as sns
"""
Explanation: Q4: Basic plotting with Seaborn
Seaborn sits on top of matplotlib and makes it easier to draw statistical plots. Most plots that you create with Seaborn can be created with matplotlib; it just typically requires a lot more work. Be sure seaborn has been installed on your computer; otherwise run conda install seaborn
End of explanation
"""
plt.hist(df['Rating'], bins=10)
"""
Explanation: Let's do nothing and just run the histogram again
End of explanation
"""
sns.distplot(df['Rating'])
"""
Explanation: We can use the distplot() function to plot the histogram.
End of explanation
"""
# implement here
sns.distplot(df['Rating'], bins=10, kde=False)
plt.xlabel('bins')
plt.ylabel('# Ratings')
plt.title('Histogram of Rating Distribution for years 2000-2014')
"""
Explanation: Exercise
Read the document about the function and make the following changes:
http://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.distplot.html
change the number of bins to 10;
not to show kde;
End of explanation
"""
rdempsey/python-for-sharing
pandas-for-noobs/Three Pandas Tips for Pandas Noobs.ipynb
mit
# Import the Python libraries we need import pandas as pd # Define a variable for the accidents data file f = './data/accidents1k.csv' # Use read_csv() to import the data accidents = pd.read_csv(f, sep=',', header=0, index_col=False, parse_dates=True, tupleize_cols=False, error_bad_lines=False, warn_bad_lines=True, skip_blank_lines=True, low_memory=False ) # Run the head() command to see the top 5 rows of the data accidents.head() """ Explanation: Three Pandas Tips For Noobs For those new to Pandas, you'll learn a number of tips that will help with your data engineering and analysis tasks. You may find these buried in the documentation or StackOverflow posts, but I'm consolidating them here for you. Here's what's covered: Ensuring changes you make to DataFrames stick Applying a function with no arguments to a DataFrame Applying a function with arguments to a DataFrame Here's the link to the original dataset we're using: UK Road Safety Data Additional Resources This and much more is covered in my upcoming book: Python Business Intelligence Cookbook, now available for pre-order from Packt Publishing. Import The Data The first thing we need to do is import the data into a DataFrame. I suggest using the read_csv() method from Pandas for this. End of explanation """ # Fill in the NaN values and check the DataFrame accidents.fillna(value=0).head() """ Explanation: 1. Ensuring Your Changes Stick There are many ways to fill in missing (NaN) values in a DataFrame; some people use the mean of the column, others enter 0. You can do whatever you want. However, just because you tell Pandas to fill in the missing values doesn't mean the change will stick. Let's use the fillna() method of the DataFrame and see what happens. End of explanation """ accidents.head() """ Explanation: Hrm, it looks like the DataFrame is updated, but is it? I think not! End of explanation """ # Fill the NaN values and ensure the DataFrame is indeed updated. 
accidents.fillna(value=0, inplace=True) accidents.head() """ Explanation: What the heck?! The missing values haven't actually been updated. So how do we make the change stick? Using the inplace=True argument like so... End of explanation """ # Let's take a look at the Date column accidents['Date'].head() """ Explanation: Success! The DataFrame has now been updated. 2. Applying a Function With No Arguments to a DataFrame One of the reasons Pandas rocks is that you can apply a function to either a single column of a DataFrame or an entire DataFrame, using the apply() function. You'll be using this often, so here's how. End of explanation """ # Define a function to convert a string to a date. def convert_string_to_date(s): """ Given a string, use the to_datetime function of Pandas to convert it to a datetime, and then return it. """ return pd.to_datetime(s) # Apply the function to the Data column using the apply() function. # Note: we do not have to explicitly pass in the value in the row being processed. accidents['Date'] = accidents['Date'].apply(convert_string_to_date) # Let's check it out. accidents['Date'].head() """ Explanation: According to Pandas, the Date is an object, meaning it doesn't actually see it as a date. Let's change that. 
End of explanation """ # Create a few dicts and a DataFrame to hold the mappings for the accident data # Accident severity severity = { 1: 'fatal', 2: 'serious', 3: 'fairly serious' } # Day of Week days_of_week = { 1: 'Sunday', 2: 'Monday', 3: 'Tuesday', 4: 'Wednesday', 5: 'Thursday', 6: 'Friday', 7: 'Saturday', 0: 'Earlier this week' } # Road surfaces, updated to fit the sensationalism of a news broadcast road_surfaces = { 1: 'dry', 2: 'wet', 3: 'snow-covered', 4: 'frosty', 5: 'flooded', 6: 'oily', 7: 'muddy', -1: 'Data missing or out of range', } # Local Authority (District) - create a DataFrame from the CSV file f = './data/accidents1k.csv' # Use read_csv() to create a DataFrame from the local_authority_district mapping tab of the data dictionary. # There are almost 1000 districts, hence I put them into a CSV file. districts = pd.read_csv('./data/local_authority_district.csv', sep=',', header=0, index_col=0, parse_dates=False, tupleize_cols=False, error_bad_lines=False, warn_bad_lines=True, skip_blank_lines=True, low_memory=False ) # Define a function to create a one-sentence summary of the record. def create_summary(day_of_week, accident_severity, road_surface, local_authority_district): """ Create a one-sentence summary of the record. Parameters: integer values for the Day_of_Week, Accident_Severity, Road_Surface_Conditions and Local_Authority_(District) columns """ # Perform the value lookups in the dicts and DataFrame dow = days_of_week[day_of_week] sev = severity[accident_severity] road = road_surfaces[road_surface] lad = districts.loc[local_authority_district].label # If the day of week was specified use the first sentence variation, otherwise use the second # Yes, this is redundant and we could optimize it. I leave that to you! 
    if day_of_week != 0:
        return "On {} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)
    else:
        return "{} a {} accident occurred on a {} road in {}".format(dow, sev, road, lad)

# Create a new column in the DataFrame and fill it with the summary produced by the create_summary function
# Pass in the parameters needed to create the summary
accidents['summary'] = accidents.apply(lambda x: create_summary(x['Day_of_Week'],
                                                                x['Accident_Severity'],
                                                                x['Road_Surface_Conditions'],
                                                                x['Local_Authority_(District)']),
                                       axis=1)

# Let's see some results!
accidents['summary'].head()

# Let's view an entire summary
accidents['summary'][0]
"""
Explanation: Voila! Our data column is now a datetime.
3. Applying a Function With Arguments to a DataFrame
Along with applying a function to a single column, another common task is to create an additional column based on the values in two or more columns. In order to do that, we need to create a function that takes multiple parameters, and then apply it to the DataFrame. We'll be using the same apply() function we used in the previous tip, plus a little lambda magic.
End of explanation
"""
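The lambda-wrapping pattern described above is not pandas-specific. Stripped down to plain Python, with toy rows standing in for the accidents DataFrame, it looks like this:

```python
def create_summary(day, severity):
    # multi-argument function we want to apply to each row
    return "{} accident on {}".format(severity, day)

rows = [("Monday", "serious"), ("Friday", "fatal")]

# the lambda adapts each row to the function's signature, just like
# df.apply(lambda x: create_summary(x['Day'], x['Severity']), axis=1)
summaries = list(map(lambda r: create_summary(r[0], r[1]), rows))
print(summaries)  # ['serious accident on Monday', 'fatal accident on Friday']
```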
TheMitchWorksPro/DataTech_Playground
PY_Basics/TMWP_DictionaryBasics.ipynb
mit
# Ex39 in Learn Python the Hard Way: # https://learnpythonthehardway.org/book/ex39.html # edited, expanded, and made PY3.x compliant by Mitch before inclusion in this notebook # create a mapping of state to abbreviation states = { 'Oregon': 'OR', 'Florida': 'FL', 'California': 'CA', 'New York': 'NY', 'Michigan': 'MI' } # create a basic set of states and some cities in them cities = { 'CA': 'San Francisco', 'MI': 'Detroit', 'FL': 'Jacksonville' } # add some more cities cities['NY'] = 'New York' cities['OR'] = 'Portland' # print out some cities print('-' * 10) print("Two cities:") print("NY State has: %s" %cities['NY']) print("OR State has: %s" %cities['OR']) # print some states print('-' * 10) print("Abbreviations for Two States:") # PY 2.7 syntax from original code: print "Michigan's abbreviation is: ", states['Michigan'] print("Michigan's abbreviation is: %s" %states['Michigan']) print("Florida's abbreviation is: %s" %states['Florida']) # do it by using the state then cities dict print('-' * 10) print("State Abbreviation extracted from cities dictionary:") print("Michigan has: %s" %cities[states['Michigan']]) print("Florida has: %s" %cities[states['Florida']]) # print every state abbreviation print('-' * 10) print("Every State Abbreviation:") for state, abbrev in states.items(): print("%s is abbreviated %s" % (state, abbrev)) # print every city in state print('-' * 10) print("Every city in Every State:") for abbrev, city in cities.items(): print("%s has the city %s" %(abbrev, city)) # now do both at the same time print('-' * 10) print("Do Both at Once:") for state, abbrev in states.items(): print("%s state is abbreviated %s and has city %s" % ( state, abbrev, cities[abbrev])) print('-' * 10) """ Explanation: <div align="right">Env: Python [conda env:PY27_Test]</div> <div align="right">Env: Python [conda env:PY36] </div> Working With Dictionaries - The Basics These excercises come from multiple sources and show the basics of creating, modifying, and sorting 
dictionaries. All code was created in Python 2.7 and cross-tested in Python 3.6.
Quick Guide to the basics:
- Create Dictionary: dictionary = { key1:value, key2:value2 }
 - Example: myDict1 = { 'one':'first thing', 'two':'secondthing' }
 - Example: myDict2 = { 1:43, 2:600, 3:-1000.4 }
 - Example: myDict3 = { 1:"text", 2:345, 3:'another value' }
- Add to a dictionary: myDict1['newVal'] = 'something stupid'
 - Resulting Dictionary: myDict1 = { 'one':'first thing', 'two':'secondthing', 'newVal':'something stupid' }
- Remove from dictionary: del myDict1['newVal']
 - now myDict1 is back the way it was: { 'one':'first thing', 'two':'secondthing' }
In This Document:
- Immediately Below: modified "Learn Python the Hard Way" dictionary example (covers much of the basics)
- .get() - safely retrieve dict value
- sorting
- When to use different sorted container options
- using comprehensions w/ dictionaries
- over-riding "missing" key
- find nth element
- nested dictionaries
- more dictionary and nested dictionary resources
- browse below for more topics that may not be in this list ...
End of explanation
"""
# ex 39 Python the Hard Way modified code continued ...
# safely get an abbreviation by state that might not be there
state = states.get('Texas')

if not state:
    print("Sorry, no Texas.")

# get a city with a default value
city = cities.get('TX', 'Does Not Exist')
print("The city for the state 'TX' is: %s" % city)
print("The city for the state 'FL' is: %s" % states.get('Florida'))

city2 = states.get('Hawaii')
print("city2: %s" %city2)

if city2 == None:
    city2 = 'Value == None'
elif city2 == '':
    city2 = 'Value is empty ""'
elif not city2:
    city2 = 'Value Missing (Passed not test)'
else:
    city2 = 'No Such Value'

print("The city for the state 'HI' is: %s" % city2)
print("These commands used .get() to safely retrieve a value")

# more tests on above code from Learn Python the Hard Way:
print(not city2)

# tests that produce an error - numerical indexing has no meaning in dictionaries:
# print states[1][1]

# what happens if all keys are not unique?
foods = { 'fruit': 'banana',
          'fruit': 'apple',
          'meat': 'beef' }

for foodType, indivFood in foods.items():
    print("%s includes %s" % (foodType, indivFood))

# answer:  does not happen.
2nd attempt to use same key over-writes the first

# remove elements from a dictionary
del foods['meat']

# add an element to a dictionary
foods['vegetables'] = 'carrot'
foods['meats'] = 'chicken'

# change an element in a dictionary
foods['vegetables'] = 'corn'

foods

# from MIT Big Data Class:
# Associative Arrays ==> Called "Dictionaries" or "Maps" in Python
# each value has a key that you can use to find it - { Key:Value }
super_heroes = {'Spider Man' : 'Peter Parker',
                'Super Man' : 'Clark Kent',
                'Wonder Woman': 'Diana Prince',
                'The Flash' : 'Barry Allen',
                'Professor X' : 'Charles Xavier',
                'Wolverine' : 'Logan'}

print("%s %s" %("len(super_heroes):", len(super_heroes)))
print("%s %s" %("Secret Identity for The Flash:", super_heroes['The Flash']))

del super_heroes['Wonder Woman']
print("%s %s" %("len(super_heroes):", len(super_heroes)))
print(super_heroes)

super_heroes['Wolverine'] = 'John Logan'
print("Secret Identity for Wolverine:", super_heroes.get("Wolverine"))

print("Keys ... then Values (for super_heroes):")
print(super_heroes.keys())
print(super_heroes.values())
"""
Explanation: <a id="get" name="get"></a>
.get() examples
End of explanation
"""
# list of dictionaries:
FoodList = [foods, {'meats':'beef', 'fruit':'banana', 'vegetables':'broccoli'}]
print(FoodList[0])
print(FoodList[1])

# dictionary of dictionaries (sometimes called "nested dictionary"):
# note: this is an example only.
In the real world, since FoodList is inclusive of foods, you probably would not include both
# uniform structures (same number of levels across all elements) is also advisable if possible
nestedDict = { 'heroes':super_heroes, 'foods': foods, 'complex_foods':FoodList }
print(nestedDict['heroes'])
print('-'*72)
print(nestedDict['complex_foods'])
"""
Explanation: <a id="nested" name="nested"></a>
nested dictionaries
End of explanation
"""
# Help on Collections Objects including Counter, OrderedDict, deque, etc:
# https://docs.python.org/2/library/collections.html

# regular dictionary does not necessarily preserve order (things added in randomly?)
# original order of how you add elements is preserved in OrderedDict

from collections import OrderedDict
myOrdDict = OrderedDict({'banana': 3, 'apple': 4, 'pear': 1, 'orange': 2})
print(myOrdDict)

myOrdDict['pork belly'] = 7
print(myOrdDict)

myOrdDict['sandwich'] = 5
print(myOrdDict)

myOrdDict['hero'] = 5
print(myOrdDict)

# sorting the ordered dictionary ...

# dictionary sorted by key
# replacing original OrderedDict w/ results
myOrdDict = OrderedDict(sorted(myOrdDict.items(), key=lambda t: t[0]))
print("myOrdDict (sorted by key):\n %s" %myOrdDict)

# dictionary sorted by value
myOrdDict2 = OrderedDict(sorted(myOrdDict.items(), key=lambda t: t[1]))
print("myOrdDict2 (sorted by value):\n %s" %myOrdDict2)

# dictionary sorted by length of the key string
myOrdDict3 = OrderedDict(sorted(myOrdDict.items(), key=lambda t: len(t[0])))
print("myOrdDict3 (sorted by length of key):\n %s" %myOrdDict3)

# collections.OrderedDict(sorted(dictionary.items(), reverse=True))
# pd.Series(OrderedDict(sorted(browser.items(), key=lambda v: v[1])))

# changing sort order to reverse key sort:
myOrdDict3 = OrderedDict(sorted(myOrdDict.items(), reverse=True))
print("myOrdDict3 (reverse key sort):\n %s" %myOrdDict3)
# testing of above strategy ...
usually works but encountered cases where it failed for no known reason
# lambda approach may be more reliable:
import pandas as pd

# value sort as pandas series:
myOrdDict4 = pd.Series(OrderedDict(sorted(myOrdDict.items(), key=lambda v: v[1])))
print("myOrdDict4 (value sort / alternate method):\n %s" %myOrdDict4)

# value sort in reverse order:
myOrdDict5 = OrderedDict(sorted(myOrdDict.items(), key=lambda t: (-t[1],t[0])))
print("myOrdDict5 (sorted by value in reverse order):\n %s" %myOrdDict5)

# Help on Collections Objects including Counter, OrderedDict, deque, etc:
# https://docs.python.org/2/library/collections.html

# sample using a list:
# for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
#     cnt[word] += 1

from collections import Counter
cnt = Counter()

for num in myOrdDict.values():
    cnt[num] +=1

print(cnt)

# http://stackoverflow.com/questions/11089655/sorting-dictionary-python-3
# another approach proposed in 2013 on Stack Overflow (but this may have been newer than OrderedDict at the time)

'''
Help topic recommends this approach:

pip install sortedcontainers

Then:

from sortedcontainers import SortedDict
myDic = SortedDict({10: 'b', 3:'a', 5:'c'})
sorted_list = list(myDic.keys())
'''

print("conda install sortedcontainers is available in Python 2.7 and 3.6 as of April 2017")

# some dictionaries to work with ...
super_heroes  # created earlier
super_heroes['The Incredible Hulk'] = 'Bruce Banner'
super_heroes  # seems to alpha sort on keys anyway

# quick case study exploring another means of reverse sorting (from Stack Overflow):
reversed_tst = OrderedDict(list(super_heroes.items())[::-1])
reversed_tst  # note how in this instance, we don't get what we expected
              # this example might not be advisable ...

# however ... if we combine methodologies:
reversed_tst = OrderedDict(sorted(super_heroes.items(), key=lambda v: v[1])[::-1])
reversed_tst  # now the values are in reverse order ...

# however ...
if we combine methodologies: reversed_tst = OrderedDict(sorted(super_heroes.items(), key=lambda k: k)[::-1]) reversed_tst # now the keys are in reverse order ... fruitDict = {3: 'banana', 4: 'pear', 1: 'apple', 2: 'orange'} fruitDict # dictionaries appear to alpha sort at least on output making it hard to spot the effects below # help on library: # http://www.grantjenks.com/docs/sortedcontainers/sorteddict.html # test sample code from Stack Overflow post: from sortedcontainers import SortedDict myDic = SortedDict({10: 'b', 3:'a', 5:'c'}) sorted_list = list(myDic.keys()) print(myDic) print(sorted_list) fruitDict = SortedDict(fruitDict) sorted_list = list(fruitDict.keys()) print(fruitDict) print(sorted_list) """ Explanation: <a id="nest2" name="nest2"></a> Working With Dictionaries and Nested Dictionaries - Helpful Code This section has additional resources for working with dictionaries and nested dictionaries: FileDataObj code - the FileDataObject stores contents from a file in a nested dictionary and explores sorting and summarising the nested dict dictionary and nested dictionary functions in a PY module - merging dictionaries, adding to a nested dictionary, summarizing a nested dictionary, etc. 
(some of this code was created from the previous example)
<a id="sorting" name="sorting"></a>
Sorting
End of explanation
"""
# MIT Big Data included a demo of this type of index/access to a dictionary in a Python 2.7 notebook
# the code is organized in a try-except block here so it won't halt the notebook if converted to Python 3.6

def print_1st_keyValue(someDict):
    try:
        print(someDict.values()[0])   # only works in Python 2.7
    except Exception as ee:
        print(str(type(ee)) + ": " + str(ee))
        # error from PY 3.6:
        # <class 'TypeError'>: 'dict_values' object does not support indexing
    finally:
        try:
            print(someDict.keys()[0])   # only works in Python 2.7
        except Exception as ee:
            print(str(type(ee)) + ": " + str(ee))
            # error from PY 3.6:
            # <class 'TypeError'>: 'dict_keys' object does not support indexing

print_1st_keyValue(super_heroes)

print_1st_keyValue(myOrdDict)  # run same test on ordered dictionaries
                               # failed in Python 3.6, worked in Python 2.7
# reminder: syntax is orderedDict.values()[0], orderedDict.keys()[0]

print_1st_keyValue(fruitDict)  # run same test on sorted dictionary -
                               # this works in Python 3.6 and 2.7
# reminder: syntax is sortedDict.values()[0], sortedDict.keys()[0]
"""
Explanation: <a id="when" name="when"></a>As per the examples above ...
So when to do what?
- OrderedDict: will store whatever you put into it in whatever order you first record the data (maintaining that order)
- SortedDict: by default will alpha sort the data (over-riding original order) and maintain it for you in sorted order
- Dict: Don't care about storing it in order?
just sort and output the results without storing it in a new container
**Final note:** only SortedDict allows indexing by numerical order on the data (by-passing keys) under both Python 2.7 and 3.6 (as shown in the next section)
<a id="nth_elem" name="nth_elem"></a>
Find the nth element in a dictionary
End of explanation
"""
# dictionary comprehension
[ k for k in fruitDict if k > 2 ]

[ fruitDict[k] for k in fruitDict if k > 1 ]

newDict = { k*2:'fruit - '+fruitDict[k] for k in fruitDict if k > 1 and len(fruitDict[k]) >=6}
print(newDict)

type(newDict)
"""
Explanation: <a id="comprehensions" name="comprehensions"></a>
Dictionary Comprehensions
End of explanation
"""
class KeyDict(dict):
    def __missing__(self, key):
        #self[key] = key   # uncomment if desired behavior is to add keys when they are not found (w/ key as value)
        #this version returns the key that was not found
        return key

kdTst = KeyDict(super_heroes)
print(kdTst['The Incredible Hulk'])
print(kdTst['Ant Man'])   # value not found so it returns itself as per __missing__ over-ride

help(SortedDict)
"""
Explanation: <a id="misskey" name="misskey"></a>
keyDict object
End of explanation
"""
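The `__missing__` hook shown above only echoes the missing key back and stores nothing; the closely related stdlib tool, `collections.defaultdict`, goes one step further and inserts a default value on a miss. A minimal self-contained sketch of the difference (the sample data here is illustrative):

```python
from collections import defaultdict

class EchoDict(dict):
    # same idea as the KeyDict above: a failed lookup returns the
    # missing key instead of raising KeyError (and stores nothing)
    def __missing__(self, key):
        return key

kd = EchoDict({'Spider Man': 'Peter Parker'})
print(kd['Spider Man'])   # normal lookup
print(kd['Ant Man'])      # missing key -> echoed back, dict unchanged

# defaultdict instead *inserts* the factory's result on a miss
dd = defaultdict(str, {'Spider Man': 'Peter Parker'})
dd['Ant Man']             # miss -> inserts '' under 'Ant Man'
print('Ant Man' in dd)    # True: the lookup added the key
print('Ant Man' in kd)    # False: __missing__ (as written) never stores
```

Pick the subclass-with-`__missing__` route when lookups should stay side-effect free, and `defaultdict` when accumulating (as the Counter examples above do implicitly).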
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/computer_vision_fun/labs/classifying_images_using_dropout_and_batchnorm_layer.ipynb
apache-2.0
import tensorflow as tf print(tf.version.VERSION) """ Explanation: Classifying Images using Dropout and Batchnorm Layer Introduction In this notebook, you learn how to build a neural network to classify the tf-flowers dataset using dropout and batchnorm layer. Learning objectives Define Helper Functions. Apply dropout and batchnorm layer. Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete this notebook first and then review the solution notebook. End of explanation """ # Helper functions def training_plot(metrics, history): f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5)) for idx, metric in enumerate(metrics): ax[idx].plot(history.history[metric], ls='dashed') ax[idx].set_xlabel("Epochs") ax[idx].set_ylabel(metric) ax[idx].plot(history.history['val_' + metric]); ax[idx].legend([metric, 'val_' + metric]) # Call model.predict() on a few images in the evaluation dataset def plot_predictions(filename): f, ax = plt.subplots(3, 5, figsize=(25,15)) dataset = (tf.data.TextLineDataset(filename). 
map(decode_csv)) for idx, (img, label) in enumerate(dataset.take(15)): ax[idx//5, idx%5].imshow((img.numpy())); batch_image = tf.reshape(img, [1, IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS]) batch_pred = model.predict(batch_image) pred = batch_pred[0] label = CLASS_NAMES[label.numpy()] pred_label_index = tf.math.argmax(pred).numpy() pred_label = CLASS_NAMES[pred_label_index] prob = pred[pred_label_index] ax[idx//5, idx%5].set_title('{}: {} ({:.4f})'.format(label, pred_label, prob)) def show_trained_weights(model): # CLASS_NAMES is ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] LAYER = 1 # Layer 0 flattens the image, layer=1 is the first dense layer WEIGHT_TYPE = 0 # 0 for weight, 1 for bias f, ax = plt.subplots(1, 5, figsize=(15,15)) for flower in range(len(CLASS_NAMES)): weights = model.layers[LAYER].get_weights()[WEIGHT_TYPE][:, flower] min_wt = tf.math.reduce_min(weights).numpy() max_wt = tf.math.reduce_max(weights).numpy() flower_name = CLASS_NAMES[flower] print("Scaling weights for {} in {} to {}".format( flower_name, min_wt, max_wt)) weights = (weights - min_wt)/(max_wt - min_wt) ax[flower].imshow(weights.reshape(IMG_HEIGHT, IMG_WIDTH, 3)); ax[flower].set_title(flower_name); import matplotlib.pylab as plt import numpy as np import tensorflow as tf IMG_HEIGHT = 224 IMG_WIDTH = 224 IMG_CHANNELS = 3 def read_and_decode(filename, reshape_dims): # Read the file img = tf.io.read_file(filename) # Convert the compressed string to a 3D uint8 tensor. img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS) # Use `convert_image_dtype` to convert to floats in the [0,1] range. img = tf.image.convert_image_dtype(img, tf.float32) # Resize the image to the desired size. 
# TODO 1 -- Your code here CLASS_NAMES = [item.numpy().decode("utf-8") for item in tf.strings.regex_replace( tf.io.gfile.glob("gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/*"), "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/", "")] CLASS_NAMES = [item for item in CLASS_NAMES if item.find(".") == -1] print("These are the available classes:", CLASS_NAMES) # the label is the index into CLASS_NAMES array def decode_csv(csv_row): record_defaults = ["path", "flower"] filename, label_string = tf.io.decode_csv(csv_row, record_defaults) img = read_and_decode(filename, [IMG_HEIGHT, IMG_WIDTH]) label = tf.argmax(tf.math.equal(CLASS_NAMES, label_string)) return img, label """ Explanation: Define Helper Functions Reading and Preprocessing image data End of explanation """ def train_and_evaluate(batch_size = 32, lrate = 0.0001, l1 = 0, l2 = 0.001, dropout_prob = 0.4, num_hidden = [64, 16]): regularizer = tf.keras.regularizers.l1_l2(l1, l2) train_dataset = (tf.data.TextLineDataset( "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/train_set.csv"). map(decode_csv)).batch(batch_size) eval_dataset = (tf.data.TextLineDataset( "gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/eval_set.csv"). 
        map(decode_csv)).batch(32) # this doesn't matter

    # NN with multiple hidden layers
    layers = [tf.keras.layers.Flatten(
        input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS),
        name='input_pixels')]
    for hno, nodes in enumerate(num_hidden):
        layers.extend([
            tf.keras.layers.Dense(nodes,
                                  kernel_regularizer=regularizer,
                                  name='hidden_dense_{}'.format(hno)),
            tf.keras.layers.BatchNormalization(scale=False, # ReLU
                                               center=False, # have bias in Dense
                                               name='batchnorm_dense_{}'.format(hno)),
            #move activation to come after batchnorm
            tf.keras.layers.Activation('relu', name='relu_dense_{}'.format(hno)),
            # TODO 2 -- Your code here
        ])

    layers.append(
        tf.keras.layers.Dense(len(CLASS_NAMES),
                              kernel_regularizer=regularizer,
                              activation='softmax',
                              name='flower_prob')
    )

    model = tf.keras.Sequential(layers, name='flower_classification')
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lrate),
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(
                      from_logits=False),
                  metrics=['accuracy'])
    print(model.summary())
    history = model.fit(train_dataset, validation_data=eval_dataset, epochs=10)
    training_plot(['loss', 'accuracy'], history)
    return model

model = train_and_evaluate(dropout_prob=0.4)
"""
Explanation: Apply dropout and batchnorm layer
A deep neural network (DNN) is a neural network with more than one hidden layer. Each time you add a layer, the number of trainable parameters increases. Therefore, you need a larger dataset. You still have only 3700 flower images, which might cause overfitting. Dropout is a regularization technique used to prevent overfitting in the model.
Batch normalization is a layer that allows every layer of the network to do learning more independently. The layer is added to the sequential model to standardize the input or the outputs.
Add a dropout and batchnorm layer after each of the hidden layers.
Dropout
Dropout is one of the oldest regularization techniques in deep learning.
At each training iteration, it drops random neurons from the network with a probability p (typically 25% to 50%). In practice, neuron outputs are set to 0. The net result is that these neurons will not participate in the loss computation this time around and they will not get weight updates. Different neurons will be dropped at each training iteration. Batch normalization Our input pixel values are in the range [0,1] and this is compatible with the dynamic range of the typical activation functions and optimizers. However, once we add a hidden layer, the resulting output values will no longer lie in the dynamic range of the activation function for subsequent layers. When this happens, the neuron output is zero, and because there is no difference by moving a small amount in either direction, the gradient is zero. There is no way for the network to escape from the dead zone. To fix this, batch norm normalizes neuron outputs across a training batch of data, i.e. it subtracts the average and divides by the standard deviation. This way, the network decides, through machine learning, how much centering and re-scaling to apply at each neuron. In Keras, you can selectively use one or the other: tf.keras.layers.BatchNormalization(scale=False, center=True) When using batch normalization, remember that: 1. Batch normalization goes between the output of a layer and its activation function. So, rather than set activation='relu' in the Dense layer’s constructor, we’d omit the activation function, and then add a separate Activation layer. 2. If you use center=True in batch norm, you do not need biases in your layer. The batch norm offset plays the role of a bias. 3. If you use an activation function that is scale-invariant (i.e. does not change shape if you zoom in on it) then you can set scale=False. ReLu is scale-invariant. Sigmoid is not. End of explanation """
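The centering and re-scaling described above can be sketched in plain Python: a simplified forward pass for a single feature over one batch, ignoring the learned scale (gamma) and offset (beta) that the real `tf.keras.layers.BatchNormalization` also applies, and the moving averages it keeps for inference:

```python
import math

def batchnorm_forward(batch, eps=1e-5):
    # normalize one feature across a batch: subtract the batch mean and
    # divide by the batch standard deviation (simplified -- a real layer
    # also applies a learned scale gamma and offset beta on top of this)
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [(x - mean) / math.sqrt(var + eps) for x in batch]

# activations that drifted far outside the activation's dynamic range ...
activations = [10.0, 20.0, 30.0, 40.0]
# ... come back roughly zero-mean / unit-variance after normalization
print(batchnorm_forward(activations))
```

This is why the normalized outputs land back in the range where ReLU (and especially sigmoid) have useful gradients.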
dnxbjyj/python-basic
libs/ConfigParser/handout.ipynb
mit
import ConfigParser

cf = ConfigParser.ConfigParser()
cf.read('./sys.conf')
"""
Explanation: Reading and writing conf configuration files with the ConfigParser module
ConfigParser is a module built into Python for reading configuration files. It makes reading and modifying configuration files very convenient; this article introduces its basic usage.
Preparing the data
Assume there is a configuration file named sys.conf in the current directory with the following content:
```bash
[db]
db_host=127.0.0.1
db_port=22
db_user=root
db_pass=root123

[concurrent]
thread = 10
processor = 20
```
Note: each configuration item in the file is really a key-value pair separated by an equals sign '='. Whitespace on either side of the equals sign is stripped automatically during parsing, but whitespace before the key is not allowed and raises an error.
About configuration files
A configuration (conf) file is mostly structured as key-value pairs, like the sys.conf file above.
A conf file has two levels of structure: the text inside [] is the section name, and the key-value pairs listed below it are the items, i.e. the key and value of each configuration entry.
Initialize a ConfigParser instance
End of explanation
"""
s = cf.sections()
print '【Output】'
print s
"""
Explanation: Read the list of all sections
A section is the content inside [].
End of explanation
"""
opt = cf.options('concurrent')
print '【Output】'
print opt
"""
Explanation: Read the list of option keys under a given section
The options are the keys of the key-value pairs under a section.
End of explanation
"""
items = cf.items('concurrent')
print '【Output】'
print items
"""
Explanation: Get the list of key-value pairs under a given section
End of explanation
"""
db_host = cf.get('db','db_host')
db_port = cf.getint('db','db_port')
thread = cf.getint('concurrent','thread')

print '【Output】'
print db_host,db_port,thread
"""
Explanation: Read configuration values as a specified data type
The cf object has four methods -- get(), getint(), getboolean() and getfloat() -- for reading configuration items of different data types.
End of explanation
"""
cf.set('db','db_pass','newpass')

# the change must be written back to the file to take effect
with open('sys.conf','w') as f:
    cf.write(f)
"""
Explanation: Modify the value of a configuration item
For example, to change the database password:
End of explanation
"""
cf.add_section('log')
cf.set('log','name','mylog.log')
cf.set('log','num',100)
cf.set('log','size',10.55)
cf.set('log','auto_save',True)
cf.set('log','info','%(bar)s is %(baz)s!')

# likewise, the change must be written back to take effect
with open('sys.conf','w') as f:
    cf.write(f)
"""
Explanation: Add a section
End of explanation
"""
cf.remove_section('log')

# likewise, the change must be written back to take effect
with open('sys.conf','w') as f:
    cf.write(f)
"""
Explanation: After running the code above, the sys.conf file gains a new section with the following content:
```bash
[log]
name = mylog.log
num = 100
size = 10.55
auto_save = True
info = %(bar)s is %(baz)s!
```
Remove a section
End of explanation
"""
cf.remove_option('db','db_pass')

# likewise, the change must be written back to take effect
with open('sys.conf','w') as f:
    cf.write(f)
"""
Explanation: Remove an option
End of explanation
"""
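For reference, under Python 3 the module is renamed to lowercase `configparser`; the same read/modify/write cycle can be sketched self-contained against an in-memory string instead of sys.conf (note that Python 3's `set()` requires string values, unlike the 100/10.55/True literals used above):

```python
import configparser
import io

cf = configparser.ConfigParser()
cf.read_string("[db]\ndb_host = 127.0.0.1\ndb_port = 22\n")

cf.set('db', 'db_pass', 'newpass')     # modify (or add) an option
cf.add_section('log')                  # add a section
cf.set('log', 'name', 'mylog.log')     # values must be str in Python 3

buf = io.StringIO()                    # write to a buffer instead of a file
cf.write(buf)
print(buf.getvalue())

cf.remove_option('db', 'db_pass')      # remove an option
cf.remove_section('log')               # remove a section
```

The typed getters (`getint()`, `getfloat()`, `getboolean()`) work the same way as in the Python 2 examples above.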
rsignell-usgs/ipython-notebooks
files/ncSOS_and_OWSlib.ipynb
unlicense
%matplotlib inline from owslib.sos import SensorObservationService import pdb from owslib.etree import etree import pandas as pd import datetime as dt import numpy as np url = 'http://sdf.ndbc.noaa.gov/sos/server.php?request=GetCapabilities&service=SOS&version=1.0.0' ndbc = SensorObservationService(url) # usgs woods hole # buoy data (single current meter) url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-AA.cdf' usgs = SensorObservationService(url) contents = usgs.contents usgs.contents off = usgs.offerings[1] off.name off.response_formats off.observed_properties off.procedures # the get observation request below works. How can we recreate this using OWSLib? # http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-A1H.cdf?service=SOS&version=1.0.0&request=GetObservation&responseFormat=text%2Fxml%3Bsubtype%3D%22om%2F1.0.0%22&offering=1211-A1H&observedProperty=u_1205&procedure=urn:ioos:station:gov.usgs:1211-A1H #pdb.set_trace() response = usgs.get_observation(offerings=['1211-AA'], responseFormat='text/xml;subtype="om/1.0.0"', observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'], procedure='urn:ioos:station:gov.usgs:1211-AA') print(response[0:4000]) # usgs woods hole ADCP data # url='http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/9111aqd-a.nc' # adcp = SensorObservationService(url) root = etree.fromstring(response) print(root) # root.findall(".//{%(om)s}Observation" % root.nsmap ) values = root.find(".//{%(swe)s}values" % root.nsmap ) date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v)) for d,v in [l.split(',') for l in values.text.split()]] ) ts = pd.Series(date_value[:,1],index=date_value[:,0]) ts.plot(figsize=(12,4), grid='on'); """ Explanation: Accessing ncSOS with OWSLib We have an ncSOS server with a get observation example that works: 
http://geoport-dev.whoi.edu/thredds/sos/usgs/data2/notebook/1211-AA.cdf?service=SOS&version=1.0.0&request=GetObservation&responseFormat=text%2Fxml%3Bsubtype%3D%22om%2F1.0.0%22&offering=1211-AA&observedProperty=http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity&procedure=urn:ioos:station:gov.usgs.cmgp:1211-AA
But can we formulate, request and process this same query (and others like it) using OWSLib?
End of explanation
"""
start = '1977-01-03T00:00:00Z'
stop = '1977-01-07T00:00:00Z'

response = usgs.get_observation(offerings=['1211-AA'],
                                responseFormat='text/xml;subtype="om/1.0.0"',
                                observedProperties=['http://mmisw.org/ont/cf/parameter/eastward_sea_water_velocity'],
                                procedure='urn:ioos:station:gov.usgs:1211-AA',
                                eventTime='{}/{}'.format(start,stop))

root = etree.fromstring(response)
# re-extract the <swe:values> payload from this response (the previous cell's
# `values` variable would otherwise be reused, plotting the unfiltered data)
values = root.find(".//{%(swe)s}values" % root.nsmap )

date_value = np.array( [ (dt.datetime.strptime(d,"%Y-%m-%dT%H:%M:%SZ"),float(v))
                        for d,v in [l.split(',') for l in values.text.split()]] )

ts = pd.Series(date_value[:,1],index=date_value[:,0])
ts.plot(figsize=(12,4), grid='on');
"""
Explanation: Now try setting time range via eventTime.
End of explanation
"""
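The `date_value` parsing step can be exercised on its own: the `<swe:values>` payload is whitespace-separated "time,value" pairs. A self-contained sketch in which the raw string is a made-up stand-in for a live ncSOS response:

```python
import datetime as dt

# a fabricated <swe:values> payload: whitespace-separated "time,value" pairs
raw = ("1977-01-03T00:00:00Z,0.12 "
       "1977-01-03T01:00:00Z,0.08 "
       "1977-01-03T02:00:00Z,-0.05")

# same comprehension as in the notebook cells above, minus numpy/pandas
date_value = [(dt.datetime.strptime(d, "%Y-%m-%dT%H:%M:%SZ"), float(v))
              for d, v in [l.split(',') for l in raw.split()]]

for t, v in date_value:
    print(t.isoformat(), v)
```

From here, wrapping the pairs in `pd.Series(values, index=times)` gives the time series that the notebook plots.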